Highlights of YaST Development Sprints 99 and 100

One hundred development sprints, that’s a nice round number… and a good moment to rethink the way we write and publish our reports.

Yes, you read that right. This post will be the last one following our traditional format, assuming something can already be called “traditional” after four and a half years. As we will explain at the end of this post, subsequent reports will look more like a digest with links to further information and less like a traditional blog post that tries to tell a story.

But the Age of the Digest has not come yet. Let’s close the Age of the Stories with a last blog post covering several topics:

  • A better editor for the <partitioning> section of AutoYaST
  • Improved YaST support for advanced LVM and Btrfs features
  • Ensuring YaST is not left behind in the migration to /usr/etc
  • Taking XML processing in YaST to the next level

Many of those topics are clearly oriented toward the mid-term future and, as such, it will take some time until they reach stable versions of openSUSE Leap or SUSE Linux Enterprise. Unless stated otherwise, the changes described in this post will only be visible in the short term for openSUSE Tumbleweed users.

AutoYaST UI: Improve Editing Drives

Let’s start with an exception to the rule just mentioned; that is, something many users will see soon on their systems, since we plan to release it as a maintenance update for both openSUSE Leap 15.1 and 15.2 and the corresponding SLE versions.

As part of the initiative to modernize AutoYaST, the user interface for creating or editing the profile’s partitioning section has finally been revamped. That means no more missing storage features such as Bcache, RAID, Btrfs multi-device file systems, etc.

AutoYaST UI: drive section

Apart from allowing every attribute supported by the AutoYaST partitioning section to be set, during the rewrite we tried to keep the module as simple as possible to ease user interaction: related fields are grouped together so the interface does not feel crammed, and warnings and confirmation pop-ups are no longer shown for every single change.

AutoYaST UI: partition section, general

Of course, there is still room for improvement, but at least we took the first step by getting rid of the complex and broken old UI and starting to work on a simpler one.

AutoYaST UI: partition section, usage

Recognizing LVM Advanced Features

As you know, YaST can be used to configure storage setups combining several technologies like MD RAID, bcache, Btrfs and LVM. Not every single feature of each of those technologies can be tweaked using YaST, since that would lead to a completely unmanageable interface and a hardly predictable behavior.

But even if YaST cannot create certain advanced configurations, it should be able to recognize them, display them in the interface and allow simple modifications.

In the last couple of sprints we have focused on teaching YaST how to deal with advanced LVM features. Now it is able to detect and use LVM RAID, mirrored logical volumes and LVM snapshots, both thick and thin.

YaST does not allow creating such setups, and we don’t plan to support that in the foreseeable future. But users who already rely on those LVM features will now be able to use YaST for some operations and to get a more complete and accurate picture of their storage configuration. In addition, the partial support for such technologies makes the upgrade process easier for systems using them.

Probing Btrfs Snapshot Relations and Two Peculiarities

On a purely technical level, all the additions explained above imply adding new possibilities to the so-called devicegraph used by libstorage-ng to represent any system. That includes representing LVM snapshots and the relations between origins and snapshots. With that done, it should be easy enough to go one step further and also recognize and represent those relations for Btrfs subvolumes… except for two peculiarities.

The first one is that a Btrfs snapshot can be both the child and the snapshot of the same subvolume. Suppose we have a Btrfs file system mounted at /test. First we create a subvolume and then a snapshot of it:

cd /test
btrfs subvolume create origin
btrfs subvolume snapshot origin origin/snapshot

Now “snapshot” is a child of “origin”, since it is placed directly under “origin” in the directory structure. In the devicegraph this means we have parallel edges (the gray one for the child relation and the green one for the snapshot):

Btrfs subvolumes relations

The second peculiarity comes from the freedom of btrfs to rename and move subvolumes. We can move “origin” under “snapshot”:

mv origin/snapshot .
mv origin snapshot
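
To double-check that the snapshot relation survives these moves, you can list the subvolumes with their UUIDs (standard btrfs-progs options: -u prints each subvolume’s UUID, -q the parent UUID it was snapshotted from):

btrfs subvolume list -qu /test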

The resulting devicegraph has a cycle:

Btrfs subvolumes loop

Until now that could not happen, and some algorithms assumed that the devicegraph is always a directed acyclic graph (DAG). To keep those algorithms working, we let them operate on a filtered devicegraph that hides the snapshot relations. Such filters were also recently added to libstorage-ng for LVM snapshots.

Now that libstorage-ng contains the mechanisms to deal with all these circumstances, we can think about offering advanced Btrfs options in YaST. But that will take some time, since it comes with several challenges, the design of the user interface not being exactly the smallest one. Meanwhile, you can play around with this new libstorage-ng feature using the “probe” command included in the package “libstorage-ng-utils”, as long as you enable the experimental feature with the new “YAST_BTRFS_SNAPSHOT_RELATIONS” environment variable:

YAST_BTRFS_SNAPSHOT_RELATIONS=yes /usr/lib/libstorage-ng/utils/probe --display

Automatic Check for New /usr/etc/ Files

As we have already mentioned in previous reports, more and more parts of the distribution are changing the way they organize their configuration files. Things are moving from single configuration files in /etc to a layered structure that starts at /usr/etc and includes multiple files with different orders of precedence.

The YaST developers need to know when a file location changes so we can adapt YaST to read from and write to the correct location. Otherwise it could easily happen that changes done in YaST would be ignored and we would get bug reports later. Therefore we implemented a regular check which scans all the /etc configuration files in Tumbleweed and reports new, unknown files.

The script runs automatically once a week on Travis. It downloads the repository metadata file, which contains a list of the files from all packages. The list also includes files marked as “ghost”; those files are not part of any package but might be created by an RPM post-install script or by the user. We then simply compare the found files against a known list.

Some implementation details of that script may be interesting for our more technical readers, since it uncompresses a *.xml.gz file and parses the XML on the fly while downloading. That means it neither needs to temporarily save the file to disk nor load it whole into memory. That is really important in this case because the unpacked XML is huge: more than 650 MB!
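
As a rough illustration of the same streaming idea (a minimal shell sketch, not the actual YaST script; the repository URL and the grep patterns are assumptions for this example):

REPO="https://download.opensuse.org/tumbleweed/repo/oss"
# Find the name of the compressed filelists metadata in the repository index
FILELISTS=$(curl -s "$REPO/repodata/repomd.xml" \
  | grep -oE '[^"]*-filelists\.xml\.gz' | head -n1)
# Decompress and scan the stream on the fly; the ~650 MB of uncompressed
# XML never needs to touch the disk or fit into memory
curl -s "$REPO/$FILELISTS" | zcat \
  | grep -oE '<file[^>]*>/(usr/)?etc/[^<]+'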

Improvements in YaST’s XML Parser

And talking about technical details, you may remember that a couple of sprints ago we mentioned we had started a small side project to improve YaST’s XML parser. Now we can report significant progress on that front.

Let’s first clarify that we are not trying to re-invent the wheel by writing just another low-level parser to deal with XML internals. We are relying on existing parsers to rethink the way YaST serializes data into XML and back. All that with the following goals:

  • Ensure consistency of information when data is transformed into XML and back into raw data again; with our previous tools, some information got silently lost in translation.
  • Report accurate errors if something goes wrong during that transformation, so we know whether we can trust the result.
  • Make it possible to validate the XML at an early stage of (Auto)YaST execution.
  • Reduce the amount of code we need to maintain.
  • Make writing XML by hand easier.

We wrote quite some details about how we are achieving each of those goals, only to realize all that fits better into a separate, dedicated blog post. Stay tuned if you are interested in those technical details. If you are not, you may be wondering how all this affects you as an end user. So far, the advantages will be especially noticeable for AutoYaST users. More consistent and shorter XML makes editing AutoYaST profiles easier. Schema validation allows early warnings when the XML is malformed. And more precise error reporting will prevent silent failures caused by errors in the AutoYaST profile, which can lead to subsequent problems in the system.

We keep moving

The future will not only bring a better AutoYaST, more LVM and Btrfs capabilities and a new XML parser. As mentioned at the beginning of this post, we will also slightly change the way we communicate the changes implemented in every YaST development sprint.

We currently invest quite some effort every two weeks putting all the information together in the form of a text that can be read as a consistent, self-contained and nice story, and that is not always easy. We feel the time has come to try an alternative approach.

From now on, our sprint reports will basically consist of a curated and organized list of links to pull requests on GitHub. On the one hand, the YaST team puts a lot of effort into writing comprehensive and didactic descriptions for each pull request. On the other hand, our readers are usually more interested in some particular topics than in others. So we feel such a digest will provide a good overview while allowing readers to focus on specific topics by checking the descriptions of the corresponding pull requests.

We will still publish those summaries in this blog. And we will continue publishing occasional extra posts dedicated to a single topic, like the one about the revamped XML parser mentioned a few lines above.

So, see you soon in this blog and in all our usual communication channels… and keep having a lot of fun!

Developing Software for Linux on Mainframe at Home

When developing for architectures that are not mainstream, developers often struggle to get access to current systems that allow them to work on specific software. Especially when asking someone to fix an issue that shows up only on big-endian hardware, the answer I repeatedly get is that it’s hard to get access to an appropriate machine.

I recently saw reports that the QEMU project has made substantial progress in supporting more current mainframe hardware. So I thought: how hard could it be to create a virtual machine that allows developing for s390x on local workstation hardware?

It turned out to be much easier than I thought. First, I did a standard install of Tumbleweed for s390x, which went quite smoothly. But then I remembered that the Open Build Service (OBS) also supports emulators, specifically QEMU, to run virtual machines.

I got myself a recent version of qemu-s390 from the Virtualization project:

# List the repository URLs of the Virtualization project
osc repourls Virtualization
...
# As root, add the repository and install qemu-s390
su
cd /etc/zypp/repos.d && \
wget https://download.opensuse.org/repositories/Virtualization/openSUSE_Leap_15.1/Virtualization.repo
zypper install --allow-vendor-change qemu-s390
exit

After this, we are almost done. The next step is to check out a package from OBS and try to build it:

mkdir ~/obs && cd $_
osc co openSUSE:Factory:zSystems cmsfs
cd openSUSE:Factory:zSystems/cmsfs

Now you can run the build locally with the ‘osc’ command. You will have to specify the amount of memory you want to give to the resulting virtual machine; in my case, it is 8 GB:

osc build --vm-type qemu --vm-memory=8192 standard s390x

Building locally is nice, but how about working on that software? That is where the fun begins. Typically, when building in a chroot environment, you would be able to chroot into the local build directory. So, let’s make a beginner’s mistake and run osc with the chroot command:

osc chroot --vm-type qemu --vm-memory=8192 standard s390x

To my big surprise, the command did not complain. I opened up a second terminal and found that some processes were working heavily in the background, and after a while, I was actually dropped into a shell.

To double-check, I ran ‘cat /proc/cpuinfo’ and yes, I was inside an s390x virtual machine!
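
For a quick confirmation of the architecture inside that shell (a trivial check; uname is expected to report the emulated architecture):

uname -m    # prints: s390x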

Putting things together: all you have to do to get a running s390x virtual machine, crafted for a specific package and with all the latest updates, is (see the concrete commands after the list):

  1. Get the package source from OBS
  2. Run osc chroot
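
For the cmsfs package used above, that boils down to:

osc co openSUSE:Factory:zSystems cmsfs
cd openSUSE:Factory:zSystems/cmsfs
osc chroot --vm-type qemu --vm-memory=8192 standard s390x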

I think that is really great functionality. Thanks to the excellent OBS team that made this work out of the box without much hassle. Great Job!!!

openSUSE for INNOVATORS Project is born

It is with great enthusiasm that I announce the openSUSE for INNOVATORS project, an initiative to share projects, articles and news about innovative work on the openSUSE platform developed by the community and by public and private companies.

All information on this wiki is related to innovative projects that use augmented reality, artificial intelligence, computer vision, robotics, virtual assistants and any other innovative technology (on all hardware platforms).

This initiative is looking for collaborators; the objective is to show the power of the openSUSE platform in innovative projects. To send suggestions or criticism, or to become part of the openSUSE for INNOVATORS community, send an email to the administrator Alessandro de Oliveira Faria (a.k.a. CABELO) at cabelo@opensuse.org.

To inaugurate the openSUSE for INNOVATORS project, we announce the ISOLALERT software, created to help combat COVID-19 by monitoring social isolation with computer vision. More information HERE.

openSUSE Leap 15.2 Enters Release Candidate Phase

The openSUSE community, contributors and release engineers for the project entered the release candidate phase today after the Build 665.2 snapshot was released for the upcoming openSUSE Leap 15.2 version.

In an email to the openSUSE Factory mailing list, Leap release manager Lubos Kocman recommended that Beta and RC users run the “zypper dup” command in the terminal prior to switching to the General Availability (GA) release.
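
In other words, before moving to the GA release, an RC installation is brought up to date with a full distribution upgrade (run as root; refreshing the repositories first is a common, optional step):

zypper refresh
zypper dup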

The release candidate signals the package freeze for software that will make it into the distribution. Among the packages expected in the release are KDE’s Plasma 5.18 Long-Term-Support version, GNOME 3.34 and Xfce 4.14. New packages for Artificial Intelligence and data science will be in the release. The release will also contain the tiling Wayland compositor Sway, which is a drop-in replacement for the i3 window manager for X11. The DNF package manager has been rebased to version 4.2.19, which brings many fixes and improvements. In addition, a lightweight C implementation of DNF called “Micro DNF” is now included. Pagure, which provides an easy, customizable, lightweight solution for setting up your own full-featured Git repository server, has been updated to version 5.10.0. A list of some of the packages in Leap 15.2 can be found on the openSUSE Wiki.

During the development stage of Leap versions, contributors, packagers and the release team use a rolling development method that is categorized into phases rather than a single milestone release; snapshots are released with minor version software updates, once they pass automated testing, until the final release of the Gold Master (GM). At that point, the GM becomes available to the public (GA expected on July 2) and the distribution shifts from a rolling development method to a supported release, where it receives maintenance and security updates until its End of Life (EOL). The EOL of openSUSE Leap 15.1 is six months after the release of Leap 15.2, so users of 15.1 will need to update by the beginning of 2021.

Kocman listed the following important dates related to the release:

  • June 22: Translation deadline for packages
  • June 23: Final package submission deadline
  • June 23: Translation deadline for infrastructure
  • June 25: Gold Master, followed by a public release the next week on Thursday, July 2

Creating a dedicated log management layer

Event logging is a central source of information both for IT security and operations, but different teams use different tools to collect and analyze log messages, and the same log message is often collected by multiple applications. Having each team use different tools is complex, inefficient and makes systems less secure. Instead, using a single application to create a dedicated log management layer, independent of the analytics, has multiple benefits.

Using syslog-ng is a lot more flexible than most log aggregation tools provided by log analytics vendors. This is one of the reasons why my talks and blogs have focused on how to make your life easier using its technical advantages. Of course, I am aware of the direct financial benefits as well. If you are interested in that part, talk to my colleagues on the business side. They can help you calculate how much you can save on your SIEM licenses when syslog-ng collects log messages and ensures that only relevant messages reach your SIEM, and only at a predictably low message rate. You can learn more about this use case on our Optimizing SIEM page.

In this blog, I will focus on a third aspect: simplifying complexity. This was the focus of many of my conference discussions before the COVID-19 pandemic. If we think a bit more about it, we can see that this is not really a third aspect, but rather a combination of the previous two. Using the flexibility of syslog-ng, we create a dedicated log management layer in front of different log analytics solutions. By reducing complexity, we can save in many ways: on computing and human resources, and also on licensing when using commercial tools for log analysis.

Back to basics

While this blog focuses on how to consolidate multiple log aggregation systems that are specific to analytics software into a common log management layer, I also often see that many organizations still do not see the need for central log collection. So, let’s quickly jump back to the basics: why central log collection is important. There are three major reasons:

  • Convenience: a single place to check logs instead of many.

  • Availability: logs are available even when the sender machine is down or offline.

  • Security: you can check the logs centrally even if a host was breached and logs were deleted or falsified locally.

The client-relay-server architecture of syslog-ng can make sure that central logging scales well, even for larger organizations with multiple locations. You can learn more about relays at https://www.syslog-ng.com/community/b/blog/posts/what-syslog-ng-relays-are-good-for

Reducing complexity

Collecting system logs with one application locally, forwarding the logs with another one, collecting audit logs with a different app, buffering logs with a dedicated server, and processing logs with yet another app centrally means installing several different applications across your infrastructure. This is the architecture of the Elastic stack, for example. Many other setups are simpler, but still separate system log collection (journald and/or one of the syslog variants) from log shipping. This is the case with the Splunk forwarder and many of the different Logging as a Service agents. And on top of that, you might need a different set of applications for each log analysis software. Using multiple software solutions makes a system more complex and difficult to update, and it needs more computing, network and storage resources as well.

All these features can be implemented using a single application, which in the end can feed multiple log analysis tools. A single app to learn and to follow in bug and CVE trackers. A single app to push through the security and operations teams, instead of many. Fewer resources needed, both on the human and on the technical side.

Implementing a dedicated log management layer

The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them and, finally, stores the logs or routes them for further analysis.

In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in the real world, only part of the logs fall into this category. Traditionally, most log messages come as free-format text messages. These are easy for humans to read, which was the original use of log messages. However, nowadays logs are rarely processed by the human eye. Fortunately, syslog-ng has several tools to turn unstructured (and many of the structured) message formats into name-value pairs, and thus delivers the benefits of structured log messages.

Once you have name-value pairs, log messages can be further enriched with additional information in real time, which helps you respond to security events faster. One way of doing that is adding geo-location based on IP addresses. Another is adding contextual data from external files, like the role of a server based on its IP address or the role of a user based on the name. Data from external files can also be used to filter messages (for example, to check firewall logs to determine whether certain IP addresses are contained in various blacklists of malware command centers, spammers, and so on).

Logging is subject to an increasing number of compliance regulations. PCI-DSS and many European privacy laws require removing sensitive data from log messages. Using syslog-ng, logs can be anonymized in a way that keeps them useful for security analytics.

With log messages parsed and enriched, you can now make informed decisions about where to store or forward them. You can already do basic alerting in syslog-ng, and you can receive critical log messages on a Slack channel. There are many ready-to-use destinations in syslog-ng, like Kafka, MongoDB or Elasticsearch. You can also easily create your own custom destination based on the generic network or HTTP destinations, using templates to log in the format required by a SIEM or a Logging as a Service solution, like Sumo Logic.

What is next?

Many of these concepts were covered in earlier blogs, and the individual features are covered well in the documentation. If you want to learn more about them and see some configuration examples, join me at the Pass the SALT conference, where, among many other interesting talks, you can also learn more in depth about creating a dedicated log management layer.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

KDE Applications, Wireshark, IceWM update in Tumbleweed

The last week has produced a total of three openSUSE Tumbleweed snapshots, bringing the total amount of snapshots for the month to 18.

All 18 snapshots have recorded a stability rating above 91, according to the Tumbleweed snapshot reviewer, with 14 of them recording a rating of 99 and the last two snapshots trending at a 99 rating.

The most recent 20200526 snapshot provided the 3.2.4 release of Wireshark. The new version fixed a Common Vulnerabilities and Exposures (CVE) issue where it was possible to make Wireshark crash by injecting a malformed packet onto the wire or by convincing someone to read a malformed packet trace file. Linux kernel 5.6.14 re-established support for the RTL8401 chip version. The DNS server and client utilities package bind 9.16.3 fixed two security problems and added engine support for OpenSSL’s Edwards-curve Digital Signature Algorithm implementation. Document viewer evince 3.36.1 updated translations, fixed an incorrect markup in the Czech user interface and updated the French help image. The SSL VPN client package openconnect 8.10 installed a bash completion script and fixed a potential buffer overflow with the security communications library GnuTLS. GNOME’s 0.30.10 image organizer shotwell, which was the subject of a recently settled patent lawsuit, modified its web publishing authentication to comply with Google’s requirements.

Snapshot 20200523 updated the rest of KDE’s Applications 20.04.1 stack. Among the fixes highlighted for the release: kio-fish now stores passwords in KWallet only if the user asks for it, the Kdenlive video editor had many stability updates, and fixes were made to the JuK music player, which sometimes crashed. The DNS resolver package unbound 1.10.1 offered up two security fixes, for CVE-2020-12662 and CVE-2020-12663. Perl Compatible Regular Expressions (pcre2) updated to version 10.35, and the Ruby code style checking tool rubygem-rubocop had multiple community contributions in the recent update to version 0.83.0, which added new features, support and fixes.

Applications 20.04.1 began updating in Tumbleweed’s 20200523 snapshot, where the file manager Dolphin received a fix for a crash that occurred when Konsole is not installed. The ImageMagick package fixed a black-line bug when converting GIF images in the snapshot. Filesystem UUID support was added in the erofs-utils 1.1 package update. The package for the window manager for the X Window System, icewm, updated to version 1.6.5 and fixed the positioning of the splash window on multi-head displays. IceWM also provided fixes and updates for both the configure and the CMake builds. The Mail Transport Agent postfix updated to 3.5.2; the release fixed a bug introduced in postfix 2.2 where a Transport Layer Security (TLS) error for a database client caused a false ‘lost connection’ error for a Simple Mail Transfer Protocol (SMTP) over TLS session in the same postfix process. Python 3.8.3 fixed a possible memory leak and improved error reporting, and sudo 1.9.0 updated the default TLS listener to only be enabled when either the TLS certificate file is explicitly specified in sudo_logsrvd.conf or the default TLS certificate file exists in the file system.

Looking for candidates for the 2020 GNOME Foundation elections

I forgot to write this a few days ago; I hope it is not too late.

The GNOME Foundation's elections for the Board of Directors are coming up, and we are looking for candidates. Of the 7 directors, we are replacing 4, and the 3 remaining directors stay for another year. You could be one of those four.

I would very much like to see candidates and directors who fall outside the box of "white male programmer"; it is unfortunate that for the current Board we ended up with all dudes. GNOME has a Code of Conduct to make it a good place to be.

Allan Day wrote a review of the Board's activities over the last year. We are moving from a model where the Board does a little bit of everything to one with a more strategic role — now that the Foundation has full-time employees, they take care of most of the executive work.

The call-for-candidates is open until May 29, so hurry up!

Welcome PDF Quirk

How often have you scanned a letter, a certificate or whatever, and looked for the right way to call $UTILITY to convert it to a PDF that can be shared via the internet?

For this very common use case, I could not find a tool that makes it really easy on the Linux desktop. Given my mission to help make the Linux desktop more common in the small-business world (do you know Kraft?), I spent some time starting this little project.

Please welcome PDF Quirk, the desktop app to easily create PDFs out of images from storage or directly from the scanner!

It is just what this screenshot shows: a one-page app to pick images either from the hard disk or from a scanner, if one is configured, and save them right away to a multi-page PDF file. The only option is to choose between monochrome and color scanning. No further scan chi-chi, just nice PDFs within seconds.

Of course, I did not want to spend too much time and reinvent the wheel. PDF Quirk uses ImageMagick’s convert utility and scanimage, the command-line scan client of the SANE project, in the background. Both are well-known standard commands on Linux and serve this purpose well.
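
Roughly, this is the kind of pipeline PDF Quirk automates (a sketch only; the mode and resolution options depend on the scanner, and the exact flags PDF Quirk passes may differ):

# Scan a page with SANE; mode and resolution are device-dependent options
scanimage --format=tiff --mode Color --resolution 300 > page1.tiff

# Combine one or more images into a single multi-page PDF with ImageMagick
convert page1.tiff page2.tiff letter.pdf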

Maybe you find PDF Quirk useful and want to try it. You can find it on GitHub, and packages are available on the openSUSE Build Service.

Contributions and comments are very welcome. It is a fun little project!

openSUSE Talks at SUSECON Digital

SUSECON Digital 2020 starts today, and it is free to register for and participate in SUSE’s premier annual event. This year features more than 190 sessions and hands-on training from experts.

There are fewer than a handful of openSUSE-related talks. The first is about openSUSE Kubic and takes place on May 20 at 14:00 UTC. In the presentation, attendees will receive an update on the last year of openSUSE Kubic development and see a demonstration of deploying Kubernetes with Kubic-control on a Raspberry Pi 4 cluster. Attendees will also see how to install new nodes with YOMI, the new Salt-based auto-installer that was integrated into Kubic.

The next talk is about open-source licenses. It will focus on the legal review app used by SUSE lawyers and the openSUSE community. The app, called Cavil, was developed as an open-source application to verify the use of licenses in open-source software, helping lawyers evaluate risk related to software solutions. The talk, scheduled for June 3 at 13:00 UTC, will explain the ease of use for the lawyers and how the model significantly increases the throughput of the ecosystem.

The last openSUSE-related talk centers on openSUSE Leap and SUSE Linux Enterprise. It will take place on June 10 at 13:30 UTC. Attendees will learn about the relationship between openSUSE Leap and SUSE Linux Enterprise 15, how it is developed and maintained, and how to migrate instances of Leap to SLE.

The event as a whole is a great opportunity for people who are familiar with open source and openSUSE. People who just want to learn more about the technology, free of cost, also have a great opportunity to experience the openness of SUSE and the event.