GNOME, KDE Frameworks, Mutt update in Tumbleweed
Four openSUSE Tumbleweed snapshots have been released since last Thursday.
Only two packages came in the most recent 20201124 snapshot. Email client mutt had a version bump from 1.14.7 to 2.0.2; the new major release was not driven by the magnitude of new features but by a few backward-incompatible changes. One of the highlighted changes affects using attach-file to browse and add multiple attachments to an email; quit can now be used to exit after tagging the files. For the full list, read the release notes. The release also fixed a Common Vulnerabilities and Exposures (CVE) entry by ensuring the IMAP connection is closed after a connection error, to avoid sending credentials over an unencrypted connection. The other package in the snapshot was the Ruby static code analyzer rubygem-rubocop. The updated 1.3.1 version offers multiple new features and fixes, like reading required_ruby_version from the gemspec file if it exists.
The 20201123 snapshot had several GNU package updates, like an update to GNU Compiler Collection 10, which included a fix for a memcpy miscompilation on aarch64. GNU binary tool binutils cleaned up the specfile in the 2.35.1 version, and general-purpose parser generator bison 3.7.4 now defines the YYBISON macro as an integer. The ipset 7.9 update enabled memory accounting for ipset allocations, and the Passlib password hashing library for Python, python-passlib 1.7.4, added optional dependencies for the Django web framework and apache2-utils. Bar code reader package zbar 0.23.1 changed defaults to autodetect Python and GTK versions. Several YaST packages were updated, like yast2-network 4.3.28, which fixed the detection of connection configuration changes, and yast2-firstboot 4.3.8, which removed a duplicated lan client from the firstboot control file and modified the firstboot_dhcp_setup client to use the installation DHCP setup client directly.
Snapshot 20201121 highlights the CVE hunting the Mozilla Thunderbird project did in version 78.5.0. The email client closed out more than a dozen CVEs, like single-word search queries being broadcast to local networks (CVE-2020-26966) and software keyboards possibly remembering typed passwords (CVE-2020-26965). Privacy guard gpg2 2.2.24 fixed the encrypt+sign hash algorithm preference selection for the Elliptic Curve Digital Signature Algorithm, which is needed for keys created from existing smartcard-based keys. Support for exporting secret keys was added to the cryptography support program gpgme 1.15.0. Sudo 1.9.3p1 now logs when a user-specified command-line option is rejected by a sudoers rule, and ucode-intel 20201118 removed TGL/06-8c-01/80 due to functional issues with some original equipment manufacturer platforms.
The snapshot released a week ago on Thursday was one many were waiting for, as it brought GNOME 3.38. Snapshot 20201119 updated GNOME users to the new Orbis release, which highlights the main functionality of the desktop and gives first-time users a nice welcome to GNOME with the GNOME Tour welcome app. The release provides better screen recording infrastructure in GNOME Shell and makes improvements that take advantage of the multimedia processing package PipeWire and kernel APIs to reduce resource consumption and improve responsiveness. KDE Frameworks 5.76.0 arrived in the snapshot as well and made improvements to Plasma’s Breeze icons; Plasma Frameworks locked the header colors of the Breeze Dark and Breeze Light themes and removed unnecessary anchors in the ComboBox.contentItem. The Kirigami package improved the look of the FormLayout on mobile and fixed the menus in contextualActions. The 2.66.3 glib2 package fixed sending large D-Bus messages. Tools for accessing the processor power states of the Linux kernel gained Alder Lake, Rocket Lake and Sapphire Rapids support in the update to cpupower 5.10. Other notable packages updated in the snapshot were gtksourceview4 4.8.0, pango 1.48.0, libsoup 2.72.0 and vala 0.50.1.
Presenting Cockpit Wicked
What is Cockpit?
If you are into systems management, you most likely have heard about
Cockpit at some point. In a nutshell, it offers a good looking
web-based interface to perform system tasks like inspecting the logs, applying system updates,
configuring the network, managing services, and so on. If you want to give it a try, you can install
Cockpit in openSUSE Tumbleweed just by typing zypper in cockpit.
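For example, a minimal setup could look like this (a sketch assuming the standard cockpit.socket systemd unit, which serves the web UI on port 9090):

# install Cockpit and start its web service
sudo zypper in cockpit
sudo systemctl enable --now cockpit.socket

After that, point a browser at https://localhost:9090 and log in with a system account.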
And Cockpit Wicked?
Recently, the YaST team was informed that MicroOS developers wanted to offer Cockpit as an option for system management tasks. Unfortunately, Cockpit does not have support for Wicked, a network configuration framework included in SUSE-based distributions.
As we are experts in systems management, we were given the task of building a Cockpit module to handle network configuration using Wicked instead of NetworkManager. And today we are presenting the first version of such a module. It is still a work in progress, but it already supports some basic use cases:
- Inspect interfaces configuration.
- Configure common IPv4/IPv6 settings.
- Set up wireless devices, although only WEP and WPA-PSK authentication mechanisms are supported.
- Set up bridges, bonding and VLAN devices.
- Manage routes (work in progress).
- Set basic DNS settings, like the policy, the search list or the list of static name servers.
Why a new module?
Cockpit already features a nice module to configure the network, so you might be wondering why we did not extend the original instead of creating a new one. The module shipped with Cockpit is specific to NetworkManager, and adapting it to a different backend would be hard.
In our case, we are trying to build something that could be adapted in the future to support more backends, but we are not sure how realistic this idea is.
See It In Action
Before giving it a try, we guess you would like to see some screenshots, right? So here you are. Below you can see the list of interfaces and some details about their configurations. It features a switch to activate/deactivate each device and a button to wipe the configuration. You can change the configuration by clicking on the links.
While applying the changes, Cockpit Wicked tries to keep you informed by updating the user interface as soon as possible. And, if something goes wrong, you will get an error message. Sure, we need to improve those messages, but at least you have something to start investigating from.
At this point, WEP and WPA-PSK authentication is supported for wireless devices, and we plan to expand the support to the same mechanisms that are already supported by YaST.
Another interesting feature is the support for some virtual devices like bridges, bonding and VLAN.
And last but not least, support for routes management or DNS configuration is rather simple but already functional.
Installation
The module is on its way to Tumbleweed. But if you are interested in giving it a try right now, you can grab the RPM from the GitHub release page.
If you are already using Wicked, we recommend taking a backup of your network configuration just in case something goes wrong. Copying the /etc/sysconfig/network directory is enough. In case you are using NetworkManager but are curious enough to give this module a try, you can easily switch to Wicked using the YaST2 network module.
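For the backup mentioned above, a simple copy is enough, for example (the destination path is just an example):

# keep a copy of the current Wicked configuration before experimenting
sudo cp -a /etc/sysconfig/network /root/network-config-backup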
Bear in mind that this module will replace the one included by default in Cockpit. If you want the
original module back, you need to uninstall the cockpit-wicked package.
What’s Next?
Apart from polishing what we already have and fixing bugs, there are many things to do. In the short term, we are focused on:
- Submitting the strings for translation so you can enjoy the module in your preferred language.
- Improving the UX according to our usability experts.
- Filtering out interfaces that are not managed by Wicked (like virbr0).
But before deciding our next steps, we would love to hear from you. So please, if you have some time and you are interested, give the module a try and tell us what you think.
Additional Links
- GitHub repository: https://github.com/openSUSE/cockpit-wicked
- Development tips: https://github.com/openSUSE/cockpit-wicked/blob/master/DEVELOPMENT.md
Gaming Rack Design and Construction
Web interfaces for your syslog server – an overview
This is the 2020 edition of my most read blog entry about syslog-ng web-based graphical user interfaces (web GUIs). Many things have changed in the past few years. In 2011, only a single logging as a service solution was available, while nowadays, I regularly run into others. Also, while some software disappeared, the number of logging-related GUIs is growing. This is why in this post, I will mostly focus on generic log management and open source instead of highly specialized software, like SIEMs.
Why is grep not enough?
Centralized event logging has been an important part of IT for many years for many reasons. Firstly, it is more convenient to browse logs in a central location rather than viewing them on individual machines. Secondly, central storage is also more secure. Even if logs stored locally are altered or removed, you can still check the logs on the central log server. Finally, compliance with different regulations also makes central logging necessary.
System administrators often prefer to use the command line. Utilities such as grep and AWK are powerful tools, but complex queries can be completed much faster with logs indexed in a database and a web interface. In the case of large amounts of messages, a web-based database solution is not just convenient, it is a necessity. With tens of thousands of incoming messages per second, the indexes of log databases still give Google-like response times even for the most complex queries, while traditional text-based tools are not able to scale as efficiently.
Why still syslog-ng?
Many software packages used for log analysis come with their own log aggregation agents. So why should you still use syslog-ng? As organizations grow, their IT staff starts to diversify. Separate teams are created for operations, development and security, each with its own specialized needs in log analysis. And even the business side often needs log analysis as an input for business decisions. You can quickly end up with 4-5 different log analysis and aggregation systems running in parallel, all working from the very same log messages.
This is where syslog-ng can come in handy: creating a dedicated log management layer, where syslog-ng collects all of the log messages centrally, does initial basic log analysis, and feeds all the different log analysis software with the relevant log messages. This can save you time and resources in multiple ways:
- You only have to learn one tool instead of many.
- There is only a single tool to push through security and operations teams.
- Fewer computing resources are needed on the clients.
- Logs travel only once over the network.
- Long-term archival happens in a single location with syslog-ng instead of in multiple log analysis software.
- Filtering on the syslog-ng side can save significantly on the hardware costs of the log analysis software, and also on licensing in the case of a commercial solution.
The syslog-ng application can collect both system and application logs, and can be installed both as a client and a server. Thus, you have a single application to install for log management everywhere on your network. It can reliably collect and transport huge amounts of log messages, parse (“look into”) your log messages, and enrich them with geographical location and other extra data, making filters, and thus log routing, much more accurate.
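To illustrate that layered approach, a client-side configuration that relays everything to a central syslog-ng server can be as small as the following sketch (the server name and port are placeholders):

# collect local system and internal logs and forward them over TCP
source s_local { system(); internal(); };
destination d_central { network("logserver.example.com" transport("tcp") port(514)); };
log { source(s_local); destination(d_central); };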
Logging as a Service (LaaS)
A couple years ago, Loggly was the pioneer of logging as a service (LaaS). Today, there are many other LaaS providers (Papertrail, Logentries, Sumo Logic, and so on) and syslog-ng works perfectly with all of them.
Structured fields and name-value pairs in logs are increasingly important, as they are easier to search, and it is easier to create meaningful reports from them. The more recent IETF RFC 5424 syslog standard supports structured data, but it is still not in widespread use.
People started to use JSON embedded into legacy (RFC 3164) syslog messages. The syslog-ng application can send JSON-formatted messages – for example, you can convert the following messages into structured JSON messages:
- RFC5424-formatted log messages.
- Windows EventLog messages received from the syslog-ng Agent for Windows application.
- Name-value pairs extracted from a log message with PatternDB or the CSV parser.
Loggly and other services can receive JSON-formatted messages, and make them conveniently available from the web interface.
A number of LaaS providers are already supported by syslog-ng out of the box. If your service of choice is not yet directly supported, the following blog can help you create a new LaaS destination: https://www.syslog-ng.com/community/b/blog/posts/how-to-use-syslog-ng-with-laas-and-why
Some non-syslog-ng-based solutions
Before focusing on the solutions with syslog-ng at their heart, I would like to say a few words about the others, some of which were included in the previous edition of the blog.
LogAnalyzer from the makers of Rsyslog was a simple, easy to use PHP application a few years ago. While it has developed quite a lot, recently I could not get it to work with syslog-ng. Some popular monitoring software has syslog support to some extent, for example Nagios, Cacti and several others. I have tested some of these and have even sent patches and bug reports to enhance their syslog-ng support, but syslog is clearly not their focus, just one of the possible inputs.
The ELK stack (Elasticsearch + Logstash + Kibana) and Graylog2 have become popular recently, but they have their own log collectors instead of syslog-ng, and syslog is just one of many log sources. Syslog support is quite limited both in performance and protocol coverage. They recommend using file readers for collecting syslog messages, but that increases complexity, as it is an additional piece of software on top of syslog(-ng), and filtering still needs to be done on the syslog side. Note that syslog-ng can send logs to Elasticsearch natively, which can greatly simplify your logging architecture.
Collecting and displaying metrics data
You can collect metrics data using syslog-ng. Examples include netdata or collectd. You can send the collected data to Graphite or Elasticsearch. Graphite has its own web interface, while you can use Kibana to query and visualize data collected to Elasticsearch.
Another option is to use Grafana. Originally, it was developed as an alternative web interface to the Graphite databases, but now it can also visualize data from many more data sources, including Elasticsearch. It can combine multiple data sources to a single dashboard and provides fine-grained access control.
Loki by Grafana is one of the latest applications that lets you aggregate and query log messages, and of course, to visualize logs using Grafana. It does not index the contents of log messages, only the labels associated with logs. This way, processing and storing log messages requires less resources, making Loki more cost-effective. Promtail, the log collector component of Loki, can collect log messages using the new, RFC 5424 syslog protocol. Learn here how syslog-ng can send its log messages to Loki.
Splunk
One of the most popular web-based interfaces for log messages is Splunk. A returning question is whether to use syslog-ng or Splunk. Well, the issue is a bit of apples vs. oranges: they do not replace, but rather complement each other. As I already mentioned in the introduction, syslog-ng is good at reliably collecting and processing huge amounts of data. Splunk, on the other hand, is good at analyzing log messages for various purposes. Learn more about how you can integrate syslog-ng with Splunk from our white paper!
Syslog-ng based solutions
Here I show a number of syslog-ng based solutions. While every piece of software described below is originally based on syslog-ng Open Source Edition (except for One Identity’s own syslog-ng Store Box (SSB)), there are already some large-scale deployments that use syslog-ng Premium Edition as their syslog server.
- The syslog-ng application and SSB focus on generic log management tasks and compliance.
- LogZilla focuses on logs from Cisco devices.
- Security Onion focuses on network and host security.
- Recent syslog-ng releases are also able to store log messages directly into Elasticsearch, a distributed, scalable database system popular in DevOps environments, which enables the use of Kibana for analyzing log messages.
Benefits of using syslog-ng PE with these solutions include the logstore, a tamper-proof log storage (even if it means that your logs are stored twice), Windows support, and enterprise grade support.
LogZilla
LogZilla is the commercial reincarnation of one of the oldest syslog-ng web GUIs: PHP-Syslog-NG. It provides the familiar user interface of its predecessor, but also includes many new features. The user interface supports Cisco Mnemonics, extended graphing capabilities, and e-mail alerts. Behind the scenes, LDAP integration, message de-duplication, and indexing for quick searching were added for large datasets.
Over the past years, it received many small improvements. It became faster, and role-based access control was added, as well as live tailing of log messages. Of course, all these new features come at a price: the free edition, which I had often recommended for small sites with Cisco logs, is completely gone now.
A few years ago, a complete rewrite became available with many performance improvements under the hood and a new dashboard on the surface. Development never stopped, and now LogZilla can parse and enrich log messages, and can also automatically respond to events.
Therefore, it is an ideal solution for a network operations center (NOC) full of Cisco devices.
Web site: http://logzilla.net/
Security Onion
One of the most interesting projects utilizing syslog-ng is Security Onion, a free and open source Linux distribution for threat hunting, enterprise security monitoring, and log management. It is utilizing syslog-ng for log collection and log transfer, and uses the Elastic stack to store and search log messages. Even if you do not use its advanced security features, you can still use it for centralized log collection and as a nice web interface for your logs. But it is also worth getting acquainted with its security monitoring features, as it can provide you some useful insights about your network. Best of all, Security Onion is completely free and open source, with commercial support available for it.
You can learn more about it at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-and-security-onion
Elasticsearch and Kibana
Elasticsearch is gaining momentum as the ultimate destination for log messages. There are two major reasons for this:
- You can store arbitrary name-value pairs coming from structured logging or message parsing.
- You can use Kibana as a search and visualization interface.
The syslog-ng application can send logs directly into Elasticsearch. We call this an ESK stack (Elasticsearch + syslog-ng + Kibana).
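As a sketch of what that looks like, recent syslog-ng versions ship the elasticsearch-http() destination driver; the URL and index name below are placeholders:

# send incoming messages to Elasticsearch via its bulk API
destination d_elastic {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("syslog-ng")
        type("")
    );
};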
Learn how you can simplify your logging to Elasticsearch by using syslog-ng: https://www.syslog-ng.com/community/b/blog/posts/logging-to-elasticsearch-made-simple-with-syslog-ng
syslog-ng Store Box (SSB)
SSB is a log management appliance built on syslog-ng Premium Edition. SSB adds a powerful indexing engine, authentication and access control, customized reporting capabilities, and an easy-to-use web-based user interface.
Recent versions introduced AWS and Azure cloud support and horizontal scalability using remote logspaces. The new content-based alerting can send an e-mail alert whenever a match between the contents of a log message and a search expression is found.
SSB is really fast when it comes to indexing and searching log data. To put this scalability in context, the largest SSB appliance stores up to 10 terabytes of uncompressed, raw logs. With SSB’s current indexing performance of 100,000 events per second, that equates to approximately 8.6 billion logs per day or 1.7 terabytes of log data per day (calculating with an average event size of 200 bytes). Using compression, a single, large SSB appliance could store approximately one month of log data for an enterprise generating 1.7 terabytes of event data a day. This compares favorably to other solutions that require several nodes for collecting this amount of messages, and even more additional nodes for storing them. While storing logs to the cloud is getting popular, on-premise log storage is still a lot cheaper for a large amount of logs.
The GUI makes searching logs, configuring and managing the SSB easy. The search interface allows you to use wildcards and Boolean operators to perform complex searches, and drill down on the results. You can gain a quick overview and pinpoint problems fast by generating ad-hoc charts from the distribution of the log messages.
Configuring the SSB is done through the user interface. Most of the flexible filtering, classification and routing features in the syslog-ng Open Source and Premium Editions can be configured with the UI. Access and authentication policies can be set to integrate with Microsoft Active Directory, LDAP and RADIUS servers. The web interface is accessible through a network interface dedicated to the management traffic. This management interface is also used for backups, sending alerts, and other administrative traffic.
SSB is a ready-to-use appliance, which means that no software installation is necessary. It is easily scalable, because SSB is available both as a virtual machine and as a physical appliance, ranging from entry-level servers to multiple-unit behemoths. For mission critical applications, you can use SSB in High Availability mode. Enterprise-level support for SSB and syslog-ng PE is also available.
Read more about One Identity’s syslog-ng and SSB products here.
Digest of YaST Development Sprint 113
Time flies and it has been already two weeks since our previous development report. On these special days, we keep being the YaST + Cockpit Team and we have news on both fronts. So let’s do a quick recap.
Cockpit Modules
Our Cockpit module to manage wicked keeps improving.
Apart from several small enhancements, the module has now better error reporting and correctly
manages those asynchronous operations that wicked takes some time to perform. In addition, we have
improved the integration with a default Cockpit installation, ensuring the new module replaces the
default network one (which relies on Network Manager) if both are installed. In the following days
we will release RPM packages and a separate blog post to definitely present Cockpit Wicked to the
world.
On the other hand, we also have news about our Cockpit module to manage transactional updates. We are creating some early functional prototypes of the user interface to be used as a base for future development and discussions. You can check the details and several screenshots at the following pull requests: request#3, request#5.
Btrfs Subvolumes in the Partitioner
Regarding YaST and as already mentioned in our previous blog post, we are working to ensure Btrfs subvolumes get the attention they deserve in the user interface of the YaST Partitioner, becoming first class citizens (like partitions or LVM logical volumes) instead of an obscure feature hidden in the screen for editing a file system.
As part of that effort, we improved the existing mechanism to suggest a given list of subvolumes, based on the selected product and system role. See more details and screenshots at the corresponding pull request.
We also added some support for Btrfs quotas, a mechanism that can be used to improve space
accounting and to ensure a given subvolume (e.g. /var or /tmp) does not grow too much and end
up filling up all the space in the root file system. This pull
request explains the new feature with several
screenshots, including the new quite informative help texts.
All the mentioned changes related to subvolumes management will be submitted to openSUSE Tumbleweed in the following days.
More YaST enhancements
Talking about the YaST Partitioner, you may know that we recently added a menu bar to its interface. During this sprint we improved the YaST UI toolkit to ensure the keyboard shortcuts for that menu bar stay as stable as possible. Check the details at this pull request.
We have also been working on making the installer more flexible by adding support to define, per product and per system role, whether YaST should propose to configure the system for hibernation. In the case of SUSE Linux Enterprise, we have adapted the control file to propose hibernation in the SLED case, but not for other members of the SLE family.
See you soon
Of course, we have done much more during the last two weeks. But we assume you don’t want to read about small changes and boring bug fixes… and we are looking forward to jumping into the next sprint. So let’s go back to work and see you in two weeks!
Gnome Asia summit 2020
24-26 November 2020 @ https://events.gnome.org/event/24
I thought it might be too late to tell anyone about this event since registration had already closed, but it looks like registration is still open. Anyway, let’s see the schedule:

Some of the segments offer good topics and caught my eye:
- The Use and Benefits of Open Source Software in Tour & Travel Company - Rahman Nur
- Building a career in open source : Lessons and Learnings - Umang Jain
- Under COVID-19 pandemic, Korea’s Community Challenges and Futures - DaeHyun Sung
- Leveraging Open Source Tools for Distance Learning During Pandemic Season - Mohammad Hafiz Ismail
GNOME Asia Summit 2020 starts today and the conference will be online. This event is sponsored by GitLab and openSUSE.
MicroOS & Kubic: New Lighter Minimum Hardware Requirements
You Spoke, We Heard
openSUSE MicroOS has been getting a significant amount of great attention lately.
We’d like to thank everyone who has reviewed and commented on what we are doing lately. One bit of feedback we received loud and clear was that the minimum hardware requirement of 20 GB of disk space was surprisingly large for an operating system calling itself MicroOS. We agree! And so we’ve reviewed and retuned that requirement.
New Minimum Storage Requirements
The New Minimum Supported Storage Requirements for MicroOS are:
- 5 GB for the read-only / (root) partition, with 20 GB as the recommended maximum size.
- 5 GB for the read-write /var partition, with 40 GB as the recommended size, or however large you require for your workloads.
Please note, a standard installation of the minimal MicroOS system role currently uses no more than:
- 450 MB with bare metal hardware support.
- 285 MB without bare metal hardware support.
Therefore these new lighter requirements still ensure that your MicroOS installations have plenty of room for many automated snapshots from transactional-updates. These changes will not compromise the promise that MicroOS can be updated and rolled back atomically without worry.
MicroOS Desktop Differences
The MicroOS Desktop, which is currently in Alpha and being actively developed, has a subtly different minimum requirement, as a result of its different use case.
- 5 GB for the read-only / (root) partition, with at least 40 GB recommended, or however large you require for your desktop.
- /var and /home are provided as read-write noCoW sub-volumes as part of the / (root) partition for the storage of containers, flatpaks and user data.
Available Now
These changes have all been submitted to openSUSE:Factory, tested in openQA, and will soon be released as part of Snapshot version 20201121. They will soon be available for both MicroOS and Kubic across all ISO, Cloud, and VM Images.
Thanks and have a lot of fun!
The MicroOS & Kubic Team
Xfce Virtual Machine Images For Development
The openSUSE distributions offer a variety of graphical desktop environments, one of them being the popular and lightweight Xfce. Up to now, there were two options: the stable, tested branch available in Tumbleweed already during installation, and, for interested users, the development OBS repository xfce:next, which offers a preview of what is coming next to Tumbleweed.
Xfce Development in openSUSE
Thanks to the hard work of openSUSE’s Xfce team, there is now a third option: the Xfce Development Repository, aka RAT. In a playful way, a rat is meant to represent the unpolished nature of this release: a rat is scruffy looking compared to a mouse (the cute and beloved mascot of Xfce). The RAT repository provides packages automatically built right from the Git master branch of Xfce upstream development. The goal of this project is to test and preview the new software so that bugs can be spotted and fixed ahead of time by contributing upstream. The packages pull in the source code state on a daily basis and offer a quite convenient way to test and eventually help development. So this is where the team builds and tests the latest and unstable releases of the Xfce Desktop Environment for openSUSE.
One step beyond
While this has been around for quite some time now, the openSUSE Xfce team has managed to make things even easier to use. Instead of having to install Tumbleweed yourself, add package repositories and install packages, you can now let things be done for you through the combination of OBS, Kiwi and virtualization.
Standing on the shoulders of these great projects, the Xfce team built KVM & Xen images that use Xfce Git master with customized openSUSE settings. The build process is fully automated and runs on a regular basis, giving users qcow2 disk images with Xfce’s latest builds based on openSUSE’s rolling release Tumbleweed.
To help upstream Xfce development there is another disk image accompanying the openSUSE-customized one. This so-called “vanilla” disk image ships Xfce completely unmodified from Git sources and without any openSUSE visual tweaks. This gives Xfce devs (and testers alike) who want to build and test software inside a complete Xfce environment the most convenient way to do so. No more building everything themselves or maintaining test setups continuously. Just download the latest Xfce RAT disk image and you’re good to go!
Downloads
- openSUSE version: https://download.opensuse.org/repositories/X11:/xfce:/rat:/images/images/
- Upstream version: https://download.opensuse.org/repositories/X11:/xfce:/rat:/images:/upstream/images/
Usage
The most convenient way to use openSUSE’s Xfce RAT disk images is virt-manager. It is a desktop user interface for managing virtual machines through libvirt; it primarily targets KVM VMs but also supports Xen and LXC. Read more about it on their website or install it from openSUSE’s standard repositories right away.
While virt-manager is the recommended way there are of course quite a few others like virt-install or Cockpit Web console. All of them are expected to work like any other solution supporting qcow disk images.
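For instance, importing a downloaded image with virt-install could look roughly like this (the image file name, resources and OS variant are placeholders; osinfo-query os lists the valid variant names):

# create and start a KVM guest directly from the downloaded qcow2 image
virt-install --name xfce-rat --memory 4096 --vcpus 2 \
  --import --disk path=./openSUSE-Tumbleweed-Xfce-RAT.x86_64.qcow2 \
  --os-variant opensusetumbleweed --network network=default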
You can find more detailed information on how to use these images in the dedicated wiki page.
Contributing
Like any other team in openSUSE the Xfce team is always happy to welcome people interested in development, packaging, testing and reporting bugs. Find out more about our work here and say hello in the chat if you like!
News in openSUSE Packaging
If you are interested in openSUSE, sooner or later you will probably learn how packages and specfiles work. But packaging is not static knowledge that you learn once and are good to go. The rules change over time, new macros are created and old ones are erased from history, new file paths are used and the old ones are forgotten. So how can one keep up with these changes?
In this article, we will serve you with all recent news and important changes in openSUSE packaging on a silver platter. Whether you are a pro package maintainer or just a casual packager who wants to catch up, you will definitely find something you didn’t know here. We promise.
openSUSE macros
%_libexecdir
TL;DR
- The %_libexecdir macro expands to /usr/libexec now (not /usr/lib)
We will start with the most recent change, which is the %_libexecdir macro. In the past, it was standard practice to store binaries that are not intended to be executed directly by users or shell scripts in the /usr/lib directory. This changed with the release of FHS 3.0, which now defines that applications should store these internal binaries in the /usr/libexec directory.
In openSUSE, the first discussions about changing the %_libexecdir macro from /usr/lib to /usr/libexec appeared in fall 2019 but it took several months for all affected packages to be fixed and the change to be adopted. It was fully merged in TW 0825 in August 2020.
Please note, openSUSE Leap distributions, including upcoming Leap 15.3, still expand %_libexecdir to the old /usr/lib.
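A hypothetical spec file snippet using the macro (the package and helper names are made up):

# install an internal helper into %{_libexecdir} instead of hardcoding /usr/lib
install -D -m 0755 foo-helper %{buildroot}%{_libexecdir}/foo/foo-helper

%files
%dir %{_libexecdir}/foo
%{_libexecdir}/foo/foo-helper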
systemd macros
TL;DR
- Use %{?systemd_ordering} instead of %{?systemd_requires}
- Use pkgconfig(libsystemd) instead of pkgconfig(systemd-devel)
- BuildRequires: systemd-rpm-macros is not needed
In the past, you’ve been told that if your package uses systemd, you should just add the following lines to your spec file and you are good to go:
BuildRequires: systemd-rpm-macros
%{?systemd_requires}
Times are changing, though, and modern times require a bit of a different approach, especially if you want your package to be ready for inclusion inside a container. To explain it, we need to know what the %{?systemd_requires} macro looks like:
$ rpm --eval %{?systemd_requires}
Requires(pre): systemd
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
This creates a hard dependency on systemd. In the case of containers, this can be counterproductive as we don’t want to force systemd to be included when it’s not needed. That’s why the %{?systemd_ordering} macro started being used instead:
$ rpm --eval %{?systemd_ordering}
OrderWithRequires(post): systemd
OrderWithRequires(preun): systemd
OrderWithRequires(postun): systemd
OrderWithRequires is similar to the Requires tag but it doesn’t generate actual dependencies. It just supplies ordering hints for calculating the transaction order, but only if the package is present in the same transaction. In the case of systemd it means that if you need systemd to be installed early in the transaction (e.g. creating an installation), this will ensure that it’s ordered early.
Unless you need to explicitly call the systemctl command from the specfile (which you probably don’t because of the %service_* macros that can deal with it), you shouldn’t use %{?systemd_requires} anymore.
Also note that systemd-rpm-macros has been required by the rpm package for some time, so it’s not necessary to explicitly require it. You can safely omit it unless you are afraid that rpm will drop it in the future, which is highly unlikely.
The last item is the BuildRequires; it is needed in cases where your package needs to link against systemd libraries. In this case, you should use:
BuildRequires: pkgconfig(libsystemd)
instead of the older
BuildRequires: pkgconfig(systemd-devel)
as the new variant can help to shorten the build chain in OBS.
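Putting it together, a sketch of a service package could look like this (the unit name foo.service is made up; the %service_* scriptlet macros are the usual openSUSE ones):

BuildRequires:  pkgconfig(libsystemd)
%{?systemd_ordering}

%pre
%service_add_pre foo.service

%post
%service_add_post foo.service

%preun
%service_del_preun foo.service

%postun
%service_del_postun foo.service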
Cross-distribution macros
TL;DR
- The %leap_version macro is deprecated
- See this table for all distribution macros and their values for specific distros
Commonly, you want to build your package for multiple target distributions. But if you want to support both bleeding-edge Tumbleweed and Leap or SLE, you need to adjust your specfile accordingly. That is why you need to know the distribution version macros.
The best source of information is the table on the openSUSE wiki that will show you the values of these distribution macros for every SLE/openSUSE version. If you want examples on how to identify a specific distro, see this table.
The biggest change between Leap 42 (SLE-12) and Leap 15 (SLE-15) is that %leap_version macro is deprecated. If you want to address e.g. openSUSE Leap 15.2, you should use:
%if 0%{?sle_version} == 150200 && 0%{?is_opensuse}
As you can see, to distinguish specific Leap minor versions, the %sle_version macro is used. The value of %sle_version is %nil in Tumbleweed as it’s not based on SLE.
If you want to identify SLE-15-SP2, you just negate the %is_opensuse macro:
%if 0%{?sle_version} == 150200 && !0%{?is_opensuse}
The current Tumbleweed release (which is changing, obviously) can be identified via:
%if 0%{?suse_version} > 1500
In general, if you want to show the value of these macros on your system, you can do it via rpm --eval macro:
$ rpm --eval %suse_version
1550
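In a spec file, these macros are typically used to branch dependencies or features per distribution; a hedged example with a made-up package name:

%if 0%{?suse_version} > 1500
# Tumbleweed: prefer the pkgconfig() capability
BuildRequires:  pkgconfig(libfoo)
%else
# Leap / SLE: fall back to the -devel package name
BuildRequires:  libfoo-devel
%endif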
Deprecated macros
TL;DR
These macros are deprecated
- %install_info / %install_info_delete
- %desktop_database_post / %desktop_database_postun
- %icon_theme_cache_post / %icon_theme_cache_postun
- %glib2_gsettings_schema
- %make_jobs (is now known as %cmake_build or %make_build)
If you have been interested in packaging for some time, you probably learned a lot of macros. The bad thing is that some of them shouldn’t be used anymore. In this section, we will cover the most common of them.
Database/cache updating macros
The biggest group of deprecated macros is probably those that called commands for updating databases and caches when new files appeared in a specific directory:
- %install_info / %install_info_delete - update info/dir entries
- %desktop_database_post / %desktop_database_postun - update the desktop database cache when .desktop files are added to/removed from /usr/share/applications
- %icon_theme_cache_post / %icon_theme_cache_postun - update the icon cache when an icon is added to /usr/share/icons
- %glib2_gsettings_schema - compile schemas installed to /usr/share/glib-2.0/schemas
For example, in the past whenever you installed a new .desktop file in your package, you should have called:
%post
%desktop_database_post
%postun
%desktop_database_postun
Since 2017, these macros have been gradually replaced with file triggers, a feature introduced in RPM 4.13. See the File triggers section for more info.
%make_jobs
The %make_jobs macro was initially used in cmake packaging, but was later adopted in a number of other packages, confusingly sometimes with a slightly different definition. To make matters more confusing, it also ended up being more complex than the expected /usr/bin/make -jX. Because of this, and to bring the macro more in line with other macros such as meson’s, %make_jobs has been replaced with %cmake_build when using cmake and %make_build for all other usages.
In the past, you called: %cmake, %make_jobs, and %cmake_install.
Now it’s more coherent and you call: %cmake, %cmake_build, and %cmake_install when using cmake and just replace %make_jobs with %make_build in other cases.
For completeness, we will add that the naming is also nicely aligned with the meson and automake macros, that are:
%meson, %meson_build, and %meson_install
or
%configure, %make_build, and %make_install.
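In a spec file, the cmake variant then simply reads:

%build
%cmake
%cmake_build

%install
%cmake_install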
The %make_jobs macro is still provided by the KDE Frameworks kf5-filesystem package and is used by about 250 Factory packages, but its use is being phased out.
Paths and Tags
Configuration files in /etc and /usr/etc
TL;DR
- /usr/etc will be the new directory for the distribution-provided configuration files
- The /etc directory will contain configuration files changed by an administrator
Historically, configuration files were always installed in the /etc directory. If you edited such a configuration file and then updated the package, you often ended up with extra .rpmsave or .rpmnew files that you had to resolve manually.
Due to this suboptimal situation and mainly because of the need to fulfill new requirements of transactional updates (atomic updates), the handling of configuration files had to be changed.
The new solution is to separate the distribution-provided configuration (/usr/etc), which should not be modified, from the host-specific configuration changed by admins (/etc).
This change of course requires a lot of work. First, the applications themselves need to be adjusted to read the configuration from multiple locations rather than just good old /etc, and there are of course a lot of packaging changes needed as well. There are three variants of how to implement the change within packaging, and you as a packager should choose the one that fits your package best.
Also, there is a new RPM macro that refers to the /usr/etc location:
%_distconfdir /usr/etc
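A sketch of how a package might ship its default configuration there while staying buildable on older distributions (the configuration file name is hypothetical):

%install
%if 0%{?suse_version} > 1500
# distribution default goes to /usr/etc on Tumbleweed
install -D -m 0644 foo.conf %{buildroot}%{_distconfdir}/foo.conf
%else
install -D -m 0644 foo.conf %{buildroot}%{_sysconfdir}/foo.conf
%endif

%files
%if 0%{?suse_version} > 1500
%{_distconfdir}/foo.conf
%else
%config(noreplace) %{_sysconfdir}/foo.conf
%endif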
Group: tag
TL;DR
- The Group: tag is optional now
Maybe you noticed the wild discussion about removing the Group: tag that hit the opensuse-factory mailing list in fall 2019. It aroused emotions to such an extent that the openSUSE Board had to step in and help resolve the conflict.
They decided that including groups in spec files should be optional with the final decision resting with the maintainer.
News in RPM
RPM minor version updates are released approximately once every two years, and they always bring lots of interesting news that makes packaging even easier. Sometimes it’s a little harder to put some of these changes into practice, as it can mean a lot of work across hundreds of packages or dealing with backward compatibility issues. This is why you should find out more about the current adoption status in openSUSE before you use new features in your packages.
Current SUSE and openSUSE status of rpm package is as follows:
| Distribution | RPM version |
| --- | --- |
| openSUSE:Factory | 4.15.1 |
| SLE-15 / openSUSE:Leap:15.* | 4.14.1 |
| SLE-12 | 4.11.2 |
The following paragraphs present a couple of the most interesting features introduced in recent RPM versions.
File Triggers
TL;DR
- A file trigger is a scriptlet that gets executed whenever a package installs/removes a file in a specific location
- Used e.g. in Factory for texinfo, glib schemas, mime, icons, and desktop files, so your package doesn’t have to call database/cache updating macros anymore
- Currently (Nov 2020), zypper doesn’t handle transfiletrigger properly.
RPM 4.13 introduced file triggers, rpm scriptlets that get executed whenever a package installs or removes a file in a specific location (and also if a package with the trigger gets installed/removed).
The main advantage of this concept is that a single package introduces a file trigger and it is then automatically applied to all newly installed/reinstalled packages. So, instead of each package carrying a macro for certain post-processing, the code resides in the package implementing the file trigger and is transparently run everywhere.
The trigger types are:
- filetrigger{in, un, postun}
- transfiletrigger{in, un, postun}
The *in/*un/*postun scriptlets are executed similarly to regular rpm scriptlets, before package installation/uninstallation/after uninstallation, depending on the variant.
The trans* variants get executed once per transaction, after all the packages with files matching the trigger get processed.
Example (Factory shared-mime-info):
%filetriggerin -- %{_datadir}/mime
export PKGSYSTEM_ENABLE_FSYNC=0
%{_bindir}/update-mime-database "%{_datadir}/mime"
This file trigger will update the mime database right after the installation of a package that contains a file under /usr/share/mime. The file trigger will be executed once for each package (no matter how many files in the package match).
File triggers can easily replace database/cache updating macros (like e.g. %icon_theme_cache_post). This approach has been used in Factory since 2017. File triggers are used for processing icons, mime and desktop files, glib schemas, and others.
You probably haven’t noticed this change at all, as in general having these database/cache updating macros in your specfile doesn’t harm anything now. The change has been made in the corresponding packages (texinfo, shared-mime-info, desktop-file-utils, glib2) by adding a file trigger, while all the old macros now expand to commands without any action. So you can safely remove them from your specfiles.
! IMPORTANT !
Currently (Nov, 2020), zypper doesn’t handle transfiletrigger properly. If there is a %transfiletrigger and a %post scriptlet in the transaction, then zypper will only call the scriptlet and not your %transfiletrigger. See more information in Bug#1041742.
%autopatch and %autosetup
TL;DR
- Use %autopatch to automatically apply all patches in the spec file
- Use %autosetup to automatically run %setup and %autopatch
The old and classic way to apply patches was:
Patch1: openssl-1.1.0-no-html.patch
Patch2: openssl-truststore.patch
Patch3: openssl-pkgconfig.patch
%prep
%setup -q
%patch1 -p1
%patch2 -p1
%patch3 -p1
With the recent RPM, you can use %autosetup and %autopatch macros to automate source unpacking and patch application. There is no need to specify each patch by name.
%autopatch applies all patches from the spec. The disadvantage is that it’s not natively usable with conditional patches or patches with differing fuzz levels.
Example (Factory openssl-1_1.spec):
Patch1: openssl-1.1.0-no-html.patch
Patch2: openssl-truststore.patch
Patch3: openssl-pkgconfig.patch
%prep
%setup -q
%autopatch -p1
The -p option controls the patch level passed to the patch program.
The most powerful is the %autosetup macro that combines %setup and %autopatch so that it can unpack the tarball and apply the patchset in one command.
%autosetup accepts virtually the same arguments as %setup except for:
- -v for verbose source unpacking; the quiet mode is the default, so -q is not applicable
- -N disables automatic patch application. The patches can be applied manually later using %patch or with %autopatch. It comes in handy in cases where some kind of preprocessing is needed on the upstream sources before applying the patches.
- -S specifies a VCS to use in the build directory. Supported are for example git, hg, or quilt. The default is patch, where the patches are simply applied in the directory using patch. Setting git will create a git repository within the build directory with each patch represented as a git commit, which can be useful e.g. for bisecting the patches.
So the simplest patch application using %autosetup will look like this.
Example (Factory openssl-1_1):
Patch1: openssl-1.1.0-no-html.patch
Patch2: openssl-truststore.patch
Patch3: openssl-pkgconfig.patch
%prep
%autosetup -p1
%patchlist and %sourcelist
TL;DR
- Use the %patchlist section directive for marking a plain list of patches
- Use the %sourcelist section directive for marking a plain list of sources
- Then use %autosetup instead of %setup and %patch<number>
These are new spec file sections for declaring patches and sources with minimal boilerplate. They’re intended to be used in conjunction with %autopatch or %autosetup.
Example - normal way (Factory openssl-1_1):
Source: https://www.%{_rname}.org/source/%{_rname}-%{version}.tar.gz
Source2: baselibs.conf
Source3: https://www.%{_rname}.org/source/%{_rname}-%{version}.tar.gz.asc
Source4: %{_rname}.keyring
Source5: showciphers.c
Patch1: openssl-1.1.0-no-html.patch
Patch2: openssl-truststore.patch
Patch3: openssl-pkgconfig.patch
%prep
%autosetup -p1
The files need to be tagged with numbers, so adding a patch in the middle of a series requires renumbering all the consecutive tags.
Example - with %sourcelist/%patchlist:
%sourcelist
https://www.%{_rname}.org/source/%{_rname}-%{version}.tar.gz
baselibs.conf
https://www.%{_rname}.org/source/%{_rname}-%{version}.tar.gz.asc
%{_rname}.keyring
showciphers.c
%patchlist
openssl-1.1.0-no-html.patch
openssl-truststore.patch
openssl-pkgconfig.patch
%prep
%autosetup -p1
Here the source files don’t need any tagging. The patches are then applied by %autopatch in the same order as listed in the section. The disadvantage is that it’s not possible to refer to the sources by %{SOURCE} macros or to apply the patches conditionally.
%elif
TL;DR
- RPM now supports %elif, %elifos and %elifarch
After 22 years of development, RPM 4.15 finally implemented %elif. It’s now possible to simplify conditions which were only possible with another %if and %else pair.
Example Using %if and %else only (Java:packages/ant):
%if %{with junit}
%description
This package contains optional JUnit tasks for Apache Ant.
%else
%if %{with junit5}
%description
This package contains optional JUnit5 tasks for Apache Ant.
%else
%description
Apache Ant is a Java-based build tool.
%endif
%endif
Example Using %elif:
%if %{with junit}
%description
This package contains optional JUnit tasks for Apache Ant.
%elif %{with junit5}
%description
This package contains optional JUnit5 tasks for Apache Ant.
%else
%description
Apache Ant is a Java-based build tool.
%endif
The else if versions were implemented also for %ifos (%elifos) and %ifarch (%elifarch).
Boolean dependencies
TL;DR
- Factory now supports boolean dependency operators that allow rich dependencies
- Example: Requires: (sles-release or openSUSE-release)
RPM 4.13 introduced support for boolean dependencies (also called “rich dependencies”). These expressions are usable in all dependency tags except Provides. This includes Requires, Recommends, Suggests, Supplements, Enhances, and Conflicts. Boolean expressions are always enclosed with parentheses. The dependency string can contain package names, comparison, and version description.
How does it help? It greatly simplifies conditional dependencies.
Practical example:
Your package needs either of two packages pack1 or pack2 to work. Until recently, there wasn’t an elegant way to express this kind of dependency in RPM.
The idiomatic way was to introduce a new capability, which both pack1 and pack2 would provide, and which can then be required from your package.
Both pack1 and pack2 packages would need adding:
Provides: pack-capability
And your package would require this capability:
Requires: pack-capability
So in order to require one of a set of packages, you had to modify each of them to introduce the new capability. That was a lot of extra effort and might not have always been possible.
Nowadays, using boolean dependencies, you can just simply add
Requires: (pack1 or pack2)
to your package and everything will work as expected, no need to touch any other package.
The following boolean operators were introduced in RPM 4.13. Any set of available packages can match the requirements.
- and - all operands must be met
  Conflicts: (pack1 >= 1.1 and pack2)
- or - one of the operands must be met
  Requires: (sles-release or openSUSE-release) - the package requires at least one of sles-release, openSUSE-release
- if - the first operand must be met if the second is fulfilled
  Requires: (grub2-snapper-plugin if snapper)
- if-else - same as if above, plus requires the third operand to be met if the second one isn’t fulfilled
  Requires: (subpack1 if pack1 else pack2)
RPM 4.14 added operators that work on single packages. Unlike the operators above, there must be a single package that fulfills all the operands
- with - similar to and, both conditions need to be met
  BuildRequires: (python3-prometheus_client >= 0.4.0 with python3-prometheus_client < 0.9.0) - python3-prometheus_client must be in the range <0.4.0, 0.9.0)
- without - the first operand needs to be met, the second must not
  Conflicts: (python2 without python2_split_startup)
- unless - the first operand must be met if the second is not
  Conflicts: (pack1 unless pack2)
- unless-else - same as unless above, plus requires the third operand to be met if the second isn’t fulfilled
  Conflicts: (pack1 unless pack2 else pack3)
The operands can be nested. They need to be surrounded by parentheses, except for chains of and / or operators.
Examples:
Recommends: (gdm or lightdm or sddm)
Requires: ((pack1) or (pack2 without func2))
Until recently, Factory only allowed boolean dependencies in Recommends/Suggests (aka soft dependencies), as it would have otherwise caused issues when doing zypper dup from older distros. Now all operators above are supported.
%license
TL;DR
- Pack license files via the %license directive, not %doc
A %license directive was added to RPM in 4.11.0 (2013), but openSUSE and other distributions adopted it later, in 2016. The main reason for it is to allow easy separation of licenses from normal documentation. Before this directive, license texts used to be marked with the %doc directive, which copied the license to %_defaultdocdir (/usr/share/doc/packages). With %license, it is nicely separated, as it is copied to %_defaultlicensedir (/usr/share/licenses).
That’s also useful for limited systems (e.g. containers), which are built without doc files, but still need to ship package licenses for legal reasons.
Example:
%files
%license LICENSE COPYING
%doc NEWS README.SUSE
The license files are annotated in the rpm, which allows searching for the license files of a specific package:
$ rpm -qL sudo
/usr/share/licenses/sudo/LICENSE
OBS
New osc options
The osc command-line tool received several new features as well. Let’s have a quick look at the most interesting changes.
osc maintained --version
The new --version option prints the version of the maintained package in each codestream, which is very useful e.g. when you want to find out which codestreams are affected by a specific issue. The only problem is that it’s not very reliable yet - sometimes it just prints “unknown”.
$ osc maintained --version sudo
openSUSE:Leap:15.1:Update/sudo (version: unknown)
openSUSE:Leap:15.2:Update/sudo (version: 1.8.22)
osc request --incoming
The new --incoming option for the request command shows only requests/reviews where your project is the target.
Example: list all incoming requests in the new or review state for the Base:System project:
$ osc request list Base:System --incoming -s new,review
osc browse
Sometimes it’s just easier to watch the build status or build log in the OBS web GUI than via osc. With this new option, you can easily open specific packages in your browser. Just run:
$ osc browse [PROJECT [PACKAGE]]
If you run it without any parameters, it will open the package in your current working directory.
Delete requests for entire projects
This is not something you want to call every day. But if you need to delete the entire project with all packages inside, you can just call:
$ osc deletereq PROJECT --all
Real names in changelogs
This is a change you probably noticed. If you create a changelog entry via osc vc, it adds not just your email to the changelog entry header but also your full name.
rdiff and diff enhancements
Also, the rdiff subcommand comes with new options. Probably the most useful is rdiff --issues-only, which instead of printing the whole diff shows just a list of fixed (well, really just mentioned) issues (bugs, CVEs, Jiras):
Example osc rdiff --issues-only:
# osc rdiff -c 124 --issues-only openSUSE:Factory/gnutls
CVE-2020-13777
boo#1171565
boo#1172461
boo#1172506
boo#1172663
More new options were added to the osc diff command. The first is --unexpand, which performs a local diff ignoring linked package sources. The second is --meta, which performs a diff only on meta files.
osc blame
osc finally comes with a blame command that you probably know from git. It shows who last modified each line of a tracked file.
It uses the same invocation as osc cat:
$ osc blame <file>
$ osc blame <project> <package> <file>
The drawback is that it shows the user who checked in the revision, such as the person who accepted the submission, not its actual author. But it also shows the revision number in the first column, so you can easily show the specific revision with the original author.
Example:
# osc blame openssl-fips-DH_selftest_shared_secret_KAT.patch
[...]
2 (jsikes 2020-09-17 10:51:27 62) +
5 (jsikes 2020-09-22 19:07:01 63) + if ((len = DH_compute_key(shared_secret, dh->pub_key, dh)) == -1)
2 (jsikes 2020-09-17 10:51:27 64) + goto err;
2 (jsikes 2020-09-17 10:51:27 65) +
[...]
Let’s say we’re interested in line 63, where DH_compute_key() is called. It was last changed in revision 5, so we’ll examine that revision:
> osc log -r 5
----------------------------------------------------------------------------
r5 | jsikes | 2020-09-22 17:07:01 | 16a582f1397aa14674261a54c74056ce | unknown | rq227064
Fix a porting bug in openssl-fips-DH_selftest_shared_secret_KAT.patch
----------------------------------------------------------------------------
The change was created by request 227064, so we can finally find the author of the actual code:
$ osc rq show -b 227064
227064 State:accepted By:jsikes When:2020-09-22T17:07:07
submit:
From: Request created: vitezslav_cizek -> Request got accepted: jsikes
Descr: Fix a porting bug in openssl-fips-
DH_selftest_shared_secret_KAT.patch
You can also blame the meta files to show the author of each line; here the real author is shown, as the metadata is edited directly.
$ osc meta pkg <project> <package> --blame
Please note that it works on project and package metadata but it doesn’t work on attributes.
osc comment
osc allows you to work with comments on projects, packages, and requests from the command line. That’s particularly useful for writing bots and other automatic handling.
- osc comment list - Prints comments for a project, package, or request.
- osc comment create - Adds a new top-level comment or, using the -p option, a reply to an existing one.
- osc comment delete - Removes a comment with the given ID.
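For example, listing the comments on a package could look like this (the project and package are illustrative; see osc help comment for the exact argument forms your osc version expects):
$ osc comment list openSUSE:Factory gnutls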
Examining workers and constraints
osc checkconstraints
When you have a package that has special build constraints, you might be curious about how many OBS workers are able to build it. osc checkconstraints does exactly that.
It can either print the list of matching workers
$ osc checkconstraints LibreOffice:Factory libreoffice openSUSE_Tumbleweed x86_64
Worker
------
x86_64:cloud137:1
x86_64:cloud138:1
x86_64:goat01:1
x86_64:goat01:2
[...]
or even a per-repo summary (when called from a package checkout):
$ osc checkconstraints
Repository Arch Worker
---------- ---- ------
openSUSE_Tumbleweed x86_64 94
openSUSE_Factory_zSystems s390x 18
[...]
osc workerinfo
This command prints out detailed information about the worker’s hardware, which can be useful when searching for proper build constraints.
Example:
$ osc workerinfo x86_64:goat01:1
It will print the worker's kernel version, CPU flags, number of CPUs, and available memory and disk space.
Multibuild
TL;DR
- Multibuild is an OBS feature that allows you to build the same spec file with different flavors (e.g. once with a GUI and once without)
multibuild is an OBS feature introduced in OBS 2.8 (2017) that offers the ability to build the same source in the same repo with different flavors. Such a spec file is easier to maintain than separate spec files for each flavor.
The flavors are defined in a _multibuild xml file in the package source directory. In addition to the normal package, each of the specified flavors will be built for each repository and architecture.
Example of a _multibuild file (from Factory python-pbr):
<multibuild>
<package>test</package>
</multibuild>
Here OBS will build the regular python-pbr package and additionally the test-flavored RPM. Users can then distinguish the different flavors in the spec file (typically via a conditional on the flavor value) and perform corresponding actions (adjusting BuildRequires, package names/descriptions, turning on additional build switches, etc.).
Here we can see that the additional flavor is built as well:
$ osc r -r standard -a x86_64
standard x86_64 python-pbr succeeded
standard x86_64 python-pbr:test succeeded
Example of spec file usage (python-pbr again):
%global flavor @BUILD_FLAVOR@%{nil}
%if "%{flavor}" == "test"
%define psuffix -test
%bcond_without test
%else
%define psuffix %{nil}
%bcond_with test
%endif
Name: python-pbr%{psuffix}
First, the spec defines a flavor macro with the value it got from OBS. Then it branches depending on the flavor value: it sets a name suffix for the test flavor and defines a build conditional for easier handling later in the build and install sections.
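Later in the spec, the conditional typically gates flavor-specific parts such as test dependencies and the %check section. The following is only a sketch of that pattern, not the full python-pbr spec (the exact BuildRequires and test runner differ per package):
%if %{with test}
# test-only build dependencies
BuildRequires:  %{python_module pytest}
%endif

%check
%if %{with test}
# run the test suite only for the test flavor
%pytest
%endif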
If you need inspiration for your package, you can have a look at the following packages:
python39, libssh, python-pbr, or glibc.
Oldies
TL;DR
- PreReq is now Requires(pre)
- Use /run, not /var/run
- /bin, /sbin, /lib and /lib64 were merged into their counterpart directories under /usr
- SysV is dead, use systemd
We realize that the changes described below are very, very, VERY old. But we put this section here anyway, as we still see these patterns in some spec files from time to time. So let’s go through them quickly.
PreReq → Requires(pre)
PreReq is not used anymore; it was deprecated and remapped to Requires(pre) in RPM 4.8.0 (2010).
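In a spec file that is a one-line substitution, for example (the required package here is only illustrative):
# old, deprecated spelling
PreReq:         permissions
# current spelling
Requires(pre):  permissions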
/var/run → /run
Since openSUSE 12.2 (2012), the /run directory has lived at the top level, as distributions agreed that it does not belong under /var. /var/run is still symlinked for backward compatibility, but you should definitely use /run (the %_rundir macro).
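In a spec file this typically shows up in the %files section; a sketch (the directory name is made up), assuming the directory is created at runtime by tmpfiles or the service itself:
%files
# runtime directory under /run, not /var/run
%ghost %dir %{_rundir}/mydaemon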
/usr Merge
The /usr merge was a big step in the history of Linux distributions that helped to improve compatibility with other Unix and Linux systems, GNU build systems, and general upstream development.
In short, it merged and moved the content of /bin, /sbin, /lib and /lib64 into their counterpart directories under /usr (creating backward-compatibility symbolic links, of course). In openSUSE this happened around 2012.
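On a merged system this is visible directly in the filesystem, e.g. (output trimmed):
$ ls -ld /bin
lrwxrwxrwx 1 root root 7 ... /bin -> usr/bin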
SysV is dead
The only excuse for missing the fact that SysV is dead is that you’ve been in cryogenic hibernation for the last 10 years. If so, welcome to 2020: openSUSE has used systemd since openSUSE 12.3 (2013).
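For packaging, this means shipping a systemd unit file and calling the openSUSE systemd spec macros from the scriptlets instead of installing init scripts. A minimal sketch (the unit name is illustrative; systemd-rpm-macros needs to be in BuildRequires):
%pre
%service_add_pre mydaemon.service

%post
%service_add_post mydaemon.service

%preun
%service_del_preun mydaemon.service

%postun
%service_del_postun mydaemon.service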
Automatic tools for cleaning
TL;DR
- Call spec-cleaner -i mypackage.spec to clean your specfile according to the openSUSE style guide.
- Call rpmlint mypackage.rpm or inspect the rpmlint report generated after the OBS build for common packaging errors/warnings.
If you have read this far, you are probably a bit overwhelmed by all these new things in packaging. Maybe you ask yourself how you are supposed to remember all of it, or, more importantly, how you should keep all your maintained packages consistent with all these changes. We have good news for you: there are automated tools for it.
spec-cleaner
spec-cleaner is a tool that cleans an RPM spec file according to the style guide. It can put the lines in the right order, replace hardcoded paths with the correct macros, and, most importantly, replace all old macros with new ones. And it can do much more.
It is also very easy to use; just call
$ spec-cleaner -i mypackage.spec
to apply all changes directly (in place) to your spec file.
If you just want to see a diff of the changes that spec-cleaner would make, call:
$ spec-cleaner -d mypackage.spec
rpmlint
Another tool that will help you keep your package in good shape is rpmlint. It checks for common errors in RPM packages and specfiles. It can find duplicate files, check that binaries are in the proper location, keep an eye on correct library, systemd and tmpfiles packaging, and much more. Inspecting your package from top to bottom, it reports any errors or warnings.
rpmlint runs automatically during the OBS build so it can fail the whole build if there are serious problems. It works as a tool for enforcing specific standards in packages built within OBS. If you want to run it on your own, call:
$ rpmlint mypackage.rpm
Both spec-cleaner and rpmlint implement new packaging changes and rules as soon as possible, but their maintainers may occasionally miss something. In that case, feel free to report it as an issue on their GitHub.
Acknowledgment
Thanks to Simon Lees, Tomáš Chvátal, and Dominique Leuenberger for suggestions, corrections, and proofreading. This post first appeared at https://packageninjas.github.io/packaging/2020/10/13/news-in-packaging.html.
Humility
Extreme Programming describes five values: communication, feedback, simplicity, courage, and respect. I think that humility might be more important than all of these.
Humility enables compassion. Compassion both provides motivation for and maximises the return on technical practices. Humility pairs well with courage, helps us keep things simple, and makes feedback valuable.
Humility enables Compassion
Humility helps us respect the people we’re working with and see what they bring. We can’t genuinely respect them if we’re feeling superior, if we think we have all the answers.
If we have compassion for our teammates (and ourselves) we will desire to minimise their suffering.
We will want to avoid inflicting difficult merges on anyone. We will want to avoid wasting their time, or forcing them to rework things after being surprised by our changes. The practice of Continuous Integration can come from the desire to minimise suffering in this way.
We will want those who come after us in the future to be able to understand our work—understand the important behaviour and decisions we made. We’ll want them to have the best safety net possible. Tests and living documentation such as ADRs can come from this desire.
We’d desire the next person to have the easiest possible job to change or build upon what we’ve started, regardless of their skill and knowledge. Simplicity and YAGNI can come from this desire.
Humility and compassion can drive us to be curious: what are the coding and working styles and preferences of our teammates? What’s the best way to collaborate to maximise our colleagues’ effectiveness?
Without compassion we might write code that is easiest for ourselves to understand—using our preferred idioms and style without regard for how capable the rest of the team is to engage with it.
Without humility our code might show off our cleverness.
Humility to keep things simple
To embrace simplicity we have to admit that we might be wrong about what we’ll need in the future.
Humility helps us acknowledge that we will find what we build harder and harder to maintain in the future, even if we’re still part of the team. We all have limited capacity to deal with complexity.
We need humility to realise we will likely be wrong about what we’ll need in the future. We’ll have the courage to try to predict our direction, but strive for the simplest possible code to support what we have now. This will make it easier for whoever must change it when we realise how we were wrong.
Humility to value feedback
To value feedback we have to admit that we might be wrong
Why pair program if you already know what is best and have nothing to learn from others? They’ll just slow you down!
Why talk with the customer regularly to understand their needs? We’re the experts!
Why do user testing? Anybody could use this!
So many tech practices are about getting feedback fast so we can iterate on code, on product, and on our team ways of working. Humility helps us accept that we can be better.
Letting design emerge from the tests with TDD requires the humility to accept that we might not have the best design already in mind. We can’t have foreseen all the interactions with the rest of the code and necessary behaviours.
Humility maximises blamelessness and learning opportunities. We talk about blameless post incident reviews and retrospectives: focusing on understanding and learning from things that happen. Even if we don’t outwardly blame those involved it’s easy to feel slightly superior: that there’s no way we would have made the mistake that triggered the incident. A humble participant would have more compassion for those involved. A humble participant would see that they are themselves part of the system of people that has resulted in this outcome. There is always something to learn about the consequences of our own actions and inactions.
Humility pairs well with Courage
Courage is not overconfidence. Courage is not fearlessness. Courage is being able to do something even though it might be hard or scary.
With humility we know we are fallible and may be wrong. We courageously seek out feedback to learn as early as possible.
Deploying changes to production always carries certain risk; even with safety nets like tests and canary deploys. (Delaying deploys creates even more risk).
An overconfident person might avoid deploying to production until they’re finished with a large chunk of work. After all, they know what they’re doing! Figuring out how to break it down into separately deployable chunks will take more time and be inefficient.
A fearless person might fire and forget changes into production. This is a safe change after all. Click deploy; go to the pub!
A humble person, on the other hand, understands they’re working with a complex system, bigger than they can fit in their head. They understand that they can’t be certain of the results of their change, no matter the precautions they’ve taken. They have the courage to deploy anyway, acting to observe reality and find out whether their fallible prediction was correct.
The post Humility appeared first on Benji's Blog.




