VM test cluster using JeOS/MinimalVM images
JeOS (Just enough OS) or MinimalVM images are minimal VM images (duh!) that can be used to quickly deploy VMs. Instead of an installation, you only need to go through a first-boot setup. This makes these images very handy if you need to spin up a bunch of test VMs, for instance when you need a custom cluster.
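For example, with libvirt this boils down to importing the downloaded image instead of running an installer (a rough sketch; the image URL and VM name are illustrative, not from the original post):

# Fetch a MinimalVM image and boot it directly; only the first-boot setup remains
wget https://download.opensuse.org/tumbleweed/appliances/openSUSE-Tumbleweed-Minimal-VM.x86_64-kvm-and-xen.qcow2
virt-install --name test-node1 --memory 2048 --vcpus 2 --import \
    --disk openSUSE-Tumbleweed-Minimal-VM.x86_64-kvm-and-xen.qcow2 \
    --osinfo opensusetumbleweed    # --os-variant on older virt-install versions

Repeat with a different --name (and a copy of the disk image per VM) for the rest of the cluster.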
BBC The Green Planet: Quality vs Content
Where should I begin? I bought a 4K Blu-Ray player last autumn. I did not plan to use it for movies: it was the cheapest way of buying a player for all of my various discs. For a couple of months, I really only listened to my CD/DVD-Audio/SACD collection on it.
While listening to TIDAL, I realized that there is a new David Attenborough series out there (the soundtrack was recommended to me by TIDAL). I found some short excerpts on the BBC Earth YouTube channel. In the series, David Attenborough explains the life of plants and also plays with plants just like a little kid. The series is available on 4K Blu-Ray discs. Fantastic, I found something to test the 4K capabilities of my Blu-Ray player.
I ordered the discs from the German Amazon, but they were shipped from Great Britain. In theory, there was a simplified customs process, so I did not have to do any paperwork or pay anything extra. In practice, the Hungarian post office billed me a nice sum as a handling fee, even though they did not have to do anything.
The box had four discs inside. The first two are 4K Blu-Ray, the other two are regular Blu-Ray. My initial reaction was that I only need the first two, and I can give away the other two to someone with a regular player. I was wrong.
Obviously, I watched the 4K version of the series. Breathtaking pictures. I have never seen video recordings of this quality before. Streaming services all use much lower bit rates. However, something was missing. You could hear Attenborough talking, but the playful old man, who made the series much more of a personal experience, was nowhere to be seen. Checking the box, I found that the 4K discs have half an hour less content than the regular discs.
So all the discs stay with me; I will not give them away. Of course, after watching the 4K version, the FullHD version seems like a cheap imitation, but the content and mood are a lot better.
Some of the missing content is available on YouTube: https://www.youtube.com/playlist?list=PL5A4nPQbUF8AzMgnRRINtVzk1I2ptf6h8
Syslog-ng 101, part 10: Parsing
This is the tenth part of my syslog-ng tutorial. Last time, we learned about syslog-ng filters. Today, we learn about message parsing using syslog-ng.
You can watch the video on YouTube, and the complete playlist at https://www.youtube.com/playlist?list=PLoBNbOHNb0i5Pags2JY6-6wH2noLaSiTb
Or you can read the rest of the tutorial as a blog post at: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-101-part-10-parsing

Compare Calling D-Bus in Ruby and Rust
For D-Installer, we already have a Ruby CLI that was created as a proof of concept. Then, as part of the hackweek, we created another one in Rust to learn a bit about Rust and get our hands dirty. Now that we are familiar with both, we want to measure the overhead of calling D-Bus methods in Rust and Ruby, to make sure that if we continue with Rust, we won't be surprised by its speed (hint: we do not expect to be, but expectations and facts may be different).
Small CLI scenario
Since we mainly want to measure overhead, we use a simple program that reads a property from D-Bus and prints it to stdout. The data structure of the property is not trivial, so the efficiency of the data marshalling is also tested. We use the D-Bus interface we have in D-Installer, and the property is a list of available base products.
The libraries used for communication with D-Bus are well known. For Ruby we use rubygem-dbus and for Rust we use zbus. To keep the code simple, we do not use advanced features of the libraries, such as generated objects/proxies, but simple direct calls.
Ruby code
require "dbus"
sysbus = DBus.system_bus
service = sysbus["org.opensuse.DInstaller.Software"]
object = service["/org/opensuse/DInstaller/Software1"]
interface = object["org.opensuse.DInstaller.Software1"]
products = interface["AvailableBaseProducts"]
puts "output: #{products.inspect}"
Rust code
use zbus::blocking::{Connection, Proxy};

fn main() {
    // Connect to the system bus and build a proxy for the D-Installer software object
    let connection = Connection::system().unwrap();
    let proxy = Proxy::new(&connection,
        "org.opensuse.DInstaller.Software",
        "/org/opensuse/DInstaller/Software1",
        "org.opensuse.DInstaller.Software1").unwrap();
    // Read the property and print it
    let res: Vec<(String, String)> = proxy.get_property("AvailableBaseProducts").unwrap();
    println!("output: {:?}", res);
}
Results
To get some reasonable numbers, we run it a hundred times and measure it with the time utility.
So here is the result for ruby 3.1:
time for i in {1..100}; do ruby dbus_measure.rb &> /dev/null; done
real 0m40.491s
user 0m18.599s
sys 0m3.823s
Here is the result for ruby 3.2:
time for i in {1..100}; do ruby dbus_measure.rb &> /dev/null; done
real 0m31.025s
user 0m16.412s
sys 0m3.441s
And for comparison, the Rust one built with --release:
time for i in {1..100}; do ./dbus_measure &> /dev/null; done
real 0m10.286s
user 0m0.254s
sys 0m0.188s
As you can see, the Rust version is much faster (roughly 0.10 s per invocation versus 0.31-0.40 s for Ruby). It is also nice to see that the cold start has been nicely improved in Ruby 3.2. We also discussed this with the ruby-dbus maintainer, and he mentioned that ruby-dbus calls introspection on the object and that there is a way to avoid this.
The overall impression is that if you want a small CLI utility that needs to call D-Bus, then Rust is much better suited for it.
Multiple calls scenario
In some cases our CLI really does just a single D-Bus call, like when you set some D-Installer option, but there are other cases, like a long-running probe that needs progress reporting, where there will be many more D-Bus calls. So we want to simulate the case where we need to query the progress multiple times during a single run.
Ruby code
require "dbus"
sysbus = DBus.system_bus
service = sysbus["org.opensuse.DInstaller.Software"]
object = service["/org/opensuse/DInstaller/Software1"]
interface = object["org.opensuse.DInstaller.Software1"]
# Read the same property a hundred times
100.times do
  interface["AvailableBaseProducts"]
end
Rust code
use zbus::blocking::{Connection, Proxy};

fn main() {
    let connection = Connection::system().unwrap();
    let proxy = Proxy::new(&connection,
        "org.opensuse.DInstaller.Software",
        "/org/opensuse/DInstaller/Software1",
        "org.opensuse.DInstaller.Software1").unwrap();
    // 0..100 runs the body a hundred times, matching Ruby's 100.times
    // (1..100 would do only 99 iterations)
    for _ in 0..100 {
        let _: Vec<(String, String)> = proxy.get_property("AvailableBaseProducts").unwrap();
    }
}
Results
We see no difference between the Ruby versions here, so we just show the times:
time ruby dbus_measure.rb
real 0m10.529s
user 0m0.372s
sys 0m0.039s
time ./dbus_measure
real 0m0.052s
user 0m0.005s
sys 0m0.003s
Here it gets even more interesting, and busctl --system monitor org.opensuse.DInstaller.Software reveals the reason.
Rust caches the properties in its proxy and does just a single D-Bus call, GetAll, to initialize all of them.
The Ruby library, on the other hand, calls introspection first and then calls Get with the property specified.
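Both patterns can be reproduced by hand with busctl. This is a rough sketch of the calls the two libraries end up making (assuming the D-Installer service is running on the system bus):

# One GetAll round trip: this is how the zbus proxy initializes its property cache
busctl --system call org.opensuse.DInstaller.Software \
    /org/opensuse/DInstaller/Software1 \
    org.freedesktop.DBus.Properties GetAll s org.opensuse.DInstaller.Software1

# One Get per property read: what ruby-dbus does after its extra Introspect call
busctl --system call org.opensuse.DInstaller.Software \
    /org/opensuse/DInstaller/Software1 \
    org.freedesktop.DBus.Properties Get ss org.opensuse.DInstaller.Software1 AvailableBaseProducts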
The same behaviour can be achieved in Ruby too, but it is more work. So even for a simple CLI that needs multiple calls to D-Bus, Rust looks fast enough. The only remaining question we need to answer is whether the Rust proxy correctly detects when a property is changed (in other words, whether it listens for change signals by default). For this reason, we created a simple Rust program with a sleep and used d-feet to change the property.
use zbus::blocking::{Connection, Proxy};
use std::{thread, time};

fn main() {
    let connection = Connection::system().unwrap();
    let proxy = Proxy::new(&connection,
        "org.opensuse.DInstaller.Software",
        "/org/opensuse/DInstaller/Software1",
        "org.opensuse.DInstaller.Software1").unwrap();
    let second = time::Duration::from_secs(1);
    // Print the property once per second; change it externally (e.g. with d-feet)
    // to check whether the cached value is refreshed
    for _ in 0..100 {
        let res: String = proxy.get_property("SelectedBaseProduct").unwrap();
        println!("output {}", res);
        thread::sleep(second);
    }
}
And we have successfully verified that everything works as it should in the Rust version.
Request Page Redesign - Diff Comments, Request Actions for Role Additions, and More
Linux Saloon | 04 Mar 2023 | News Flight 11
Cross build and packaging
Introduction
Let’s start by clarifying what we mean by cross-building and cross-packaging. Cross-compilation is the process of compiling source code on one platform, called the host, in order to generate an executable binary for a different target platform. The emphasis here is on the word “different”. The target platform may have a different CPU architecture, such as when we work on an x86 computer and want to build software for a Raspberry Pi board with an ARM CPU. But even if the target platform has the same CPU architecture as the host, there may be several other possible differences. For example, the host may be running Debian Sid, while the target may be running openSUSE Leap. Different Linux distributions may have different compilers, linkers, and run-time libraries. Even when using the same distribution as the host for the target, they may be different releases, such as openSUSE Tumbleweed and Leap. In short, nothing guarantees that the target system will have the same shared libraries as the host system.
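As a minimal sketch of the first case (the package and compiler names below are assumptions that vary by distribution and release), cross-compiling a C file for a 64-bit ARM target on an x86_64 openSUSE host could look roughly like this:

# Hypothetical example: install a cross toolchain and build hello.c for aarch64
sudo zypper install cross-aarch64-gcc13     # package name is an assumption
aarch64-suse-linux-gcc -o hello hello.c     # emits an aarch64 binary on an x86_64 host
file hello    # should report an ARM aarch64 ELF, which the x86 host cannot run directly

The binary builds on the host but only runs on the target, which is exactly why packaging it also has to happen against the target's libraries rather than the host's.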
openSUSE Tumbleweed – Review of the week 2023/09
Dear Tumbleweed users and hackers,
The weather is unpredictable and changes here from almost spring-like back to winter within a few days. In this world I am happy to have one constant: Tumbleweed delivered 7 snapshots in as many days (0223…0301). Snapshot 0226 was not announced to the mailing list, as it did not contain any changes to packages that are part of the DVD.
The snapshots delivered these changes:
- gimp 2.10.34
- Node.js 19.7.0
- SQLite 3.41.0
- KDE Plasma 5.27.1
- NetworkManager 1.42.2
- MariaDB 10.10.3
- Linux kernel 6.2.0 & linux-glibc-devel 6.2
- cURL 7.88.1
- make 4.4.1
- Mesa 23.0.0
- AppArmor 3.1.3 (fixes for log format change in kernel 6.2)
- zstd 1.5.4
The most relevant changes being prepared in staging projects are:
- Linux kernel 6.2.1
- libqt5: drop support for systems without SSE2 to fix boo#1208188 (i586 related)
- Podman 4.4.2
- KDE Gear 22.12.3
- KDE Plasma 5.27.2
- SELinux 3.5
Systemd Container and Podman in GitHub CI
As D-Installer consists of several components, like the D-Bus backend, the CLI or the web frontend, we see a need to test in CI that each component can start and communicate properly with the others. For this we use a test framework and, more importantly, GitHub CI, where we need a systemd container, which is not documented at all. In the following paragraphs we would like to share with you how we did it, so that each of you can be inspired by it or use it for your own project.
A container including Systemd
We created a testing container in our build service that includes what is needed for the backend and the frontend. After some iterations, we discovered that we depend on NetworkManager which is really coupled with systemd. Additionally, we needed to access the journal for debugging purposes, which also does not work without systemd. For those reasons, we decided to include systemd.
If you are interested, you can find the container in YaST:Head:Containers/d-installer-testing, although you should know that it has some restrictions (e.g., the first process must be systemd init).
GitHub CI
Asking Google, we found no relevant information about running a systemd container on GitHub CI. However, we read a piece of advice about using Podman for such containers due to its advanced support for systemd (which is enabled by default). But for using Podman, most of the answers suggested using self-hosted workers, something we do not want to maintain.
Just running the systemd container on GitHub CI does not work because GitHub sets the
entry point to tail -f (i.e., systemd init is not the first process). However, we
noticed that the Ubuntu VM host in GitHub comes with Podman pre-installed. So the solution
became obvious.
GitHub CI, Podman and a Systemd Container
The idea is to run Podman in steps of the GitHub CI workflow using our systemd container. We do not define the container keyword at all, but just run the container manually. Each step is then encapsulated in a podman container exec.
A configuration example looks like this:
integration-tests:
  runs-on: ubuntu-latest
  steps:
    - name: Git Checkout
      uses: actions/checkout@v3
    - name: start container
      run: podman run --privileged --detach --name dinstaller --ipc=host -v .:/checkout registry.opensuse.org/yast/head/containers/containers_tumbleweed/opensuse/dinstaller-testing:latest
    - name: show journal
      run: podman exec dinstaller journalctl -b
This snippet checks out the repository, starts the container and prints the journal's contents. The important part is to set the name of the container, so you can use it in the exec calls. Moreover, it mounts the Git checkout into the container, so it is accessible from there.
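From there, further steps are just more exec calls into the same running container, for example (hypothetical commands; the make target is an assumption, not from the project):

# Hypothetical follow-up steps, each one an exec into the container started above
podman exec dinstaller systemctl --failed                    # verify no systemd unit failed to start
podman exec dinstaller sh -c "cd /checkout && make test"     # run tests against the mounted checkout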
Of course, how to use the container to do the real testing is up to you, but at least you have a working systemd container and you can inspect the logs. You can see the full configuration in action in the pull request https://github.com/yast/d-installer/pull/425.
Remaining Issues
For the integration testing of D-Installer we still face some issues:
- Different kernel: the Ubuntu VM has a different kernel, and we found out that the device mapper kernel module is missing.
- Restricted privileges: not all actions are possible, even in a privileged container. For instance, we cannot mount the host /var/log to the container's /var/log, which was pretty convenient for getting log artifacts.
- Cannot test the whole installation: we cannot "overwrite" the host VM, so it is not possible to perform full end-to-end testing of the whole installation process. Not to mention that it might take quite some time (and for each pull request). But that's fine: our openQA instance will take care of running those tests, although we are still discussing the best way to achieve that.
Mesa, Flatpak, Plasma Update in Tumbleweed
This week openSUSE Tumbleweed users learned of the performance optimizations gained with changes for x86-64-v3 and received a few snapshots.
Some of the packages to arrive this week included software for KDE users, gamers and people beginning their Linux journey.
Snapshot 20230301 delivered a new major version of a 3D graphics library. Mesa 23.0.0 was announced by Dylan Baker, who highlighted all the community’s improvements, fixes and changes for the release. A major Link Time Optimization leak was fixed in the release, and several Radeon (RADV) driver and Zink Vulkan fixes became available with it. AppStream 0.16.1 updated its documentation and fixed some behavior with the binding helper macros. Flatpak 1.14.3 introduced splitting an upgrade into two steps for the wrapper. It also introduces the filename in an error message if an app has invalid syntax in its overrides or metadata. The Linux App Summit, which covers Flatpak, AppImage and Snap, will take place in Brno, Czech Republic, next month and is a great event to hear from developers working on cross-distro solutions in the application space. The second update of the week for sudo arrived in the snapshot; 1.9.13p2 fixed the --enable-static-sudoers option from the 1.9.13p1 update arriving in the 20230225 snapshot. An update of apparmor 3.1.3 added support for more audit.log formats, fixed a parser bug and fixed boo#1065388, which finally got resolved after a five-year period.
The 20230228 snapshot took care of a few Common Vulnerabilities and Exposures, which were addressed in the curl 7.88.1 update. Daniel Stenberg knocked out a video about the bug fixes in 7.88.1, and the video about the 7.88 release covers the CVEs, like CVE-2023-23916, which would cause a malloc bomb and make curl end up spending enormous amounts of allocated heap memory. CVE-2023-23914 and CVE-2023-23915 were also covered in his video. The kernel-source package was updated to 6.2.0, which refreshed and updated configurations, like the disabling of a misdesigned mechanism that doesn’t fit well with the v2 cgroups kernel feature. make, the utility used to maintain groups of programs, updated to version 4.4.1, which had a backward-incompatibility warning related to the visibility of a flag inside a makefile. The make release provides new features, like being able to override the built-in rules with a slightly different set of rules, and parallel builds can now be used with archives, which previously was not possible. The text editor vim 9.0.1357 update fixed several problems, including a crash when using an unset object variable and the cursor being in the wrong position with virtual text ending in a multi-byte character. The package for diagnostic, debugging and instructional userspace, strace, updated to version 6.2 and implemented collision resolution for overlapping ioctl commands from the tty and other subsystems.
The changes for 20230227 fixed some crashes with the mlterm package, and a patch for CVE-2022-24130 was added.
The 20230225 snapshot updated ImageMagick to 7.1.0.62. The image editor had some security updates, eliminated compiler warnings and addressed Block Compression 5. The update of NetworkManager 1.42.2 added a new setting to control whether to remove the automatically generated local route rule. The network package also fixed a race condition when setting the MAC address of an Open vSwitch interface. Updated translations arrived in the glib2 2.74.6 version along with some bug fixes, and the mariadb 10.10.3 release fixed crash recovery with InnoDB. The package also removed some InnoDB Buffer Pool load throttling and fixed a shutdown hang when the change buffer is corrupted. A major version of the device memory enabling project arrived in the snapshot; ndctl 76 has a new command to monitor CXL events. Other packages to update in the snapshot were sudo 1.9.13p1, yast2-security 4.5.6, zstd 1.5.4 and more.
There is no doubt that snapshot 20230224 was a Plasma snapshot; all the packages updated in it were KDE related. The Plasma 5.27.1 update had its fill of bug fixes, and a few were related to packages that would come later in the week. The Discover software center had some fixes related to Flatpak and AppStream. There were a large number of KWin changes and a couple related to Wayland. A potential crash in the screen management package libkscreen was resolved with new setting configurations. The power consumption package powerdevil fixed a bug about the charging limit.
For the next two weeks, there won’t be a Tumbleweed blog providing updates on the week’s snapshots. Tumbleweed users are encouraged to subscribe to the Factory mailing list where the release manager posts an update about the rolling release and highlights a few packages that are forthcoming for the distribution.