
YaST Team

Slow YaST in Container or How to Run Programs from Ruby?

Slow YaST in Container

We noticed that when running the YaST services manager module in a container, the start of the module is about 3 times slower than running directly in the host system. It takes about a minute to start, which is way too much…

Root vs Non-root

This actually does not influence YaST in the end, but it is an interesting difference and might be useful for someone.

If you run this

$ time -p ruby -e '10000.times { system("/bin/true") }'
real 2.90
user 2.45
sys 0.54

and if you run the very same command as root

$ sudo time -p ruby -e '10000.times { system("/bin/true") }'
real 9.92
user 5.89
sys 4.16

it takes about triple the time! :open_mouth:

In the end it turned out that the reason is that Ruby uses the optimized vfork() system call instead of the traditional fork(). However, because of some security implications vfork() should not be used when running as root, so in that case Ruby falls back to the standard (and slower) fork() call. See more details in the Ruby source code.
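If you want to verify this yourself, you can trace the process-related system calls with strace (a hedged example; depending on your Ruby and glibc versions the child creation may show up as a vfork() call or as clone() with the CLONE_VFORK flag, and as a plain clone()/fork() when running as root):

$ strace -f -e trace=%process ruby -e 'system("/bin/true")' 2>&1 | grep -E 'vfork|fork|clone'
$ sudo strace -f -e trace=%process ruby -e 'system("/bin/true")' 2>&1 | grep -E 'vfork|fork|clone'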

So in the end it is not that running as root makes it 3 times slower, it is the other way round: running as non-root makes it 3 times faster. But because we almost always run YaST as root, we cannot use this trick…

Cheetah vs SCR

OK, but why does the services manager start so much slower?

In YaST there are also other ways to run a process. You can use the SCR component (a legacy from the YCP times) or the Cheetah Ruby gem.

Cheetah

Let’s see how Cheetah behaves when running as different users:

$ time -p ruby -r cheetah -e '10000.times { Cheetah.run("/bin/true") }'
real 17.83
user 10.41
sys 8.73
$ sudo time -p ruby -r cheetah -e '10000.times { Cheetah.run("/bin/true") }'
real 15.74
user 9.51
sys 7.47

The numbers are roughly the same. The reason is that Cheetah always uses the less optimized fork() call, so there is no difference and it does not matter which user runs the script.

Um, maybe we could improve the Cheetah gem to use the vfork() trick as well… :thinking:

SCR

The .target.bash agent in SCR also uses fork(), so again it does not matter which user runs it.
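For illustration, this is roughly how running a command through the .target.bash agent looks from Ruby-based YaST code (a minimal sketch; real modules usually also evaluate the result):

require "yast"

# Run the command through the SCR .target.bash agent; the returned value is the exit code
exit_code = Yast::SCR.Execute(Yast::Path.new(".target.bash"), "/bin/true")
puts "exit code: #{exit_code}"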

Benchmarking

The traditional YaST SCR component is implemented in C++ and every call needs to go through the YaST component system; the Cheetah gem, on the other hand, is native Ruby code. Additionally, SCR uses the system() call, which goes through an intermediate shell process, while Cheetah uses exec(), which executes the command directly.

So it would be nice to compare these two options and see how they perform. For that we wrote a small cheetah_vs_scr.rb benchmarking script. It just lists all systemd targets and repeats that many times to get more reliable results.
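The following is just a simplified sketch of the idea behind the script (the iteration count, the exact systemctl arguments and the use of Ruby's Benchmark module are assumptions here, not a copy of the real cheetah_vs_scr.rb):

require "benchmark"
require "cheetah"
require "yast"

RUNS = 1000
CMD = ["systemctl", "list-units", "--type", "target", "--all"].freeze

# Run the command via the Cheetah gem (fork() + exec(), no intermediate shell)
cheetah_time = Benchmark.realtime do
  RUNS.times { Cheetah.run(*CMD, stdout: :capture) }
end

# Run the same command via the SCR .target.bash_output agent (goes through a shell)
scr_time = Benchmark.realtime do
  RUNS.times { Yast::SCR.Execute(Yast::Path.new(".target.bash_output"), CMD.join(" ")) }
end

puts "Number of calls: #{RUNS}"
puts format("Cheetah   : %.2fms per call", cheetah_time * 1000 / RUNS)
puts format("SCR       : %.2fms per call", scr_time * 1000 / RUNS)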

Results

Running the script as root directly in the system:

# ./cheetah_vs_scr.rb
Number of calls: 1000
Cheetah   : 8.84ms per call
SCR       : 22.90ms per call

As you can see, even without any container involved, the Cheetah gem is more than twice as fast!

So how does this change when running in a container and running the systemctl command in the /mnt chroot?

yast-container # ./cheetah_vs_scr.rb
Number of calls: 1000
Cheetah   : 7.30ms per call
SCR       : 91.78ms per call

As you can see, the SCR calls are more than 4 times slower. That corresponds to the slowdown we see in the services manager. More interestingly, the Cheetah case is actually slightly faster when running in a container! And if you compare Cheetah with SCR in the container, Cheetah is more than 10x faster! Wow!

So in this case the SCR calls are the bottleneck; the container environment itself should affect the speed only slightly. We do not actually know the exact reason for this slowdown, probably the extra chrooting… :thinking:

But as we should switch to Cheetah anyway (because it is a clean, native Ruby solution), we are not interested in researching this further; so far this slow functionality has caused trouble only in one specific case.

Note: This was tested on an openSUSE Leap 15.4 system. The results also heavily depend on the hardware, so you might get very different numbers on your system.

Summary

If you execute other programs from a YaST module only a few times, it probably does not matter much which way you use.

But if you call external programs a lot (a hundred times or so), then it depends on whether you are running as root or non-root. In the non-root case it is better to use the native Ruby calls (system or backticks). When running as root, as YaST usually does, prefer the Cheetah gem over the YaST SCR.
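To make the recommendation concrete, a small hedged example (the systemctl command is just a placeholder):

# Non-root scripts: plain Ruby process calls are fine and benefit from the vfork() optimization
targets = `systemctl list-units --type target --all`

# Root code (the usual YaST case): prefer the Cheetah gem over SCR
require "cheetah"
targets = Cheetah.run("systemctl", "list-units", "--type", "target", "--all", stdout: :capture)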

In our case it means we should update the YaST services manager module to use Cheetah; that should significantly reduce the startup delay. It will also improve the startup time when running directly in the host system.

Nathan Wolf

All-in on PipeWire for openSUSE Tumbleweed

I have written about using PipeWire previously where I did have a very positive experience with it. Unfortunately, I did have some irritating quirks with it that ultimately resulted in my going back to using PulseAudio on my openSUSE Tumbleweed machines. They were little things like needing to refresh the browser after a Bluetooth device changed […]

The syslog-ng disk-buffer

A three-part blog series:

The syslog-ng disk-buffer is one of the most often used syslog-ng options to ensure message delivery. However, it is not always necessary, and using the safest variant has serious performance impacts. If you use disk-buffer in your syslog-ng configuration, it is worth making sure that you use a recent syslog-ng version.

From this blog, you can learn when to use the disk-buffer option, the main differences between the reliable and non-reliable disk-buffer, and why it is worth using the latest syslog-ng version.

Read more at https://www.syslog-ng.com/community/b/blog/posts/when-not-to-use-the-syslog-ng-disk-buffer

Last time, we had an overview of the syslog-ng disk-buffer. This time, we dig a bit deeper and take a quick look at how it works, and a recent major change that helped speed up the reliable disk-buffer considerably.

Read more at https://www.syslog-ng.com/community/b/blog/posts/how-does-the-syslog-ng-disk-buffer-work

Most people expect to see how many log messages are waiting in the disk-buffer from the size of the syslog-ng disk-buffer file. While this was mostly true for earlier syslog-ng releases, for recent syslog-ng releases (3.34+) the disk-buffer file can stay large even when it is empty. This is a side effect of a recent syslog-ng performance tuning.

Read more at https://www.syslog-ng.com/community/b/blog/posts/why-is-my-syslog-ng-disk-buffer-file-so-huge-even-when-it-is-empty


openSUSE News

Hack Week starts Hacking for Humanity next week

It’s back. No, not the McRib. It’s Hack Week.

The coveted Hack Week 21 runs from June 27 to July 1 and has both virtual and physical participation elements. Hack Week is put on for openSUSE contributors and gives any open-source contributor and SUSE employees a playground to experiment, innovate, collaborate and learn for an entire week.

People all over the world can create, view or join projects on hackweek.opensuse.org. The projects range from the packaging of freeware games to improving full-disk encryption, and from gaining network-related knowledge to writing a software.opensuse.org replacement. There is even a project using solar panels to regulate water heating. There are more than 80 projects for this year’s Hack Week.

Companies, hobbyists and technologists are encouraged to participate. No affiliation is needed to participate in Hack Week.

The efforts are all about being innovative and providing solutions for users, developers and industry. The theme for this Hack Week is Hacking for Humanity!

Hack Week has been running since 2007 and you can find out how it works by joining a project.

Open Build Service

Token Party!

With the introduction of the workflows, a wide range of integrations became available for individual users. Now those integrations are starting to get interesting at the team level too. But, until now, you could not use the same workflow token with a group of users. We’ve fixed that for you. We started off the continuous integration between OBS and GitHub/GitLab in May 2021, then made some improvements in June 2021. We introduced advanced features like reporting filters...


openSUSE Tumbleweed – Review of the week 2022/24

Dear Tumbleweed users and hackers,

Week 24 again contained some public holidays for my location, but as in the past, this has never stopped Tumbleweed from rolling. After all, it is our contributors’ work that makes it go as smoothly as it does. And their great work has paid off once again, as we could finally, after months and months of preparation, testing, fixing, and redoing, switch the default python interpreter to version 3.10. This resulted in a rather large rebuild of the distro, as all python3- symbols needed to move to the correct python subpackage again. As a positive side effect, the recently introduced _FORTIFY_SOURCE=3 is now enabled on all binaries. Despite the rebuild taking a bit longer, we still managed to publish 5 snapshots this week (0609, 0611, 0612, 0613, and 0614).

The main changes delivered this week include:

  • GNOME 42.2 – the missing pieces are now finally also there
  • GTK 4.7.0
  • openssl 1.1.1o
  • KDE Gear 22.04.2
  • KDE Plasma 5.25.0
  • python 3.10 as the default python3 interpreter; pymodules are still built for python 3.8, 3.9, and 3.10

The python 3.10 switch also took a bit of a toll on the stagings, as they all needed to be rebased on top of that change and all competed for build power. That in turn means they are all filled with the new, good stuff:

  • Linux kernel 5.18.4
  • Mozilla Firefox 101.0.1
  • KDE Frameworks 5.95
  • Inkscape 1.2
  • systemd 251.2
  • SELinux 3.4
  • krb5 1.20.0: breaks samba
  • Sphinx 5: breaks qemu
openSUSE News

Community work group update post oSC22

The community workgroup (CWG) for the Adaptable Linux Platform (ALP) would like to update you on what has happened since the openSUSE Conference 2022. Make sure to check the panel discussion with the ALP steering committee and other talks from oSC22 as they become public.

First, there is a new official communication channel! We’re using the “openSUSE ALP” Matrix channel, bridged with https://discord.gg/opensuse and ircs://libera.chat/#opensuse-alp. We’ve got you covered! This is the best way to consume updates.

We have revisited the way we would like to communicate updates on ALP. The idea is to switch away from a digest report for all workgroups to something a bit easier to follow and more exciting to read.

We’ve invited SUSE’s newly appointed Engineering Director Marco Varlese to the weekly ALP CWG meeting, and we have great news to share!

We’ve received confirmation about the upcoming Roadmap. Delivery of the Proof of Concept is expected in September, and the public availability of an ALP-based product is projected for September 2023.

Next Tuesday, June 21, Marco will work with Robert Sirchia from SUSE’s Community team on a suse.com article about the strategy and vision for the Adaptable Linux Platform. We’d then like to interview individual work groups, share some updates in a story-telling way, and communicate better how they fit into the bigger picture shared by Marco. Dan Čermák will put together a list of WGs sorted by their impact on the openSUSE community. This will make sure that the interviews are as relevant as possible.

We’ve had Quality Engineering over this week. For next Tuesday, we’ve invited Jiří Šrain and Frederic Crozat from the ALP steering committee, with whom we’d like to discuss building the community part of ALP, essentially the Package Hub for ALP. The remaining items on our radar are a test instance of Fedocal and centralized documentation.

The community platform is being referred to as openSUSE ALP during its development cycle with SUSE, to stay in line with the planning of the next community edition.

openQA-Bites

BCI test tutorial

Base Container Images (BCI) are SUSE’s offering of a variety of container images suitable for building custom applications on top of SUSE Linux Enterprise (SLE). They are a suitable building platform for different container applications and are available for free without a subscription. In this blog post I’m covering how we test BCI before they are released and how you can run individual tests on them.

openSUSE News

An update from ALP Quality Engineering

Building our products in an open and transparent way allows us to rethink the way we test.

Jose Lausuch from our ALP Quality Engineering was invited to the Community Workgroup weekly meeting to speak about current plans of Quality Engineering for ALP.

Jose mentioned that the QE Workgroup would like to start testing existing ALP images with the existing MicroOS test suite. The effort is coordinated in poo#112409.

Let’s start with what we already have and run it against ALP in openqa.opensuse.org (o3). We’d initially cover KVM and self-install images for x86_64 and aarch64. VMware and possibly others would come later.

Once we have a proof of concept, other QE experts will jump in to cover specific testing areas (virtualization, containers, public cloud, yast) and contribute to ALP as well as back to upstream, which in this case is MicroOS.

We hope that this new model will raise community interest in contributing to the Quality Engineering area, as this has always been a challenge.

Jose already submitted the initial ALP job group to o3 and started adding some code to the upstream test repository.

openSUSE News

Community aims to grow communication, marketing team

The openSUSE community has been holding community meetings on a regular basis for some time, and attendees at the latest meeting expressed a desire to grow the communications and marketing team.

The project has a Telegram marketing group and a #marketing channel on Discord.

Gaining people interested in helping with marketing, communications and planning comes at a time when several Work Groups are focused on the formation of an Adaptable Linux Platform, which has yet to formalize (or change) the name for the community platform.

There are some immediate aspects that can be addressed by the team, like a thorough review of the essentials of the communications and marketing plans from previous years and how these can be adjusted or improved.

The Thursday community meeting plans to have a regular marketing segment as part of the weekly update. People interested in joining the group should reach out to @profetik777 on the Telegram marketing group.