openSUSE News

OpenSSL, Squid, Dracut Update in Tumbleweed

Five openSUSE Tumbleweed snapshots have been released since last Friday.

The snapshots each contained a small number of packages.

The 20220629 snapshot updated OpenSSL to version 1.1.1p. This newer version fixed CVE-2022-2068, which affected the c_rehash script; the script did not properly sanitize shell metacharacters to prevent command injection. Since some operating systems distribute the script in a manner where it is automatically executed, it could allow an attacker to execute arbitrary commands with the privileges of the script. Another package updated in the snapshot was perl-JSON 4.07, which provided some backported updates from version 4.10. New memory device types, processor upgrades, slot types, processor characteristics and more came in the update of dmidecode 3.4. There were also several table engine updates in the snapshot like ibus-table 1.16.9, ibus-table-chinese 1.8.8 and more.

A single package was updated in snapshot 20220628. The update of mpg123 1.30.0 has a new network backend using external tools/libraries to support HTTPS and the terminal control keys are now case-sensitive.

Two Python Package Index updates were released in 20220626. Missing constructors for UUID for each Bluetooth service were added in the python-qt5 5.15.7 update. The package is a comprehensive set of Python bindings for Qt v5. The other PyPI package update was python-rsa 4.8, which switched to Poetry for dependency and release management and made decryption 2-4x faster by using the Chinese Remainder Theorem when decrypting with a private key.

Text editor vim fixed an invalid memory access when using an expression on the command line in the 8.2.5154 update, and some fixes related to valgrind became available in the 20220625 snapshot. Caching proxy squid fixed some parser regressions and improved the handling of Gopher responses in version 5.6. The updated open-source printing package cups-filters 1.28.15 had improvements to identify old LaserJets more precisely and to switch to Poppler when appropriate. The 5.18.6 Linux kernel came in the snapshot as well and had several ALSA System on Chip enhancements and fixes. The kernel also had a couple of KVM for arm64 changes and handled some GNU Compiler Collection 12 warnings.

Snapshot 20220624 brought an updated dracut version, which stopped leaking shell options and put in a temporary workaround for openSUSE appliance builder kiwi. The gstreamer 1.20.3 update made some WebRTC and performance improvements; it also fixed scrambled video playback with hardware-accelerated VA-API decoders on certain Intel hardware. The D-Bus interface for user account query and manipulation, accountsservice, updated from version 0.6.55 to 22.08.8. Other packages to update in the snapshot were Imath 3.1.5, KDE’s amarok and more.

Federico Mena-Quintero

Fixing test coverage reports in at-spi2-core

Over the past weeks I have been fixing the test coverage report for at-spi2-core. It has been a bit of an adventure where I had to do these:

  • Replace one code coverage tool with another...
  • ... which was easier to modify to produce more accessible HTML reports.
  • Figure out why some of at-spi2-core's modules got 0% coverage.
  • Learn to mock DBus services.

What is a code coverage report?

In short — you run your program, or its test suite. You generate a coverage report, and it tells you which lines of code were executed in your program, and which lines weren't.

A coverage report is very useful! It lets one answer some large-scale questions:

  • Which code in my project does not get exercised by the tests?

  • If there is code that is conditionally-compiled depending on build-time options, am I forgetting to test a particular build configuration?

And small-scale questions:

  • Did the test I just added actually cause the code I am interested in to be run?

  • Are there tests for all the error paths?

You can also use a coverage report as an exploration tool:

  • Run the program by hand and do some things with it. Which code was run through your actions?

I want to be able to do all those things for the accessibility infrastructure: use the report as an exploration tool while I learn how the code works, and use it as a tool to ensure that the tests I add actually test what I want to test.

A snippet of a coverage report

This is a screenshot of the report for at-spi2-core/atspi/atspi-accessible.c:

Coverage report for the atspi_accessible_get_child_count() function

The leftmost column is the line number in the source file. The second column has the execution count and color-coding for each line: green lines were executed one or more times; red lines were not executed; white lines are not executable.

By looking at that bit of the report, we can start asking questions:

  • There is a return -1 for an error condition, which is not executed. Would the calling code actually handle this correctly, since we have no tests for it?

  • The last few lines in the function are not executed, since the check before them works as a cache. How can we test those lines, and cause them to be executed? Are they necessary, or is everything handled by the cache above them? How can we test different cache behavior?

First pass: lcov

When I initially added continuous integration infrastructure to at-spi2-core, I copied most of it from libgweather, as Emmanuele Bassi had put some nice things in it, like static code analysis, address-sanitizer, and a code coverage report via lcov.

The initial runs of lcov painted a rather grim picture: test coverage was only at about 30% of the code. However, some modules which are definitely run by the test suite showed up with 0% coverage. This is wrong; those modules definitely have code that gets executed; why isn't it showing up?

Zero coverage

At-spi2-core has some peculiar modules. It does not provide a single program or library that one can just run by hand. Instead, it provides a couple of libraries and a couple of daemons that get used through those libraries, or through raw DBus calls.

In particular, at-spi2-registryd is the registry daemon for accessibility, which multiplexes requests from assistive technologies (ATs) like screen readers into applications. It doesn't even use the session DBus; it registers itself in a separate DBus daemon specific to accessibility, to avoid too much traffic in the main session bus.

at-spi2-registryd gets started up as soon as something requires the accessibility APIs, and remains running until the user's session ends.

However, in the test runner, there is no session. The daemon runs, and gets a SIGTERM from its parent dbus-daemon when that daemon terminates. So, while at-spi2-registryd has no persistent state that it may care about saving, it doesn't exit "cleanly".

And it turns out that gcc's coverage data gets written out only if the program exits cleanly. When you compile with the --coverage option, gcc emits code that turns on the flag in libgcc to write out coverage information when the program ends (libgcc is the compiler-specific runtime helper that gets linked into normal programs compiled with gcc).

It's as if main() had a wrapper:

int main(int argc, char **argv); /* forward declaration: your program's real main() */

void main_wrapper(int argc, char **argv)
{
    int r = main(argc, argv);

    write_out_coverage_info();

    exit(r);
}

int main(int argc, char **argv)
{
    /* your program goes here */
}

Of course, if your program terminates prematurely through SIGTERM, the wrapper will not finish running and it will not write out the coverage info.
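
The same effect can be reproduced in a few lines of Ruby, with an at_exit hook standing in for libgcc's coverage dump. This is just a sketch of the mechanism, not gcc's actual machinery:

```ruby
# An at_exit hook stands in for libgcc's coverage dump: it runs on a clean
# exit, but not when the kernel's default SIGTERM action kills the process
# outright.
require "tmpdir"

dump = File.join(Dir.mktmpdir, "coverage-dump.txt")

child = fork do
  # Opt out of Ruby's own SIGTERM handling so the kernel default applies,
  # like a C program that installs no handler.
  Signal.trap("TERM", "SYSTEM_DEFAULT")
  at_exit { File.write(dump, "coverage data") }
  sleep # simulate a daemon waiting for work
end

sleep 0.5 # give the child time to set up
Process.kill("TERM", child)
Process.wait(child)

dump_written = File.exist?(dump)
puts dump_written # false: the exit-time hook never ran
```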

So, how do we simulate a session in the test runner?

Mocking gnome-session

I recently learned of a fantastic tool, python-dbusmock, which makes it really easy to create mock implementations of DBus interfaces.

There are a couple of places in at-spi2-core that depend on watching the user session's lifetime, and fortunately they only need a couple of things from the gnome-session interfaces.

I wrote a mock of these DBus interfaces so that the daemons can register against the fake session manager. Then I made the test runner ask the mock session to tell the daemons to exit when the tests are done.

With that, at-spi2-registryd gets coverage information written out properly.

Obtaining coverage for atk-adaptor

atk-adaptor is a bunch of glue code between atk, the GObject-based library that GTK3 uses to expose accessible interfaces, and libatspi, the hand-written DBus binding to the accessibility interfaces.

The tests for this are very interesting. We want to simulate an application that uses atk to make itself accessible, and to test that e.g. the screen reader can actually interface with them. Instead of creating ATK implementations by hand, there is a helper program that reads XML descriptions of accessible objects, and exposes them via ATK. Each individual test uses a different XML file, and each test spawns the helper program with the XML it needs.

Again, it turns out that the test runner just sent a SIGTERM to the helper program when each test was done. This is fine for running the tests normally, but it prevents code coverage from being written out when the helper program terminates.

So, I installed a GLib main-loop signal handler in the helper program, to make it exit cleanly when it gets that SIGTERM. Problem solved!
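
The fix can be sketched the same way as the problem: a handler that turns SIGTERM into a clean exit lets exit-time hooks (the coverage dump, in the real helper) run. This is a Ruby analogy of the C helper, not its actual code:

```ruby
# Trap SIGTERM and turn it into a clean exit, so exit-time hooks (the
# coverage dump in the real helper program) still run.
require "tmpdir"

dump = File.join(Dir.mktmpdir, "coverage-dump.txt")

child = fork do
  Signal.trap("TERM") { exit 0 } # exit cleanly instead of dying mid-flight
  at_exit { File.write(dump, "coverage data") }
  sleep # wait until the test runner tells us to go away
end

sleep 0.5 # give the child time to install its handler
Process.kill("TERM", child)
Process.wait(child)

dump_written = File.exist?(dump)
puts dump_written # true: SIGTERM was converted into a clean exit
```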

Missing coverage info for GTK2

The only part of at-spi2-core that doesn't have coverage information yet is the glue code for GTK2. I think this would require running a test program under xvfb so that its libgtk2 can load the module that provides the glue code. I am not sure if this should be tested by at-spi2-core itself, or if that should be the responsibility of GTK2.

Are the coverage reports accessible?

For a sighted person, it is easy to look at a coverage report like the example above and just look for red lines — those that were not executed.

For people who use screen readers, it is not so convenient. I asked around a bit, and Eitan Isaacson gave me some excellent tips on improving the accessibility of lcov and grcov's HTML output.

Lcov is an old tool, and I started using it for at-spi2-core because it is what libgweather already used for its CI. Grcov is a newer tool, mostly by Mozilla people, which they use for Firefox's coverage reports. Grcov is also the tool that librsvg already uses. Since I'd rather baby-sit one tool instead of two, I decided to switch at-spi2-core to use grcov as well and to improve the accessibility of its reports.

The extract from the screenshot above looks like a table with three columns (line number, execution count, source code), but it is not a real HTML <table>; it is done with div elements and styling. Something like this:

<div>
  <div class="columns">
    <div class="column">
      line number
    </div>
    <div class="column color-coding-executed">
      execution count
    </div>
    <div class="column color-coding-executed">
      <pre>source code</pre>
    </div>
  </div>
  <!-- repeat the above for each source line -->
</div>

Eitan showed me how to use ARIA tags to actually expose those divs as something that can be navigated as a table:

  • Add role="table" aria-label="Coverage report" to the main <div>. This tells web browsers to go into whatever interaction model they use for navigating tables via accessible interfaces. It also gives a label to the table, so that it is easy to find by assistive tools; for example, a screen reader may let you easily navigate to the next table in a document, and you'd like to know what the table is about.

  • Add role="row" to each row's div.

  • Add role="cell" to an individual cell's div.

  • Add an aria-label to cells with the execution count: while the sighted version shows nothing (non-executable lines), or just a red background (lines not executed), or a number with a green background, the screen reader version cannot depend on color coding alone. That aria-label will say "no coverage", or "0", or the actual execution count, respectively.
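
Putting those four changes together, the div skeleton from before ends up looking roughly like this (the aria-label="12" stands in for an actual execution count):

```html
<div role="table" aria-label="Coverage report">
  <div class="columns" role="row">
    <div class="column" role="cell">
      line number
    </div>
    <div class="column color-coding-executed" role="cell" aria-label="12">
      execution count
    </div>
    <div class="column color-coding-executed" role="cell">
      <pre>source code</pre>
    </div>
  </div>
  <!-- repeat the above for each source line -->
</div>
```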

Time will tell whether this makes reports easier to peruse. I was mainly worried about being able to scan down the source quickly to find lines that were not executed. By using a screen reader's commands for tabular navigation, you can move down the second column until you reach a "zero". Maybe there is a faster way? Advice is appreciated!

Grcov now includes that patch, yay!

Next steps

I am starting to sanitize the XML interfaces in at-spi2-core, at least in terms of how they are used in the build. Expect an update soon!

openSUSE News

openSUSE Leap 15.4 release retrospective

We are seeking feedback regarding the release of openSUSE Leap 15.4, which was released to the general public on June 8.

With this survey, what we’re looking for from you is both positive and negative feedback related to the availability of and your individual experience with openSUSE Leap 15.4. Your participation is very valuable to us.

Participating is very simple; the survey consists of only two questions. In this way we’ll try to continue doing what went well and address what went wrong. So follow this link to participate in the survey:

The deadline for collecting responses is July 7. Your responses will then be grouped, reviewed and passed to the respective openSUSE teams.

Here you can find the results from the previous retrospective.

We care about your privacy; that’s why the record of your survey responses does not contain any identifying information about you, unless a specific survey question explicitly asks for it. Feedback might be adjusted to fit our report.

Thanks a lot for being part of the openSUSE community.

openQA-Bites


openSUSE Tumbleweed – Review of the week 2022/25

Dear Tumbleweed users and hackers,

During this week, we sweated some blood. Not only was it really hot here, but we also had a gap in the snapshots delivered. It turned out that the update to SELinux 3.4 worked in most cases – but not so well with containers. We stopped rolling for a few days to figure out the fixes for that one issue before merging other, large changes. Nevertheless, we still delivered 6 snapshots this week (0616, 0617, 0618, 0619, 0622, and 0623).

The major changes delivered in the last week include:

  • Linux kernel 5.18.4
  • Mozilla Firefox 101.0.1
  • KDE Frameworks 5.95.0
  • KDE Plasma 5.25.1
  • Gimp 2.10.32
  • Inkscape 1.2
  • LibreOffice 7.3.4RC2
  • NetworkManager 1.38.2
  • krb5 1.20
  • Samba 4.16.2
  • SELinux 3.4
  • Redis 7.0.2
  • Mesa 22.1.2
  • Sphinx 5

Ok, looking at that list, it seems no wonder it got heated when combined with the SELinux issues. But it worked out pretty well in the end. The staging projects are currently not overflowing, but a few things are still in the queue:

  • systemd 251.2
  • Linux kernel 5.18.6
  • rpmlint: some work to detect more of the common errors. This will result in a few packages failing to build, but detecting those errors is worth it.

Regarding build failures: in openSUSE:Factory, we currently have 192 build errors reported. Any help in getting those fixed is appreciated.

YaST Team

Slow YaST in Container or How to Run Programs from Ruby?

Slow YaST in Container

We noticed that when running the YaST services manager module in a container, the start of the module is about 3 times slower than when running it in the host system directly. It takes about a minute to start, and that is way too much…

Root vs Non-root

This actually does not influence YaST in the end, but it is an interesting difference and might be useful for someone.

If you run this

$ time -p ruby -e '10000.times { system("/bin/true") }'
real 2.90
user 2.45
sys 0.54

and if you run the very same command as root

$ sudo time -p ruby -e '10000.times { system("/bin/true") }'
real 9.92
user 5.89
sys 4.16

it takes about triple the time! :open_mouth:

But in the end it turned out that the reason is that Ruby uses the optimized vfork() system call instead of the traditional fork(). Because of some security implications it should not be used when running as root; in that case Ruby uses the standard (and slower) fork() call. See more details in the Ruby source code.

So in the end it is not that running as root makes it 3 times slower; it is the other way round: running as non-root makes it 3 times faster. But because we almost always run YaST as root, we cannot use this trick…

Cheetah vs SCR

Ok, but why is the services manager much slower at start?

In YaST there are also other ways to run a process. You can use the SCR component (a legacy from the YCP times) or the Cheetah Ruby gem.

Cheetah

Let’s try how Cheetah works when running as different users:

$ time -p ruby -r cheetah -e '10000.times { Cheetah.run("/bin/true") }'
real 17.83
user 10.41
sys 8.73
$ sudo time -p ruby -r cheetah -e '10000.times { Cheetah.run("/bin/true") }'
real 15.74
user 9.51
sys 7.47

The numbers are roughly the same; the reason is that Cheetah always uses the less optimal fork() call, so it does not matter who runs the script.

Um, maybe we could improve the Cheetah gem to use the vfork() trick as well… :thinking:

SCR

The .target.bash agent in SCR also uses fork(), so it also does not matter which user runs it.

Benchmarking

The traditional YaST SCR component is implemented in C++ and the call needs to go through the YaST component system; the Cheetah gem, on the other hand, is native Ruby code. Additionally, SCR uses the system() call, which goes through an intermediate shell process, while Cheetah uses exec(), which executes the command directly.
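
The shell-versus-exec difference is easy to see from Ruby itself. This is not Cheetah's or SCR's actual code, just the underlying mechanism: a command string with shell metacharacters is handed to /bin/sh, while an argument array is executed directly, with no shell in between:

```ruby
# A command string with shell syntax goes through /bin/sh, which expands it;
# an argument array bypasses the shell, so the argument arrives verbatim.
shell_out  = `echo $((1 + 1))`.strip                        # /bin/sh expands the arithmetic
direct_out = IO.popen(["echo", "$((1 + 1))"], &:read).strip # no shell, literal argument

puts shell_out  # "2"
puts direct_out # "$((1 + 1))"
```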

So it would be nice to compare these two options and see how they perform. For that we wrote a small cheetah_vs_scr.rb benchmarking script. It just lists all systemd targets, and runs that many times to get more reliable results.
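
Since neither Cheetah nor SCR is usable outside of a YaST installation, here is a self-contained sketch in the spirit of cheetah_vs_scr.rb that compares only the underlying strategies: spawning through a shell (as SCR's system() does) versus executing the command directly (as Cheetah's exec() does):

```ruby
# Compare shell-mediated spawning with direct execution.  This stands in
# for the Cheetah-vs-SCR comparison; the real script calls systemctl via
# both libraries instead.
require "benchmark"

N = 200
# The "> /dev/null" metacharacters force Ruby to go through /bin/sh.
via_shell = Benchmark.realtime { N.times { system("/bin/true > /dev/null") } }
# A plain command with no metacharacters is exec'd directly, no shell.
direct    = Benchmark.realtime { N.times { system("/bin/true") } }

puts format("Number of calls: %d", N)
puts format("via shell : %.2fms per call", via_shell / N * 1000)
puts format("direct    : %.2fms per call", direct / N * 1000)
```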

Results

Running the script as root directly in the system:

# ./cheetah_vs_scr.rb
Number of calls: 1000
Cheetah   : 8.84ms per call
SCR       : 22.90ms per call

As you can see, even without any container involved the Cheetah gem is more than twice as fast!

So how does this change when running in a container and running the systemctl command in the /mnt chroot?

yast-container # ./cheetah_vs_scr.rb
Number of calls: 1000
Cheetah   : 7.30ms per call
SCR       : 91.78ms per call

As you can see, the SCR calls are more than 4 times slower. That corresponds to the slowdown we can see in the services manager. But more interesting is that the Cheetah case is actually slightly faster when running in a container! And if you compare Cheetah with SCR in a container, Cheetah is more than 10x faster! Wow!

So in this case the SCR calls are the bottleneck; the container environment itself should affect the speed only slightly. We actually do not know the exact reason for this slowdown, probably the extra chrooting… :thinking:

But as we should switch to Cheetah anyway (because it is a clean native Ruby solution), we are not interested in researching this; the slow functionality has caused trouble in only one specific case so far.

Note: this was tested on an openSUSE Leap 15.4 system; the results also heavily depend on the hardware, so you might get very different numbers on your system.

Summary

If you just execute other programs from a YaST module a few times, it probably does not matter much which way you use.

But if you call external programs a lot (a hundred times or so), then it depends on whether you are running as root or non-root. In the non-root case it is better to use the native Ruby calls (system or backticks). For root access, as is usual in YaST, prefer the Cheetah gem over the YaST SCR.

In our case it means we should update the YaST services manager module to use Cheetah; that should significantly reduce the startup delay, and it will also improve the start when running directly in the host system.

Nathan Wolf

All-in on PipeWire for openSUSE Tumbleweed

I have written about using PipeWire previously, where I had a very positive experience with it. Unfortunately, I did have some irritating quirks with it that ultimately resulted in my going back to using PulseAudio on my openSUSE Tumbleweed machines. They were little things, like needing to refresh the browser after a Bluetooth device changed […]

The syslog-ng disk-buffer

A three-part blog series:

The syslog-ng disk-buffer is one of the most often used syslog-ng options to ensure message delivery. However, it is not always necessary, and using the safest variant has serious performance impacts. If you utilize disk-buffer in your syslog-ng configuration, it is worth making sure that you use a recent syslog-ng version.

From this blog, you can learn when to use the disk-buffer option, the main differences between the reliable and non-reliable disk-buffer, and why it is worth using the latest syslog-ng version.

Read more at https://www.syslog-ng.com/community/b/blog/posts/when-not-to-use-the-syslog-ng-disk-buffer

Last time, we had an overview of the syslog-ng disk-buffer. This time, we dig a bit deeper and take a quick look at how it works, and a recent major change that helped speed up the reliable disk-buffer considerably.

Read more at https://www.syslog-ng.com/community/b/blog/posts/how-does-the-syslog-ng-disk-buffer-work

Most people expect to see how many log messages are waiting in the disk-buffer from the size of the syslog-ng disk-buffer file. While this was mostly true for earlier syslog-ng releases, in recent syslog-ng releases (3.34+) the disk-buffer file can stay large even when it is empty. This is a side effect of recent syslog-ng performance tuning.

Read more at https://www.syslog-ng.com/community/b/blog/posts/why-is-my-syslog-ng-disk-buffer-file-so-huge-even-when-it-is-empty


openSUSE News

Hack Week starts Hacking for Humanity next week

It’s back. No, not the McRib. It’s Hack Week.

The coveted Hack Week 21 runs from June 27 to July 1 and has both virtual and physical participation elements. Hack Week is put on for openSUSE contributors and gives any open-source contributor and SUSE employee a playground to experiment, innovate, collaborate and learn for an entire week.

People all over the world can create, view or join projects on hackweek.opensuse.org. The projects range from the packaging of freeware games to improving full-disk encryption, and from learning about networking to writing a software.opensuse.org replacement. There is even a project using solar panels to regulate water heating. There are more than 80 projects for this year’s Hack Week.

Companies, hobbyists and technologists are encouraged to participate; no affiliation is needed to take part in Hack Week.

The efforts are all about being innovative and providing solutions for users, developers and industry. The theme for this Hack Week is Hacking for Humanity!

Hack Week has been running since 2007 and you can find out how it works by joining a project.

Open Build Service

Token Party!

With the introduction of workflows, a wide range of integrations became available to individual users. Now those integrations are starting to get interesting at the team level too. But, until now, you could not use the same workflow token with a group of users. We’ve fixed that for you. We started off the continuous integration between OBS and GitHub/GitLab in May 2021, then made some improvements in June 2021. We introduced advanced features like reporting filters...