Paying technical debt in our accessibility infrastructure - Transcript from my GUADEC talk
At GUADEC 2022 in Guadalajara I gave a talk, Paying technical debt in our accessibility infrastructure. This is a transcript for that talk.
The video for the talk starts at 2:25:06 and ends at 3:07:18; you can click on the image above and it will take you to the correct timestamp.

Hi there! I'm Federico Mena Quintero, pronouns he/him. I have been working on GNOME since its beginning. Over the years, our accessibility infrastructure has acquired a lot of technical debt, and I would like to help with that.

For people who come to GUADEC from richer countries, you may have noticed that the sidewalks here are pretty rubbish. This is a photo from one of the sidewalks in my town. The city government decided to install a bit of tactile paving, for use by blind people with canes. But as you can see, some of the tiles are already missing. The whole thing feels lacking in maintenance and unloved. This is a metaphor for the state of accessibility in many places, including GNOME.

This is a diagram of GNOME's accessibility infrastructure, which is also the one used on Linux at large, regardless of desktop. Even KDE and other desktop environments use "atspi", the Assistive Technology Service Provider Interface.
The diagram shows the user-visible stuff at the top, and the infrastructure at the bottom. In subsequent slides I'll explain what each component does. In the diagram I have grouped things in vertical bands like this:
- gnome-shell, GTK3, Firefox, and LibreOffice ("old toolkits") all use atk and atk-adaptor to talk, via DBus, to at-spi-registryd and assistive technologies like screen readers.
- More modern toolkits like GTK4, Qt5, and WebKit talk DBus directly instead of going through atk's intermediary layer.
- Orca and Accerciser (and Dogtail, which is not in the diagram) are the counterpart to the applications; they are the assistive tech that is used to perceive applications. They use libatspi and pyatspi2 to talk DBus, and to keep a representation of the accessible objects in apps.
- Odilia is a newcomer; it is a screen reader written in Rust that talks DBus directly.
The diagram has red bands to show where context switches happen when applications and screen readers communicate. For example, whenever something happens in gnome-shell, there is a context switch to dbus-daemon, and another context switch to Orca. The accessibility protocol is very chatty, with a lot of going back and forth, so these context switches probably add up — but we don't have profiling information just yet.
There are many layers of glue in the accessibility stack: atk, atk-adaptor, libatspi, pyatspi2, and dbus-daemon are things that we could probably remove. We'll explore that soon.
Now, let's look at each component separately.

For simplicity, let's look just at the path of communication between gnome-shell and Orca. We'll have these components involved: gnome-shell, atk, atk-adaptor, dbus-daemon, libatspi, pyatspi2, and finally Orca.

Gnome-shell implements its own toolkit, St, which stands for "shell toolkit". It is made accessible by implementing the GObject interfaces in atk. To make a toolkit accessible means adding a way to extract information from it in a standard way; you don't want screen readers to have separate implementations for GTK, Qt, St, Firefox, etc. For every window, regardless of toolkit, you want to have a "list children" method. For every widget you want "get accessible name", so for a button it may tell you "OK button", and for an image it may tell you "thumbnail of file.jpg". For widgets that you can interact with, you want "list actions" and "run action X", so a button may present an "activate" action, and a check button may present a "toggle" action.
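To make the idea concrete, here is a tiny Python sketch of the kind of toolkit-agnostic interface described above. The class and method names are illustrative, not the real ATK API — the point is that a screen reader only ever deals with this uniform surface, never with GTK, Qt, or St internals.

```python
# Illustrative sketch (NOT the real ATK API): a toolkit-agnostic
# accessible facade of the kind the text describes.

class Accessible:
    """What a screen reader sees, regardless of toolkit."""

    def __init__(self, name, role, actions=None, children=None):
        self.name = name                 # e.g. "OK button", "thumbnail of file.jpg"
        self.role = role                 # e.g. "button", "image", "window"
        self._actions = actions or {}    # action name -> callback
        self.children = children or []   # "list children" for containers

    def get_accessible_name(self):
        return self.name

    def list_actions(self):
        return list(self._actions)

    def run_action(self, action):
        return self._actions[action]()


# Any toolkit that implements this surface is accessible:
ok = Accessible("OK button", "button", actions={"activate": lambda: "clicked"})
window = Accessible("Save dialog", "window", children=[ok])

print([child.get_accessible_name() for child in window.children])  # ['OK button']
print(ok.list_actions())           # ['activate']
print(ok.run_action("activate"))   # clicked
```

The real interfaces are GObject interfaces in C, but the shape is the same: list children, get name, list actions, run action.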

However, ATK is just abstract interfaces for the benefit of toolkits. We need a way to ship the information extracted from toolkits to assistive tech like screen readers. The atspi protocol is a set of DBus interfaces that an application must implement; atk-adaptor is an implementation of those DBus interfaces that works by calling atk's slightly different interfaces, which in turn are implemented by toolkits. Atk-adaptor also caches some things that it already asked to the toolkit, so it doesn't have to ask again unless the toolkit notifies about a change.
Does this seem like too much translation going on? It is! We will see the reasons behind that when we talk about how accessibility was implemented many years ago in GNOME.

So, atk-adaptor ships the information via the DBus daemon. What's on the other side? In the case of Orca it is libatspi, a hand-written binding to the DBus interfaces for accessibility. It also keeps an internal representation of the information that it got shipped from the toolkit. When Orca asks, "what's the name of this widget?", libatspi may already have that information cached. Of course, the first time it does that, it actually goes and asks the toolkit via DBus for that information.
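The caching idea is roughly the following (illustrative Python, not the real libatspi API): the first query crosses the bus, later queries are answered locally, and a change notification from the toolkit invalidates the cache.

```python
class RemoteWidgetProxy:
    """Client-side stand-in for a remote accessible object.
    Caches answers so repeated queries don't cross the bus."""

    def __init__(self, fetch_name):
        self._fetch_name = fetch_name    # stands in for a DBus round trip
        self._cached_name = None

    def get_name(self):
        if self._cached_name is None:
            # First query: actually go ask the toolkit over "DBus".
            self._cached_name = self._fetch_name()
        return self._cached_name

    def on_name_changed(self):
        # The toolkit notified us of a change: drop the stale cache.
        self._cached_name = None


calls = []
def remote_get_name():
    calls.append(1)            # count simulated DBus round trips
    return "OK button"

proxy = RemoteWidgetProxy(remote_get_name)
proxy.get_name()
proxy.get_name()
print(len(calls))   # 1: the second query was served from the cache
```

This is why the protocol's change notifications matter: without them, the cache could silently go stale.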

But Orca is written in Python, and libatspi is a C library. Pyatspi2 is a Python binding for libatspi. Many years ago we didn't have an automatic way to create language bindings, so there is a hand-written "old API" implemented in terms of the "new API" that is auto-generated via GObject Introspection from libatspi.
Pyatspi2 also has a bit of logic which should probably not be there, but rather in Orca itself or in libatspi.

Finally we get to Orca. It is a screen reader written in 120,000 lines of Python; I was surprised to see how big it is! It uses the "old API" in pyatspi2.
Orca uses speech synthesis to read out loud the names of widgets, their available actions, and generally any information that widgets want to present to the user. It also implements hotkeys to navigate between elements in the user interface, or a "where am I" function that tells you where the current focus is in the widget hierarchy.

Sarah Mei tweeted "We think awful code is written by awful devs. But in reality, it's written by reasonable devs in awful circumstances."
What were those awful circumstances?

Here I want to show you some important events surrounding the infrastructure for development of GNOME.
We got a CVS server for revision control in 1997, and a Bugzilla bug tracker in 1998 when Netscape freed its source code.
Also around 1998, Tara Hernandez basically invented Continuous Integration while at Mozilla/Netscape, in the form of Tinderbox. It was a build server for Netscape Navigator in all its variations and platforms; they needed a way to ensure that the build was working on Windows, Mac, and about 7 flavors of Unix that still existed back then.
In 2001-2002, Sun Microsystems contributed the accessibility code for GNOME 2.0. See Emmanuele Bassi's talk from GUADEC 2020, "Archaeology of Accessibility" for a much more detailed description of that history (LWN article, talk video).
Sun Microsystems sold their products to big government customers, who often have requirements about accessibility in software. Sun's operating system for workstations used GNOME, so it needed to be accessible. They modeled the architecture of GNOME's accessibility code on what they already had working for Java's Swing toolkit. This is why GNOME's accessibility code is full of acronyms like atspi and atk, and vocabulary like adapters, interfaces, and factories.
Then in 2006, we moved from CVS to Subversion (svn).
Then in 2007, we get gtestutils, the unit testing framework in Glib. GNOME started in 1996; this means that for a full 11 years we did not have a standard infrastructure for writing tests!
Also, we did not have continuous integration nor continuous builds, nor reproducible environments in which to run those builds. Every developer was responsible for massaging their favorite distro into having the correct dependencies for compiling their programs, and running whatever manual tests they could on their code.
2008 comes and GNOME switches from svn to git.
In 2010-2011, Oracle acquires Sun Microsystems and fires all the people who were working on accessibility. GNOME ends up with approximately no one working on accessibility full-time, when it had about 10 people doing so before.
GNOME 3 happens, and the accessibility code has to be ported in emergency mode from CORBA to DBus.
GitHub appears in 2008, and Travis CI, probably the first generally-available CI infrastructure for free software, appears in 2011. GNOME of course is not developed there, but in its own self-hosted infrastructure (git and cgit back then, with no CI).
Jessie Frazelle invents usable containers in 2013-2015 (Docker). Finally there is a non-onerous way of getting a reproducible environment set up. Before that, who had the expertise to use Yocto to set up a chroot? In my mind, that seemed like a thing people used only if they were working on embedded systems.
But it is not until 2016 that rootless containers become available.
And it is only in 2018 that we get gitlab.gnome.org - a Git-based forge that makes it easy to contribute and review code, and have a continuous integration infrastructure. That's 21 years after GNOME started, and 16 years after accessibility first got implemented.
Before that, tooling is very primitive.

In 2015 I took over the maintainership of librsvg, and in 2016 I started porting it to Rust. A couple of years later, we got gitlab.gnome.org, and Jordan Petridis and myself added the initial CI. Years later, Dunja Lalic would make it awesome.
When I took over librsvg's maintainership, it had few tests which didn't really work, no CI, and no reproducible environment for compilation. Michael Feathers' book, "Working Effectively with Legacy Code", defines legacy code as "code without tests".
When I started working on accessibility at the beginning of this year, it had few tests which didn't really work, no CI, and no reproducible environment.
Right now, Yelp, our help system, has few tests which don't really work, no CI, and no reproducible environment.
Gnome-session right now has few tests which don't really work, no CI, and no reproducible environment.
I think you can start to see a pattern here...

This is a chart generated by the git-of-theseus tool. It shows how many lines of code got added each year, and how much of that code remained or got displaced over time.
For a project with constant maintenance, like GTK, you get a pattern like in the chart above: the number of lines of code increases steadily, and older code gradually diminishes as it is replaced by newer code.

For librsvg the picture is different. It was mostly unmaintained for a few years, so the code didn't change very much. But when it got gradually ported to Rust over the course of three or four years, what the chart shows is that all the old code shrinks to zero while new code replaces it completely. That new code has constant maintenance, and it follows the same pattern as GTK's.

Orca is more or less the same as GTK, although with much slower replacement of old code. More accretion, less replacement. That big jump before 2012 is when it got ported from the old CORBA interfaces for accessibility to the new DBus ones.

This is an interesting chart for at-spi2-core. During the GNOME2 era, when accessibility was under constant maintenance, you can see the same "constant growth" pattern. Then there is a lot of removal and turmoil in 2009-2010 as DBus replaces CORBA, followed by quick growth early in the GNOME 3 era, and then just stagnation as the accessibility team disappeared.
How do we start fixing this?

The first thing is to add continuous integration infrastructure (CI). Basically, tell a robot to compile the code and run the test suite every time you "git push".
I copied the initial CI pipeline from libgweather, because Emmanuele Bassi had recently updated it there, and it was full of good toys for keeping C code under control: static analysis, address sanitizer, code coverage reports, documentation generation. It was also a CI pipeline for a Meson-based project; Emmanuele had also ported most of the accessibility modules to Meson while he was working for the GNOME Foundation. Having libgweather's CI scripts as a reference was really valuable.
Later, I replaced that hand-written setup, which built a base Fedora container image by hand, with the Freedesktop CI templates, which are AWESOME. I copied that setup from librsvg, where Jordan Petridis had introduced it.

The CI pipeline for at-spi2-core has five stages:
- Build container images so that we can have a reproducible environment for compiling and running the tests.
- Build the code and run the test suite.
- Run static analysis, dynamic analysis, and get a test coverage report.
- Generate the documentation.
- Publish the documentation, and publish other things that end up as web pages.
Let's go through each stage in detail.

First we build a reproducible environment in which to compile the code and run the tests.
Using Freedesktop CI templates, we start with two base images for "empty distros": one for openSUSE (because that's what I use), and one for Fedora (because it provides a different build configuration).
CI templates are nice because they build the container, install the build dependencies, finalize the container image, and upload it to gitlab's container registry all in a single, automated step. The maintainer does not have to generate container images by hand in their own computer, nor upload them. The templates infrastructure is smart enough not to regenerate the images if they haven't changed between runs.
CI templates are very flexible. They can deal with containerized builds, or builds in virtual machines. They were developed by the libinput people, who need to test all sorts of varied configurations. Give them a try!
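As a rough sketch, wiring the templates into .gitlab-ci.yml looks something like this. The distribution version, tag, and package list below are made-up placeholders; the `.fdo.container-build@fedora` job and the `FDO_*` variables come from the templates themselves:

```yaml
include:
  - project: 'freedesktop/ci-templates'
    ref: 'master'                        # in practice, pin to a specific commit
    file: '/templates/fedora.yml'

variables:
  FDO_DISTRIBUTION_VERSION: '36'         # placeholder Fedora version
  FDO_DISTRIBUTION_TAG: '2022-07-01.0'   # bump this to force an image rebuild

build-fedora-container:
  extends: '.fdo.container-build@fedora'
  stage: container
  variables:
    # placeholder list of build dependencies baked into the image
    FDO_DISTRIBUTION_PACKAGES: 'gcc meson ninja-build dbus-daemon'
```

Because the tag only changes when you change it, unchanged images are reused between pipeline runs instead of being rebuilt.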

Basically, "meson setup", "meson compile", "meson install", "meson test", but with extra detail to account for the particular testing setup for the accessibility code.
One interesting thing is that for example, openSUSE uses dbus-daemon for the accessibility bus, which is different from Fedora, which uses dbus-broker instead.
The launcher for the accessibility bus thus has different code paths and configuration options for dbus-daemon vs. dbus-broker. We can test both configurations in the CI pipeline.
HELP WANTED: Unfortunately, the Fedora test job doesn't run the tests yet! This is because I haven't learned how to run that job in a VM instead of a container — dbus-broker for the session really wants to be launched by systemd, and it may just be easier to have a full systemd setup inside a VM rather than trying to run it inside a "normal" containerized job.
If you know how to work with VM jobs in Gitlab CI, we'd love a hand!

The third stage is thanks to the awesomeness of modern compilers. The low-level accessibility infrastructure is written in C, so we need all the help we can get from our tools!
We run static analysis to catch many bugs at compilation time. Uninitialized variables, trivial memory leaks, that sort of thing.
Also, address-sanitizer. C is a memory unsafe language, so catching pointer mishaps early is really important. Address-sanitizer doesn't catch everything, but it is better than nothing.
Finally, a test coverage job, to see which lines of code managed to get executed while running the test suite. We'll talk a lot more about code coverage in the following slides.

At least two jobs generate HTML and have to publish it: the documentation job, and the code coverage reports. So, we do that, and publish the result with Gitlab pages. This is "static web hosting inside Gitlab", and it makes things very easy.

Adding a CI pipeline is really powerful. You can automate all the things you want in there. This means that your whole arsenal of tools to keep code under control can run all the time, instead of only when you remember to run each tool individually, and without requiring each project member to bother with setting up the tools themselves.

The original accessibility code was written before we had a culture of ubiquitous unit tests. Refactoring the code to make it testable makes it a lot better!

In a way, it is rewarding to become the CI person for a project and learn how to make the robots do the boring stuff. It is very rewarding to see other project members start using the tools that you took care to set up for them, because then they don't have to do the same kind of setup.

It is also kind of a pain in the ass to keep the CI updated. But it's the same as keeping any other basic infrastructure running: you cannot think of going back to live without it.

Now let's talk about code coverage.
A code coverage report tells you which lines of code have been executed, and which ones haven't, after running your code. When you get a code coverage report while running the test suite, you see which code is actually exercised by the tests.
Getting to 100% test coverage is very hard, and that's not a useful goal - full coverage does not indicate the absence of bugs. However, knowing which code is not tested yet is very useful!
Code that didn't use to have a good test suite often has many code paths that are untested. You can see this in at-spi2-core. Each row in that toplevel report is for a directory in the source tree, and tells you the percentage of lines within the directory that are executed as a result of running the test suite. If you click on one row, you get taken to a list of files, from which you can then select an individual file to examine.

As you can see here, librsvg has more extensive test coverage. This is because over the last few years, we have made sure that every code path gets exercised by the test suite. It's not at 100% yet (and there are bugs in the way we obtain coverage for the c_api, for example, which is why it shows up almost uncovered), but it's getting there.
My goal is to make at-spi2-core's tests equally comprehensive.
Both at-spi2-core and librsvg use Mozilla's grcov tool to generate the coverage reports. Grcov can consume coverage data from LLVM, GCC, and others, and combine them into a single report.

Glib is a much more complex library, and it uses lcov instead of grcov. Lcov is an older tool, not as pretty, but still quite functional (in particular, it is very good at displaying branch coverage).

This is what the coverage report looks like for a single C file. Lines that were executed are in green; lines that were not executed are in red. Lines in white are not instrumented, because they produce no executable code.
The first column is the line number; the second column is the number of times each line got executed. The third column is of course the code itself.
In this extract, you can see that all the lines in the impl_GetChildren() function got executed, but none of the lines in impl_GetIndexInParent() got executed. We may need to write a test that will cause the second function to get executed.

The accessibility code needs to process a bunch of properties in DBus objects. For example, the Python code at the top of the slide queries a set of properties, and compares them against their expected values.
At the bottom, there is the coverage report. The C code that handles each property is indeed executed, but the code for the error path, that handles an invalid property name, is not covered yet; it is color-coded red. Let's add a test for that!

So, we add another test, this time for the error path in the C code. Ask for the value of an unknown property, and assert that we get the correct DBus exception back.
With that test in place, the C code that handles that error case is covered, and we are all green.
What I am doing here is to characterize the behavior of the DBus API, that is, to mirror its current behavior in the tests because that is the "known good" behavior. Then I can start refactoring the code with confidence that I won't break it, because the tests will catch changes in behavior.
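In the real pipeline, that test talks to the daemon over DBus. As a self-contained sketch, with a toy property dispatcher standing in for the C code (all names here are illustrative), the shape of such a characterization test is:

```python
# Toy stand-in for the C property dispatcher: known properties return
# their value, unknown ones take the error path, mirroring the DBus
# "unknown property" error described in the text.

class UnknownPropertyError(Exception):
    pass

PROPERTIES = {"Name": "registry", "ChildCount": 0}

def get_property(name):
    if name not in PROPERTIES:
        # This is the error path that showed up red in the coverage report.
        raise UnknownPropertyError(f"Unknown property: {name}")
    return PROPERTIES[name]


# Characterization tests: pin down the current behavior, errors included.
def test_known_property():
    assert get_property("Name") == "registry"

def test_unknown_property_raises():
    try:
        get_property("Wrong")
        assert False, "expected UnknownPropertyError"
    except UnknownPropertyError:
        pass

test_known_property()
test_unknown_property_raises()
print("ok")
```

The second test is the one that turns the error path green in the coverage report; without it, the dispatcher's failure branch never runs.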

By now you may be familiar with how Gitlab displays diffs in merge requests.
One somewhat hidden nugget is that you can also ask it to display the code coverage for each line as part of the diff view. Gitlab can display the coverage color-coding as a narrow gutter within the diff view.
This lets you answer the question, "this code changed, does it get executed by a test?". It also lets you catch code that changed but that is not yet exercised by the test suite. Maybe you can ask the submitter to add a test for it, or it can give you a clue on how to improve your testing strategy.

The trick to enable that is to use the artifacts:reports:coverage_report key in .gitlab-ci.yml. You have your tools create a coverage report in Cobertura XML format, and you give it to Gitlab as an artifact.
See the gitlab documentation on coverage reports for test coverage visualization.
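For instance, a job along these lines does the trick. The job name and the script are illustrative; the artifacts:reports:coverage_report key and coverage_format: cobertura are the parts Gitlab actually looks at:

```yaml
coverage:
  script:
    - ./run-tests-with-coverage.sh   # illustrative: produces coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```

Once the artifact is there, the diff view in merge requests gains the coverage gutter described above.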

When grcov outputs an HTML report, it creates something that looks and feels like a <table>, but which is not an HTML table. It is just a bunch of nested <div> elements with styles that make them look like a table.
I was worried about how to make it possible for people who use screen readers to quickly navigate a coverage report. As a sighted person, I can just look at the color-coding, but a blind person has to navigate each source line until they find one that was executed zero times.
Eitan Isaacson kindly explained the basics of ARIA tags to me, and suggested how to fix the bunch of <div> elements. First, give them roles like table, row, cell. This tells the browser that the elements are to be navigated and presented to accessibility tools as if they were in fact a <table>.
Then, generate an aria-label for each cell where the report shows the number of times a line of code was executed. For lines not executed, sighted people can see that this cell is just blank, but has color coding; for blind people the aria-label can be "no coverage" or "zero" instead, so that they can perceive that information.
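A simplified sketch of what the fixed markup looks like (the cell contents here are invented for illustration):

```html
<!-- The divs keep their visual styling, but the roles make them
     navigable as a real table, and the aria-label makes the blank
     "zero executions" cell perceivable by a screen reader. -->
<div role="table" aria-label="Source file coverage">
  <div role="row">
    <div role="cell">347</div>                        <!-- line number -->
    <div role="cell" aria-label="no coverage"></div>  <!-- blank cell, color-coded red -->
    <div role="cell">impl_GetIndexInParent (...)</div>
  </div>
</div>
```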
We need to make our development tools accessible, too!
You can see the pull request to make grcov's HTML more accessible.

Speaking of making development tools accessible, Mike Gorse found a bug in how Gitlab shows its project badges. All of them have an alt text of "Project badge", so for someone who uses a screen reader, it is impossible to tell whether the badge is for a build pipeline, or a coverage report, etc. This is as bad as an unlabelled image.
You can see the bug about this in gitlab.com.

One important detail: if you want code coverage information, your processes must exit cleanly! If they die from a signal (SIGTERM, SIGSEGV, etc.), then no coverage information will be written for them, and it will look as if your code got executed zero times.
This is because gcc and clang's runtime library writes out the coverage info during program termination. If your program dies before main() exits, the runtime library won't have a chance to write the coverage report.

During a normal user session, the lifetime of the accessibility daemons (at-spi-bus-launcher and at-spi-registryd) is controlled by the session manager.
However, while running those daemons inside the test suite, there is no user session! The daemons would get killed when the tests terminate, so they wouldn't write out their coverage information.
I learned to use Martin Pitt's python-dbusmock to write a minimal mock of gnome-session's DBus interfaces. With this, the daemons think that they are in fact connected to the session manager, and can be told by the mock to exit appropriately. Boom, code coverage.
I want to stress how awesome python-dbusmock is. It took me 80 lines of Python to mock the necessary interfaces from gnome-session, which is pretty great, and can be reused by other projects that need to test session-related stuff.

I am using pytest to write tests for the accessibility interfaces via DBus. Using DBus from Python is really pleasant.
For those tests, a test fixture is "an accessibility registry daemon tied to the session manager". This uses a "session manager fixture". I made the session manager fixture tear itself down by informing all session clients of a session Logout. This causes the daemons to exit cleanly, to get coverage information.

The setup for the session manager fixture is very simple; it just connects to the session bus and acquires the org.gnome.SessionManager name there.
Then we yield mock_session. This makes the fixture present itself to whatever needs to call it.
When the yield comes back, we do the teardown stage. Here we just tell all session clients to terminate, by invoking the Logout method on the org.gnome.SessionManager interface. The mock session manager sends the appropriate signals to connected clients, and the clients (the daemons) terminate cleanly.
I'm amazed at how smoothly this works in pytest.
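The overall shape of such a yield-based fixture is sketched below, with a plain stand-in class in place of the real python-dbusmock session manager so the example is self-contained; all names are illustrative.

```python
import pytest

class MockSessionManager:
    """Stand-in for the python-dbusmock object that would own
    org.gnome.SessionManager on the session bus."""

    def __init__(self):
        self.clients = []
        self.logged_out = False

    def register_client(self, callback):
        self.clients.append(callback)

    def logout(self):
        # In the real fixture this is the Logout DBus method; it signals
        # the connected clients (the daemons) so they exit cleanly and
        # get a chance to write out their coverage data.
        self.logged_out = True
        for client in self.clients:
            client("EndSession")


@pytest.fixture
def mock_session():
    session = MockSessionManager()   # setup: "acquire" the bus name
    yield session                    # hand the fixture to the test
    session.logout()                 # teardown: tell clients to terminate


def test_daemon_sees_session(mock_session):
    events = []
    mock_session.register_client(events.append)
    # ... exercise the daemons through the accessibility interfaces ...
    assert mock_session.logged_out is False
```

Everything before the yield is setup, everything after it is teardown; pytest runs the teardown even when the test fails.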

The C code for accessibility was written by hand, before the time when we had code generators to implement DBus interfaces easily. It is extremely verbose and error-prone; it uses the old libdbus directly and has to piece together every argument to a DBus call by hand.

This code is really hard to maintain. How do we fix it?

What I am doing is to split out the DBus implementations:
- First get all the arguments from DBus - marshaling goo.
- Then, the actual logic that uses those arguments' values.
- Last, construct the DBus result - marshaling goo.

If you know refactoring terminology, I "extract a function" with the actual logic and leave the marshalling code in place. The idea is to do that for the whole code, and then replace the DBus gunk with auto-generated code as much as possible.
Along the way, I am writing a test for every DBus method and property that the code handles. This will give me safety when the time comes to replace the marshaling code with auto-generated stuff.
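In Python-flavored pseudocode (the real code is C using libdbus), the result of that refactoring looks something like this; every name here is illustrative.

```python
# After "extract a function": logic and marshaling live in separate functions.

class Obj:
    """Toy accessible object with a parent/children relationship."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []


def get_index_in_parent(obj):
    """Pure logic: easy to unit-test without any DBus at all."""
    if obj.parent is None:
        return -1
    return obj.parent.children.index(obj)


def impl_GetIndexInParent(message, obj):
    """DBus entry point: only marshaling goo remains here.  Eventually
    this wrapper is what gets replaced by auto-generated code."""
    # (the real code would unpack arguments from `message` here)
    index = get_index_in_parent(obj)
    return {"type": "i", "value": index}   # construct the DBus reply


# The extracted logic can now be tested directly:
root = Obj()
child = Obj(parent=root)
root.children.append(child)
print(get_index_in_parent(child))  # 0
print(get_index_in_parent(root))   # -1
```

The marshaling wrapper stays thin and boring, which is exactly what makes it safe to swap out later.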

We need reproducible environments to build and test our code. It is not acceptable to say "works on my machine" anymore; you need to be able to reproduce things as much as possible.
Code coverage for tests is really useful! You can do many tricks with it. I am using it to improve the comprehensiveness of the test suite, to learn which code gets executed with various actions on the DBus interfaces, and as an exploratory tool in general while I learn how the accessibility code really works.
Automated builds on every push, with tests, serve us to keep the code from breaking.
Continuous integration is generally available if we choose to use it. Ask for help if your project needs CI! It can be overwhelming to add it the first time.
Let the robots do the boring work. Constructing environments reproducibly, building the code and running the tests, analyzing the code and extracting statistics from it — doing that is grunt work, and a computer should do it, not you.
"The Not Rocket Science Rule of Software Engineering" is to automatically maintain a repository of code that always passes all the tests. That, with monotonically increasing test coverage, lets you change things with confidence. The rule is described eloquently by Graydon Hoare, the original author of Rust and Monotone.
There is tooling to enforce this rule. For GitHub there is Homu; for Gitlab we use Marge-bot. You can ask the GNOME sysadmins if you would like to turn it on for your project. Librsvg and GStreamer use it very productively. I hope we can start using Marge-bot for the accessibility repositories soon.

The moral of the story is that we can make things better. We have much better tooling than we had in the early 2000s or 2010s. We can fix things and improve the basic infrastructure for our personal computing.
You may have noticed that I didn't talk much about accessibility. I talked mostly about preparing things to be able to work productively on learning the accessibility code and then improving it. That's the stage I'm at right now! I learn code by refactoring it, and all the CI stuff is to help me refactor with confidence. I hope you find some of these tools useful, too.

(That is a photo of me and my dog, Mozzarello.)
I want to thank the people that have kept the accessibility code functioning over the years, even after the rest of their team disappeared: Joanmarie Diggs, Mike Gorse, Samuel Thibault, Emmanuele Bassi.
Work Group Shifts to Feedback Session
Members of openSUSE’s Adaptable Linux Platform (ALP) community workgroup had a successful install workshop on August 2 and are transitioning to two install feedback sessions.
The first feedback session is scheduled to take place on openSUSE’s Birthday on August 9 at 14:30 UTC. The second feedback session is scheduled to take place on August 11 during the community meeting at 19:00 UTC.
Attendees of the workshop were asked to install MicroOS Desktop and temporarily use it. This is being done to gain some feedback on how people use their Operating System, which allows the work groups to develop a frame of reference for how ALP can progress.
The call for people to test the MicroOS Desktop spin has received a lot of feedback, and the workshop provided a lot more. One of the comments in the virtual feedback session was “stable base + fresh apps? sign me up”.
Two install videos were posted to the openSUSETV YouTube channel to help get people started with installing and testing MicroOS.
The video Installing Workshop Video (MicroOS Desktop) went over the expectations for ALP and then discussed experiences going through a testing spreadsheet.
The other video, which was not shown during the workshop due to time limitations, was called Installing MicroOS on a Raspberry Pi 400 and gave an overview on how to get MicroOS Desktop with a graphical interface running on the Raspberry Pi.
A final Lucid Presentation is scheduled for August 16 during the regularly scheduled workgroup.
People are encouraged to send feedback to the ALP-community-wg mailing list and to attend the feedback sessions, which will be listed in the community meeting notes.
Users can download the MicroOS Desktop at https://get.opensuse.org/microos/ and see instructions and record comments on the spreadsheet.
YaST Development Report - Chapter 6 of 2022
Time for more news from the YaST-related ALP work groups. As the first prototype of our Adaptable Linux Platform approaches we keep working on several aspects like:
- Improving Cockpit integration and documentation
- Enhancing the containerized version of YaST
- Evolving D-Installer
- Developing and documenting Iguana
It’s a quite diverse set of things, so let’s take it bit by bit.
Cockpit on ALP - the Book
Cockpit has been selected as the default tool to perform 1:1 administration of ALP systems. Easing the adoption of Cockpit on ALP is, therefore, one of the main goals of the 1:1 System Management Work Group. Since clear documentation is key, we created this wiki page explaining how to set up and start using Cockpit on ALP.
The document includes several so-called “development notes” presenting aspects we want to work on in order to improve the user experience and make the process even more straightforward. So stay tuned for more news in that regard.
Improvements in Containerized YaST
As you already know, Cockpit will not be the only way to administer an individual ALP system. Our containerized version of YaST will also be an option for those looking for a more traditional (open)SUSE approach. So we took some actions to improve the behavior of YaST on ALP.
First of all, we reduced the size of the container images as shown in the following table.
| Container | Original Size | New size | Saved Space |
|---|---|---|---|
| ncurses | 433MB | 393MB | 40MB |
| qt | 883MB | 501MB | 382MB |
| web | 977MB | 650MB | 327MB |
We detected more opportunities to reduce the size of the container images, but in most cases they would imply relatively deep changes in the YaST code or a reorganization of how we distribute the YaST components into several images. So we decided to postpone those changes a bit to focus on other aspects in the short term.
We also adapted the Bootloader and Kdump YaST modules to work containerized and we added them to the available container images.
We took the opportunity to rework a bit how YaST handles the on-demand installation of software. As you know, YaST sometimes asks to install a given set of packages in order to continue working or to configure some particular settings.
Due to some race conditions when initializing the software management stack, YaST was sometimes checking and even installing the packages in the container instead of in the host system that was being configured. That is fixed now, and it works as expected in any system in which immediate installation of packages is possible, no matter whether YaST runs natively or in a container. But that still leaves one open question: what to do in a transactional system, in which installing new software implies a reboot? We still don’t have an answer to that, but we are open to suggestions.
As you can imagine, we also invested quite some time checking the behavior of containerized YaST on top of ALP. The good news is that, apart from the already mentioned challenge with software installation, we found no big differences between running in a default (transactional) ALP system or in a non-transactional version of it. The not-so-good news is that we found some issues related to rendering of the ncurses interfaces, to sysctl configuration and to some other aspects. Most of them look relatively easy to fix, and we plan to work on that in the upcoming weeks.
Designing Network Configuration for D-Installer
Working on Cockpit and our containerized YaST is not preventing us from evolving D-Installer. If you have tested our next-generation installer, you may have noticed it does not include an option to configure the network. We invested some time lately researching the topic and designing how network configuration should work in D-Installer, both from the architectural point of view and regarding the user experience.
In the short term, we have decided to cover the two simplest use cases: regular cable and WiFi adapters. Bear in mind that Cockpit does not allow setting up WiFi adapters yet. We agreed to directly use the D-Bus interface provided by NetworkManager. At this point, there is no need to add our own network service. At the end of the installation, we will just copy the configuration from the system executing D-Installer to the new system.
The implementation will not be based on the existing Cockpit components for network configuration
since they present several weaknesses we would like to avoid. Instead, we will reuse the
components of the cockpit-wicked extension, whose architecture is better suited for the task.
D-Installer as a Set of Containers
In our previous report we announced D-Installer 0.4 as the first version with a modular architecture. So, apart from the already existing separation between the back-end and the two existing user interfaces (web UI and command-line), the back-end now consists of several interconnected components.
And we also presented Iguana, a minimal boot image capable of running containers.
Surely you already guessed what the next logical step was going to be. Exactly: D-Installer running as a set of containers! We implemented three proofs of concept to check what was possible and to research implications like memory consumption:
- D-Installer running as a single container that includes the back-end and the web interface.
- Splitting the system in two containers, one for the back-end and the other for the web UI.
- Running every D-Installer component (software handling, users management, etc.) in its own container.
It’s worth mentioning that the containers communicate with each other by means of D-Bus, using different techniques in each case. The best news is that all solutions seem to work flawlessly and are able to perform a complete installation.
And what about memory consumption? Well, it’s clearly bigger than a traditional SLE installation with YaST, but that’s expected at this point, in which no optimizations have been performed on D-Installer or the containers. On the other hand, we found no significant differences between the three mentioned proofs of concept.
- Single all-in-one container:
  - RAM used at start: 230 MiB
  - RAM used to complete installation: 505 MiB
- Two containers (back-end + web UI):
  - At start: 221 MiB (191 MiB back-end + 30 MiB web UI)
  - To complete installation: 514 MiB (478 MiB back-end + 36 MiB web UI)
- Everything as a separate container:
  - At start: 272 MiB (92 MiB software + 74 MiB manager + 75 MiB users + 31 MiB web)
  - After installation: 439 MiB (245 MiB software + 86 MiB manager + 75 MiB users + 33 MiB web)
Those numbers were measured using podman stats while executing D-Installer in a traditional openSUSE system. We will have more accurate numbers as soon as we are able to run the three proofs of concept on top of Iguana. Which takes us to…
Documentation About Testing D-Installer with Iguana
So far, Iguana can only execute a single container, which means it already works with the all-in-one D-Installer container but cannot yet be used to test the other approaches. We are currently developing a mechanism to orchestrate several containers, inspired by the one offered by GitHub Actions.
Meanwhile, we added some basic documentation to the project, including how to use it to test D-Installer.
Stay Tuned
Despite its reduced presence in this blog post, we also keep working on YaST itself besides containerization: not only fixing bugs, but also implementing new features and tools like our new experimental helper to inspect YaST logs. So stay tuned to get more updates about Cockpit, ALP, D-Installer, Iguana and, of course, YaST!
MicroOS Install Workshop, Feedback Sessions Planned
In an effort to gain more user insight and perspective for the development of the Adaptable Linux Platform (ALP), members of the openSUSE community workgroup will have a MicroOS Desktop install workshop on August 2.
There will be feedback sessions the following weeks during the community workgroup and community meeting.
Users are encouraged to install the MicroOS Desktop (ISO image 3.8 GiB) during the week of August 1. There will be a short Installation Presentation during the ALP Workgroup Meeting at 14:30 UTC on August 2 for those who need a little assistance.
During the next two weeks’ meetings, follow ups will be given with a final Lucid Presentation on August 16 during the regularly scheduled workgroup.
Users are encouraged to note how their use, installation and experience goes on a spreadsheet while also using it as a reference for tips and examples on what to try.
Workflow examples like tweaking the immutable system by running sudo transactional-update shell or podman basics with container port forwarding and podman volumes are also on the list.
The call for people to test the MicroOS Desktop spin has received a lot of feedback, like the need for documentation on Flatpak user profiles (located at ~/.var/app) and many other key points; gathering such feedback is the whole point of the exercise, as trying MicroOS yields valuable lessons for ALP.
In managing expectations, people testing MicroOS should be aware that it is based on openSUSE Tumbleweed, that YaST is not available, and that MicroOS’ release manager would appreciate contributions rather than bug reports. MicroOS currently supports two desktop environments: GNOME is supported as a Release Candidate and KDE’s Plasma is in Alpha. MicroOS Desktop is currently the closest representation of what users can expect from a next-generation desktop by SUSE.
Users can download the MicroOS Desktop at https://get.opensuse.org/microos/ and see instructions and record comments on the spreadsheet.
Community to celebrate openSUSE Birthday
The openSUSE Project is preparing to celebrate its 17th Birthday on August 9.
The project will have a 24-hour social event with attendees visiting openSUSE’s virtual Bar.
Commonly referred to as the openSUSE Bar or slash bar (/bar), openSUSE’s Jitsi instance has become a frequent virtual hang out with regulars and newcomers.
People who like or use openSUSE distributions and tools are welcome to visit, hang out and chat with other attendees during the celebration.
This special 24-hour celebration has no last call and will have music and other social activities like playing a special openSUSE skribble.io edition; it’s a game where people draw and guess openSUSE and open source themed topics.
People can find the openSUSE Bar at https://meet.opensuse.org/bar.
There will be an Adaptable Linux Platform (ALP) Work Group (WG) feedback session on openSUSE’s Birthday on August 9 at 14:30 UTC taking place on https://meet.opensuse.org/meeting. The session follows an install workshop from August 2 that went over installing MicroOS Desktop. The session is designed to gain feedback on how people use their Operating System to help progress ALP development.
The openSUSE Project came into existence on August 9, 2005. An email about the launch of a community-based Linux distribution was sent out on August 3, and the announcement was made during LinuxWorld in San Francisco, which was at the Moscone Convention Center from August 8 to 11, 2005. The email preceding the launch discussed upgrading to KDE 3.4.2 on SuSE 9.3.
Berlin Mini GUADEC
The Berlin hackfest and conference wasn't a polished, well-organized experience like the usual GUADEC; it had the perfect Berlin flavor. The attendance topped my expectations, and smaller groups formed to work on different aspects of the OS: GNOME Shell's quick settings, the mobile form factor, non-overlapped window management, Flathub and GNOME branding, video subsystems and others.
I've shot a few photos and edited the short video above. Even the music was done during the night sessions at C-Base.
Big thanks to the C-Base crew for taking good care of us in all aspects -- audio-visual support for the talks and following the main event in Mexico, refreshments and even outside seating. Last but not least, thanks to the GNOME Foundation for sponsoring the travel (mostly on the ground).


GNOME at 25: A health checkup
Around the end of 2020, I looked at GNOME's commit history as a proxy for the project's overall health. It was fun to do and hopefully not too boring to read. A year and a half went by since then, and it's time for an update.
If you're seeing these cheerful-as-your-average-wiphala charts for the first time, the previous post does a better job of explaining things. Especially so, the methodology section. It's worth a quick skim.
What's new
- Fornalder gained the ability to assign cohorts by file suffix, path prefix and repository.
- It filters out more duplicate authors.
- It also got better at filtering out duplicate and otherwise bogus commits.
- I added the repositories suggested by Allan and Federico in this GitHub issue (diff).
- Some time passed.
Active contributors, by generation

As expected, 2020 turned out interesting. First-time contributors were at the gates, numbering about 200 more than in previous years. What's also clear is that they mostly didn't stick around. The data doesn't say anything about why that is, but you could speculate that a work-from-home regime followed by a solid staycation is a state of affairs conducive to finally scratching some tangential — and limited — software-themed itch, and you'd sound pretty reasonable. Office workers had more time and workplace flexibility to ponder life's great questions, like "why is my bike shed the wrong shade of beige" or perhaps "how about those commits". As one does.
You could also argue that GNOME did better at merging pull requests, and that'd sound reasonable too. Whatever the cause, more people dipped their toes in, and that's unequivocally good. How to improve? Rope them into doing even more work! And never never let them go.
2021 brought more of the same. Above the 2019 baseline, another 200 new contributors showed up, dropped patches and bounced.
Active contributors, by affiliation

Unlike last time, I've included the Personal and Other affiliations for this one, since it puts corporate contributions in perspective; GNOME is a diverse, loosely coupled project with a particularly long and fuzzy tail. In terms of how spread out the contributor base is across the various domains, it stands above even other genuine community projects like GNU and the Linux kernel.
Commit count, by affiliation

To be fair, the volume of contributions matters. Paid developers punch way above their numbers, and as we've seen before, Red Hat throws more punches than anyone. Surely this will go on forever (nervous laugh).
Eazel barely made the top-15 cut the last time around. It's now off the list. That's what you get for pushing the cloud, a full decade ahead of its time.
Active contributors, by repository

Slicing the data per repository makes for some interesting observations:
- Speaking of Eazel… Nautilus may be somewhat undermaintained for what it is, but it's seen worse. The 2005-2007 collapse was bad. In light of this, the drive to reduce complexity (cf. eventually removing the compact view etc) makes sense. I may have quietly gnashed my teeth at this at one point, but these days, Nautilus is very pleasant to use for small tasks. And for Big Work, you've got your terminal, a friendly shell and the GNU Coreutils. Now and forever.
- Confirming what "everyone knows", the maintainership of Evolution dwindled throughout the 2010s to the point where only Milan Crha is heroically left standing. For those of us who drank long and deep of the kool-aid it's something to feel apprehensive (and somewhat guilty) about.
- Vala played an interesting part in the GNOME infrastructure revolution of 2009-2011. Then it sort of… waned? Sure, Rust's the hot thing now, but I don't think it could eat its entire lunch.
- GLib is seriously well maintained!
Commit count, by repository

With commit counts, a few things pop out that weren't visible before:
- There's the not at all conspicuously named Tracker, another reminder of how transformative the 2009-2011 time frame really was.
- The mid-2010s come off looking sort of uneventful and bland in most of the charts, but Builder bucked that trend bigly.
- Notice the big drop in commits from 2020 to 2021? It's mostly just the GTK team unwinding (presumably) after the 4.0 release.
Active contributors, by file suffix

I devised this one mainly to address a comment from Smeagain. It's a valid concern:
There are a lot of people translating, with each getting a single commit for whatever has been translated. During the year you get larger chunks of text to translate, then shortly before the release you finish up smaller tasks, clean up translations, and you end up with lots of commits for a lot of work, but it's not code. Not to discount translations, but you have a lot of very small commits.
I view the content agnosticism as a feature: We can't tell the difference in work investment between two code commits (perhaps a one-liner with a week of analysis behind it vs. a big chunk of boilerplate being copied in from some other module/snippet database), so why would we make any assumptions about translations? Maybe the translator spent an hour reviewing their strings, found a few that looked suspicious, broke out their dictionary, called a friend for advice on best practice and finally landed a one-line diff.
Therefore we treat content type foo the same as content type bar, big commits the same as small commits, and when tallying authors, few commits the same as many — as long as you have at least one commit in the interval (year or month), you'll be counted.
However! If you look at the commit logs (and relevant infrastructure), it's pretty clear that hackers and translators operate as two distinct groups. And maybe there are more groups out there that we didn't know about, or the nature of the work changed over time. So we slice it by content type, or rather, file suffix (not quite as good, but much easier). For files with no suffix separator, the suffix is the entire filename (e.g. Makefile).
A subtlety: Since each commit can touch multiple file types, we must decide what to do about e.g. a commit touching 10 .c files and 2 .py files. Applying the above agnosticism principle, we identify it as doing something with these two file types and assign them equal weight, resulting in .5 c commits and .5 py commits. This propagates up to the authors, so if in 2021 you made the aforementioned commit plus another one that's entirely vala, you'll tally as .5 c + .5 py + 1.0 vala, and after normalization you'll be a ¼ c, ¼ py and ½ vala author that year. It's not perfect (sensitive to what's committed together), but there are enough commits that it evens out.
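The weighting rule above is simple enough to sketch in code. The following is a hypothetical Python re-implementation for illustration only (Fornalder's actual code is not shown here, and the function and variable names are mine):

```python
from collections import defaultdict

def suffix_weights(commits):
    """commits: list of (author, files touched).
    Split each commit evenly across the file suffixes it touches,
    then normalize each author's tally to sum to 1.0."""
    totals = defaultdict(lambda: defaultdict(float))
    for author, files in commits:
        # Suffix = text after the last '.'; with no separator the
        # whole filename is the suffix (e.g. "Makefile").
        suffixes = {f.rsplit(".", 1)[-1] for f in files}
        for s in suffixes:
            totals[author][s] += 1.0 / len(suffixes)
    for by_suffix in totals.values():
        total = sum(by_suffix.values())
        for s in by_suffix:
            by_suffix[s] /= total
    return totals

w = suffix_weights([
    ("alice", ["a.c", "b.c", "x.py"]),  # counts as .5 c + .5 py
    ("alice", ["lib.vala"]),            # counts as 1.0 vala
])
# alice ends up as a ¼ c, ¼ py, ½ vala author.
```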
Anyway. What can we tell from the resulting chart?
- Before Git, commit metadata used to be maintained in-band. This meant that you had to paste the log message twice (first to the ChangeLog and then as CVS commit metadata). With everyone committing to ChangeLogs all the time, it naturally (but falsely) reads as an erstwhile focal point for the project. I'm glad that's over.
- GNOME was and is a C project. Despite all manner of rumblings, its position has barely budged in 25 years.
- Autotools, however, was attacked and successfully dethroned. Between 2017 and 2021, `ac` and `am` gradually yielded to Meson's `build`.
- Finally, translators (`po`) do indeed make up a big part of the community. There's a buried surprise here, though: Comparing 2010 to 2021, this group shrank a lot. Since translations are never "done" — in fact, for most languages they are in a perpetual state of being rather far from it — it's a bit concerning.
The bigger picture
I've warmed to Philip's astute observation:
Thinking about this some more, if you chop off the peak around 2010, all the metrics show a fairly steady number of contributors, commits, etc. from 2008 through to the present. Perhaps the interpretation should not be that GNOME has been in decline since 2010, but more that the peak around 2010 was an outlier.
Some of the big F/OSS projects have trajectories that fit the following pattern:
f(x) = ∑ [n = 1:x] (R · (1 – a)^(n – 1))
That is, each year R new contributors are being recruited, while a fraction a of the existing contributors leave. R and a are both fairly constant, but since attrition increases with project size while recruitment depends on external factors, they tend to find an equilibrium where they cancel each other out.
For GNOME, you could pick e.g. R = 130 and a = .15, and you'd come close. Then all you'd need is some sharpie magic, and…
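Tabulating the model makes the equilibrium visible. A quick Python sketch (R = 130 and a = .15 are the eyeballed values above, not fitted parameters):

```python
def active(x, R=130, a=0.15):
    """Contributors active in year x: each of the x yearly cohorts
    of R recruits has decayed by a fraction a per elapsed year."""
    return sum(R * (1 - a) ** (n - 1) for n in range(1, x + 1))

for year in (1, 5, 10, 25):
    print(year, round(active(year)))
# The series climbs toward the equilibrium R / a ≈ 867 contributors.
```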

Not a bad fit. Happy 25th, GNOME.
Playing with Common Expression Language
Common Expression Language (CEL) is an expression language created by Google. It lets you define constraints that can be used to validate input data.
This language is being used by some open source projects and products, like:
- Google Cloud Certificate Authority Service
- Envoy
- There’s even a Kubernetes Enhancement Proposal that would use CEL to validate Kubernetes’ CRDs.
I’ve been looking at CEL for some time, wondering how hard it would be to write Kubewarden validation policies using this expression language.
Some weeks ago SUSE Hackweek 21 took place, which gave me some time to play with this idea.
This blog post describes the first step of this journey. Two other blog posts will follow.
Picking a CEL runtime
Currently the only mature implementations of the CEL language are written in Go and C++.
Kubewarden policies are implemented using WebAssembly modules.
The official Go compiler isn’t yet capable of producing WebAssembly modules that can be run outside of the browser. TinyGo, an alternative implementation of the Go compiler, can produce WebAssembly modules targeting the WASI interface. Unfortunately TinyGo doesn’t yet support the whole Go standard library. Hence it cannot be used to compile cel-go.
Because of that, I was left with no other choice than to use the cel-cpp runtime.
C and C++ can be compiled to WebAssembly, so I thought everything would be fine.
Spoiler alert: this didn’t turn out to be “fine”, but that’s for another blog post.
CEL and protobuf
CEL is built on top of protocol buffer types. That means CEL expects the input data (the one to be validated by the constraint) to be described using a protocol buffer type. In the context of Kubewarden this is a problem.
Some Kubewarden policies focus on a specific Kubernetes resource; for example,
all the ones implementing Pod Security Policies are inspecting only Pod resources.
Others, like the ones looking at labels or annotations attributes, are instead
evaluating any kind of Kubernetes resource.
Forcing a Kubewarden policy author to provide a protocol buffer definition of the object to be evaluated would be painful. Luckily, CEL evaluation libraries are also capable of working against free-form JSON objects.
The grand picture
The long term goal is to have a CEL evaluator program compiled into a WebAssembly module.
At runtime, the CEL evaluator WebAssembly module would be instantiated and would receive as input three objects:
- The validation logic: a CEL constraint
- Policy settings (optional): these would provide a way to tune the constraint. They would be delivered as a JSON object
- The actual object to evaluate: this would be a JSON object
Having set the goals, the first step is to write a C++ program that takes as input a CEL constraint and applies that against a JSON object provided by the user.
There’s going to be no WebAssembly today.
Taking a look at the code
In this section I’ll go through the critical parts of the code. I’ll do that to help other people who might want to make a similar use of cel-cpp.
There’s basically zero documentation about how to use the cel-cpp library. I had to learn how to use it by looking at the excellent test suite. Moreover, the topic of validating a JSON object (instead of a protocol buffer type) isn’t covered by the tests. I just found some tips inside of the GitHub issues and then I had to connect the dots by looking at the protocol buffer documentation and other pieces of cel-cpp.
TL;DR The code of this POC can be found inside of this repository.
Parse the CEL constraint
The program receives a string containing the CEL constraint and has to
use it to create a CelExpression object.
This is pretty straightforward, and is done inside of these lines
of the evaluate.cc file.
As you will notice, cel-cpp makes use of the Abseil library. A lot of cel-cpp APIs return absl::StatusOr objects:
// invoke API
auto parse_status = cel_parser::Parse(constraint);
if (!parse_status.ok())
{
// handle error
std::string errorMsg = absl::StrFormat(
"Cannot parse CEL constraint: %s",
parse_status.status().ToString());
return EvaluationResult(errorMsg);
}
// Obtain the actual result
auto parsed_expr = parse_status.value();
Handle the JSON input
cel-cpp expects the data to be validated to be loaded into a CelValue
object.
As I said before, we want the final program to read a generic JSON object as input data. Because of that, we need to perform a series of transformations.
First of all, we need to convert the JSON data into a protobuf::Value object.
This can be done using the protobuf::util::JsonStringToMessage
function.
This is done by these lines
of code.
Next, we have to convert the protobuf::Value object into a CelValue one.
The cel-cpp library doesn’t offer any helper for this. As a matter of fact, one of
the oldest open issues of cel-cpp
is exactly about that.
This last conversion is done using a series of helper functions I wrote inside
of the proto_to_cel.cc file.
The code relies on the introspection capabilities of protobuf::Value to
build the correct CelValue.
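The shape of that recursive conversion is easier to see in a language-neutral sketch. The Python below is purely illustrative (the real helper is C++ and produces CelValue objects); it just tags each node of a parsed-JSON tree by kind, the way proto_to_cel.cc walks a protobuf::Value:

```python
def to_cel_value(node):
    """Recursively tag a parsed-JSON tree by kind (illustration of
    the protobuf::Value -> CelValue walk, not the actual C++ code)."""
    if node is None:
        return ("null", None)
    if isinstance(node, bool):           # must check bool before numbers
        return ("bool", node)
    if isinstance(node, (int, float)):
        return ("double", float(node))   # JSON numbers are doubles
    if isinstance(node, str):
        return ("string", node)
    if isinstance(node, list):
        return ("list", [to_cel_value(v) for v in node])
    if isinstance(node, dict):
        return ("map", {k: to_cel_value(v) for k, v in node.items()})
    raise TypeError(f"unsupported JSON node: {node!r}")

to_cel_value({"path": "v1", "ids": [1, 2]})
```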
Evaluate the constraint
Once the CEL expression object has been created, and the JSON data has been converted into a CelValue, there’s only one last thing to do: evaluate the constraint against the input.
First of all we have to create a CEL Activation object and insert the
CelValue holding the input data into it. This takes just a few lines of code.
Finally, we can use the Evaluate method of the CelExpression instance
and look at its result. This is done by these lines of code,
which include the usual pattern that handles absl::StatusOr<T> objects.
The actual result of the evaluation is going to be a CelValue that holds
a boolean type inside of itself.
Building
This project uses the Bazel build system. I had never used Bazel before, which proved to be another interesting learning experience.
A recent C++ compiler is required by cel-cpp. You can use either gcc (version 9+) or clang (version 10+). Personally, I’ve been using clang 13.
Building the code can be done in this way:

CC=clang bazel build //main:evaluator

The final binary can be found under bazel-bin/main/evaluator.
Usage
The program loads a JSON object called request which is then embedded
into a bigger JSON object.
This is the input received by the CEL constraint:
{
"request": < JSON_OBJECT_PROVIDED_BY_THE_USER >
}

The idea is to later add another top-level key called settings. This one would be used by the user to tune the behavior of the constraint.
Because of that, the CEL constraint must access the request values by going through the request. prefix.
This is easier to explain by using a concrete example:
./bazel-bin/main/evaluator \
--constraint 'request.path == "v1"' \
--request '{ "path": "v1", "token": "admin" }'

The CEL constraint is satisfied because the path key of the request
is equal to v1.
On the other hand, this evaluation fails because the constraint is not satisfied:
$ ./bazel-bin/main/evaluator \
--constraint 'request.path == "v1"' \
--request '{ "path": "v2", "token": "admin" }'
The constraint has not been satisfied

The constraint can also be loaded from a file. Create a file
named constraint.cel with the following contents:
!(request.ip in ["10.0.1.4", "10.0.1.5", "10.0.1.6"]) &&
((request.path.startsWith("v1") && request.token in ["v1", "v2", "admin"]) ||
(request.path.startsWith("v2") && request.token in ["v2", "admin"]) ||
(request.path.startsWith("/admin") && request.token == "admin" &&
request.ip in ["10.0.1.1", "10.0.1.2", "10.0.1.3"]))

Then create a file named request.json with the following contents:
{
"ip": "10.0.1.4",
"path": "v1",
"token": "admin"
}

Then run the following command:
./bazel-bin/main/evaluator \
--constraint_file constraint.cel \
--request_file request.json

This time the constraint will not be satisfied.
Note: I find the _ symbols inside of the flag names a bit weird, but this is what the Abseil flags library that I experimented with does. 🤷
Let’s evaluate a different kind of request:
./bazel-bin/main/evaluator \
--constraint_file constraint.cel \
--request '{"ip": "10.0.1.1", "path": "/admin", "token": "admin"}'

This time the constraint will be satisfied.
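Both outcomes are easy to double-check by transcribing constraint.cel into a plain Python predicate (my own transcription, purely illustrative; this is not how the evaluator works internally):

```python
def allowed(r):
    # Direct transcription of constraint.cel
    return (r["ip"] not in ["10.0.1.4", "10.0.1.5", "10.0.1.6"]) and (
        (r["path"].startswith("v1") and r["token"] in ["v1", "v2", "admin"])
        or (r["path"].startswith("v2") and r["token"] in ["v2", "admin"])
        or (r["path"].startswith("/admin") and r["token"] == "admin"
            and r["ip"] in ["10.0.1.1", "10.0.1.2", "10.0.1.3"])
    )

allowed({"ip": "10.0.1.4", "path": "v1", "token": "admin"})      # False: ip is deny-listed
allowed({"ip": "10.0.1.1", "path": "/admin", "token": "admin"})  # True
```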
Summary
This has been a stimulating challenge.
Getting back to C++
I hadn’t written big chunks of C++ code in a long time! Actually, I had never had a chance to look at the latest C++ standards. I gotta say, lots of things changed for the better, but I still prefer to pick other programming languages 😅
Building the universe with Bazel
I had prior experience with autoconf & friends, qmake and cmake, but I had never used Bazel before.
As a newcomer, I found the documentation of Bazel quite good. I appreciated
how easy it is to consume libraries that are using Bazel. I also like how
Bazel can solve the problem of downloading dependencies, something
you had to solve on your own with cmake and similar tools.
The concept of building inside of a sandbox, with all the dependencies vendored, is interesting but can be kinda scary. Try building this project and you will see that Bazel seems to be downloading the whole universe. I’m not kidding, I’ve spotted a Java runtime, a Go compiler plus a lot of other C++ libraries.
The bazel build command gives a nice progress bar. However, the number of tasks to
be done keeps growing during the build process. It kinda reminded me of the old
Windows progress bar!
I gotta say, I regularly have this feeling of “building the universe” with Rust, but Bazel took that to the next level! 🤯
Code spelunking
Finally, I had to do a lot of spelunking inside of different C++ code bases: envoy, protobuf’s c++ implementation, cel-cpp and Abseil to name a few. This kind of activity can be a bit exhausting, but it’s also a great way to learn from the others.
What’s next?
Well, in a couple of weeks I’ll blog about my next step of this journey: building C++ code to standalone WebAssembly!
Now I need to take some deserved vacation time 😊!
⛰️ 🚶👋
Community Work Group Discusses Next Edition
Members of openSUSE had a visitor for a recent Work Group (WG) session who provided the community with an update from one of the leaders focusing on the development of the next-generation distribution.
SUSE and the openSUSE community have a steering committee and several Work Groups (WG) collectively innovating what is being referred to as the Adaptable Linux Platform (ALP).
SUSE’s Frederic Crozat, who is one of the ALP architects and part of the ALP steering committee, joined in the exchange of ideas and opinions, as well as provided some insight to the group about moving technical decisions forward.
The vision is to take a step beyond what SUSE does with modules in SUSE Linux Enterprise (SLE) 15. This is not really seen on the openSUSE side; on the SLE side it’s a bit different, but the point is to be more flexible and agile with development. The way to get there is not yet fully decided, but one thing that is certain is that containerization is one of the easiest ways to ensure adaptability.
“If people have their own workloads in containers or VMs, the chance of breaking their system is way lower,” openSUSE Leap release manager Lubos Kocman pointed out in the session. “We’ll need to make sure that users can easily search and install ‘workloads’ from trusted parties.”
In some cases, people break their systems by consuming software outside of the distribution repository. Some characterizations of ALP have described it as strictly using containers such as Flatpaks, but it’s better to refer to these items as workloads. There were some efforts planned for Hack Week to provide documentation regarding the workload builds.
There was confirmation that there will be no migration path from SLES to ALP, which most think would mean the same for Leap to ALP; however, this does not necessarily apply to openSUSE, as people are not being stopped from writing upgrade scripts.
Working on the community edition is planned after the proof of concept, but the community is actively involved with developing the proof of concept. Some points brought up were that the initial build will aim at Intel architecture and then expand later to other architectures.
The community platform is being referred to as openSUSE ALP during its development cycle with SUSE, to stay consistent with planning for the next community edition.