Work Group Shifts to Feedback Session
Members of openSUSE’s Adaptable Linux Platform (ALP) community workgroup had a successful install workshop on August 2 and are transitioning to two install feedback sessions.
The first feedback session is scheduled to take place on openSUSE’s Birthday on August 9 at 14:30 UTC. The second feedback session is scheduled to take place on August 11 during the community meeting at 19:00 UTC.
Attendees of the workshop were asked to install MicroOS Desktop and use it temporarily. This is being done to gather feedback on how people use their operating system, which gives the work groups a frame of reference for how ALP can progress.
The call for people to test the MicroOS Desktop spin has generated plenty of feedback, and the workshop provided even more. One of the comments in the virtual feedback session was “stable base + fresh apps? sign me up”.
Two install videos were posted to the openSUSETV YouTube channel to help get people started with installing and testing MicroOS.
The video Installing Workshop Video (MicroOS Desktop) went over the expectations for ALP and then discussed experiences going through a testing spreadsheet.
The other video, which was not shown during the workshop due to time limitations, was called Installing MicroOS on a Raspberry Pi 400 and gave an overview of how to get MicroOS Desktop with a graphical interface running on the Raspberry Pi.
A final Lucid Presentation is scheduled for August 16 during the regularly scheduled workgroup meeting.
People are encouraged to send feedback to the ALP-community-wg mailing list and to attend the feedback sessions, which will be listed in the community meeting notes.
Users can download MicroOS Desktop at https://get.opensuse.org/microos/ and see instructions and record comments on the spreadsheet.
YaST Development Report - Chapter 6 of 2022
Time for more news from the YaST-related ALP work groups. As the first prototype of our Adaptable Linux Platform approaches we keep working on several aspects like:
- Improving Cockpit integration and documentation
- Enhancing the containerized version of YaST
- Evolving D-Installer
- Developing and documenting Iguana
It’s quite a diverse set of topics, so let’s take it bit by bit.
Cockpit on ALP - the Book
Cockpit has been selected as the default tool to perform 1:1 administration of ALP systems. Easing the adoption of Cockpit on ALP is, therefore, one of the main goals of the 1:1 System Management Work Group. Since clear documentation is key, we created this wiki page explaining how to set up and start using Cockpit on ALP.
The document includes several so-called “development notes” presenting aspects we want to work on in order to improve the user experience and make the process even more straightforward. So stay tuned for more news in that regard.
Improvements in Containerized YaST
As you already know, Cockpit will not be the only way to administer an individual ALP system. Our containerized version of YaST will also be an option for those looking for a more traditional (open)SUSE approach. So we took some actions to improve the behavior of YaST on ALP.
First of all, we reduced the size of the container images as shown in the following table.
| Container | Original Size | New Size | Saved Space |
|---|---|---|---|
| ncurses | 433MB | 393MB | 40MB |
| qt | 883MB | 501MB | 382MB |
| web | 977MB | 650MB | 327MB |
We detected more opportunities to reduce the size of the container images, but in most cases they would imply relatively deep changes in the YaST code or a reorganization of how we distribute the YaST components into several images. So we decided to postpone those changes a bit to focus on other aspects in the short term.
We also adapted the Bootloader and Kdump YaST modules to work containerized and we added them to the available container images.
We took the opportunity to rework a bit how YaST handles the on-demand installation of software. As you know, YaST sometimes asks to install a given set of packages in order to continue working or to configure some particular settings.
Due to some race conditions when initializing the software management stack, YaST was sometimes checking and even installing the packages in the container instead of in the host system being configured. That is fixed now, and it works as expected in any system in which immediate installation of packages is possible, no matter whether YaST runs natively or in a container. But that still leaves one open question: what to do in a transactional system, in which installing new software implies a reboot? We still don’t have an answer to that, but we are open to suggestions.
As you can imagine, we also invested quite some time checking the behavior of containerized YaST on top of ALP. The good news is that, apart from the already mentioned challenge with software installation, we found no big differences between running in a default (transactional) ALP system and in a non-transactional version of it. The not-so-good news is that we found some issues related to rendering of the ncurses interfaces, to sysctl configuration and to some other aspects. Most of them look relatively easy to fix, and we plan to work on that in the upcoming weeks.
Designing Network Configuration for D-Installer
Working on Cockpit and our containerized YaST is not preventing us from continuing to evolve D-Installer. If you have tested our next-generation installer, you may have noticed it does not include an option to configure the network. We invested some time lately researching the topic and designing how network configuration should work in D-Installer, both from the architectural point of view and regarding the user experience.
In the short term, we have decided to cover the two simplest use cases: regular cable and WiFi adapters. Bear in mind that Cockpit does not allow setting up WiFi adapters yet. We agreed to directly use the D-Bus interface provided by NetworkManager. At this point, there is no need to add our own network service. At the end of the installation, we will just copy the configuration from the system executing D-Installer to the new system.
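As a rough sketch of that approach (illustrative commands, not D-Installer’s actual code; the /mnt mount point for the new system is an assumption):

```bash
# Inspect the D-Bus API exposed by NetworkManager, which D-Installer
# would drive directly instead of adding its own network service
busctl introspect org.freedesktop.NetworkManager /org/freedesktop/NetworkManager

# At the end of the installation, carry the connection profiles over
# to the new system (assumed here to be mounted at /mnt)
cp -a /etc/NetworkManager/system-connections/. /mnt/etc/NetworkManager/system-connections/
```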
The implementation will not be based on the existing Cockpit components for network configuration
since they present several weaknesses we would like to avoid. Instead, we will reuse the
components of the cockpit-wicked extension, whose architecture is better suited for the task.
D-Installer as a Set of Containers
In our previous report we announced D-Installer 0.4 as the first version with a modular architecture. So, apart from the already existing separation between the back-end and the two existing user interfaces (web UI and command-line), the back-end now consists of several interconnected components.
And we also presented Iguana, a minimal boot image capable of running containers.
Surely you already guessed what the next logical step was going to be. Exactly: D-Installer running as a set of containers! We implemented three proofs of concept to check what was possible and to research implications like memory consumption:
- D-Installer running as a single container that includes the back-end and the web interface.
- Splitting the system in two containers, one for the back-end and the other for the web UI.
- Running every D-Installer component (software handling, users management, etc.) in its own container.
It’s worth mentioning that the containers communicate with each other by means of D-Bus, using different techniques in each case. The best news is that all three solutions seem to work flawlessly and are able to perform a complete installation.
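One plausible technique (shown here only as an illustration, not necessarily what each proof of concept does; the image name is made up) is to bind-mount the system D-Bus socket into the containers:

```bash
# Share the host's system D-Bus socket with a container so that
# processes inside it can talk to D-Bus peers outside of it
podman run --rm -it \
  -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket \
  registry.example.org/d-installer-backend:latest
```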
And what about memory consumption? Well, it’s clearly bigger than a traditional SLE installation with YaST, but that’s expected at this point, since no optimizations have been performed on D-Installer or the containers. On the other hand, we found no significant differences between the three mentioned proofs of concept.
- Single all-in-one container:
- RAM used at start: 230 MiB
- RAM used to complete installation: 505 MiB
- Two containers (back-end + web UI):
- At start: 221 MiB (191 MiB back-end + 30 MiB web UI)
- To complete installation: 514 MiB (478 MiB back-end + 36 MiB web UI)
- Everything as a separate container:
- At start: 272 MiB (92 MiB software + 74 MiB manager + 75 MiB users + 31 MiB web)
- After installation: 439 MiB (245 MiB software + 86 MiB manager + 75 MiB users + 33 MiB web)
Those numbers were measured using podman stats while executing D-Installer in a traditional openSUSE system. We will have more accurate numbers as soon as we are able to run the three proofs of concept on top of Iguana. Which takes us to…
Documentation About Testing D-Installer with Iguana
So far, Iguana can only execute a single container, which means it already works with the all-in-one D-Installer container but cannot yet be used to test the other approaches. We are currently developing a mechanism to orchestrate several containers, inspired by the one offered by GitHub Actions.
Meanwhile, we added some basic documentation to the project, including how to use it to test D-Installer.
Stay Tuned
Despite its reduced presence in this blog post, we also keep working on YaST itself beyond containerization: not only fixing bugs but also implementing new features and tools, like our new experimental helper to inspect YaST logs. So stay tuned to get more updates about Cockpit, ALP, D-Installer, Iguana and, of course, YaST!
MicroOS Install Workshop, Feedback Sessions Planned
In an effort to gain more user insight and perspective for the development of the Adaptable Linux Platform (ALP), members of the openSUSE community workgroup will hold a MicroOS Desktop install workshop on August 2.
There will be feedback sessions the following weeks during the community workgroup and community meeting.
Users are encouraged to install the MicroOS Desktop (ISO image 3.8 GiB) during the week of August 1. There will be a short Installation Presentation during the ALP Workgroup Meeting at 14:30 UTC on August 2 for those who need a little assistance.
During the next two weeks’ meetings, follow-ups will be given, with a final Lucid Presentation on August 16 during the regularly scheduled workgroup meeting.
Users are encouraged to note how their installation, use and overall experience go on a spreadsheet, while also using it as a reference for tips and examples of what to try.
Workflow examples, like tweaking the immutable system by running sudo transactional-update shell or practicing podman basics with container port forwarding and podman volumes, are also on the list; see the sketch below.
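For reference, those exercises look roughly like this (a sketch; the container image, names, ports and mount paths are illustrative):

```bash
# Tweak the immutable system: open a shell in a new snapshot;
# the changes take effect after rebooting into that snapshot
sudo transactional-update shell

# Podman basics: run a container with port forwarding (host port 8080)
podman run -d --name web -p 8080:80 registry.opensuse.org/opensuse/nginx

# ...and attach a named volume for data that survives the container
podman volume create webdata
podman run -d --name web2 -p 8081:80 \
  -v webdata:/srv/www/htdocs registry.opensuse.org/opensuse/nginx
```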
The call for people to test the MicroOS Desktop spin has already produced useful feedback, like the need for Flatpak documentation covering flatpak user profiles (located at ~/.var/app) and many other key points. Gaining such valuable lessons for ALP is the whole point of the exercise of trying MicroOS.
In managing expectations, people testing MicroOS should be aware that it is based on openSUSE Tumbleweed, that YaST is not available and that MicroOS’ release manager would appreciate contributions rather than bug reports. MicroOS supports two desktop environments: GNOME as a Release Candidate and KDE’s Plasma in Alpha. MicroOS Desktop is currently the closest representation of what users can expect from the next-generation desktop by SUSE.
Users can download MicroOS Desktop at https://get.opensuse.org/microos/ and see instructions and record comments on the spreadsheet.
Community to celebrate openSUSE Birthday
The openSUSE Project is preparing to celebrate its 17th Birthday on August 9.
The project will have a 24-hour social event with attendees visiting openSUSE’s virtual Bar.
Commonly referred to as the openSUSE Bar or slash bar (/bar), openSUSE’s Jitsi instance has become a frequent virtual hang out with regulars and newcomers.
People who like or use openSUSE distributions and tools are welcome to visit, hang out and chat with other attendees during the celebration.
This special 24-hour celebration has no last call and will have music and other social activities like playing a special openSUSE skribble.io edition; it’s a game where people draw and guess openSUSE and open source themed topics.
People can find the openSUSE Bar at https://meet.opensuse.org/bar.
There will be an Adaptable Linux Platform (ALP) Work Group (WG) feedback session on openSUSE’s Birthday on August 9 at 14:30 UTC taking place on https://meet.opensuse.org/meeting. The session follows an install workshop from August 2 that went over installing MicroOS Desktop. The session is designed to gain feedback on how people use their Operating System to help progress ALP development.
The openSUSE Project came into existence on August 9, 2005. An email about the launch of a community-based Linux distribution was sent out on August 3, and the announcement was made during LinuxWorld in San Francisco, held at the Moscone Convention Center from August 8 to 11, 2005. The email announcing the launch discussed upgrading to KDE 3.4.2 on SuSE 9.3.
Berlin Mini GUADEC
The Berlin hackfest and conference wasn’t a polished, well-organized experience like the usual GUADEC, but it had the perfect Berlin flavor. The attendance topped my expectations, and smaller groups formed to work on different aspects of the OS: GNOME Shell’s quick settings, the mobile form factor, non-overlapped window management, Flathub and GNOME branding, video subsystems and others.
I’ve shot a few photos and edited the short video above. Even the music was done during the night sessions at C-Base.
Big thanks to the C-Base crew for taking good care of us in all aspects – audio-visual support for the talks and for following the main event in Mexico, refreshments and even outside seating. Last but not least, thanks to the GNOME Foundation for sponsoring the travel (mostly on the ground).

GNOME at 25: A health checkup
Around the end of 2020, I looked at GNOME's commit history as a proxy for the project's overall health. It was fun to do and hopefully not too boring to read. A year and a half went by since then, and it's time for an update.
If you're seeing these cheerful-as-your-average-wiphala charts for the first time, the previous post does a better job of explaining things. Especially so, the methodology section. It's worth a quick skim.
What's new
- Fornalder gained the ability to assign cohorts by file suffix, path prefix and repository.
- It filters out more duplicate authors.
- It also got better at filtering out duplicate and otherwise bogus commits.
- I added the repositories suggested by Allan and Federico in this GitHub issue (diff).
- Some time passed.
Active contributors, by generation

As expected, 2020 turned out interesting. First-time contributors were at the gates, numbering about 200 more than in previous years. What's also clear is that they mostly didn't stick around. The data doesn't say anything about why that is, but you could speculate that a work-from-home regime followed by a solid staycation is a state of affairs conducive to finally scratching some tangential — and limited — software-themed itch, and you'd sound pretty reasonable. Office workers had more time and workplace flexibility to ponder life's great questions, like "why is my bike shed the wrong shade of beige" or perhaps "how about those commits". As one does.
You could also argue that GNOME did better at merging pull requests, and that'd sound reasonable too. Whatever the cause, more people dipped their toes in, and that's unequivocally good. How to improve? Rope them into doing even more work! And never never let them go.
2021 brought more of the same. Above the 2019 baseline, another 200 new contributors showed up, dropped patches and bounced.
Active contributors, by affiliation

Unlike last time, I've included the Personal and Other affiliations for this one, since it puts corporate contributions in perspective; GNOME is a diverse, loosely coupled project with a particularly long and fuzzy tail. In terms of how spread out the contributor base is across the various domains, it stands above even other genuine community projects like GNU and the Linux kernel.
Commit count, by affiliation

To be fair, the volume of contributions matters. Paid developers punch way above their numbers, and as we've seen before, Red Hat throws more punches than anyone. Surely this will go on forever (nervous laugh).
Eazel barely made the top-15 cut the last time around. It's now off the list. That's what you get for pushing the cloud, a full decade ahead of its time.
Active contributors, by repository

Slicing the data per repository makes for some interesting observations:
- Speaking of Eazel… Nautilus may be somewhat undermaintained for what it is, but it's seen worse. The 2005-2007 collapse was bad. In light of this, the drive to reduce complexity (cf. eventually removing the compact view etc) makes sense. I may have quietly gnashed my teeth at this at one point, but these days, Nautilus is very pleasant to use for small tasks. And for Big Work, you've got your terminal, a friendly shell and the GNU Coreutils. Now and forever.
- Confirming what "everyone knows", the maintainership of Evolution dwindled throughout the 2010s to the point where only Milan Crha is heroically left standing. For those of us who drank long and deep of the kool-aid it's something to feel apprehensive (and somewhat guilty) about.
- Vala played an interesting part in the GNOME infrastructure revolution of 2009-2011. Then it sort of… waned? Sure, Rust's the hot thing now, but I don't think it could eat its entire lunch.
- GLib is seriously well maintained!
Commit count, by repository

With commit counts, a few things pop out that weren't visible before:
- There's the not at all conspicuously named Tracker, another reminder of how transformative the 2009-2011 time frame really was.
- The mid-2010s come off looking sort of uneventful and bland in most of the charts, but Builder bucked that trend bigly.
- Notice the big drop in commits from 2020 to 2021? It's mostly just the GTK team unwinding (presumably) after the 4.0 release.
Active contributors, by file suffix

I devised this one mainly to address a comment from Smeagain. It's a valid concern:
There are a lot of people translating, with each getting a single commit for whatever has been translated. During the year you get larger chunks of text to translate, then shortly before the release you finish up smaller tasks, clean up translations and you end up with lots of commits for a lot of work but it's not code. Not to discount translations, but you have a lot of very small commits.
I view the content agnosticism as a feature: We can't tell the difference in work investment between two code commits (perhaps a one-liner with a week of analysis behind it vs. a big chunk of boilerplate being copied in from some other module/snippet database), so why would we make any assumptions about translations? Maybe the translator spent an hour reviewing their strings, found a few that looked suspicious, broke out their dictionary, called a friend for advice on best practice and finally landed a one-line diff.
Therefore we treat content type foo the same as content type bar, big commits the same as small commits, and when tallying authors, few commits the same as many — as long as you have at least one commit in the interval (year or month), you'll be counted.
However! If you look at the commit logs (and relevant infrastructure), it's pretty clear that hackers and translators operate as two distinct groups. And maybe there are more groups out there that we didn't know about, or the nature of the work changed over time. So we slice it by content type, or rather, file suffix (not quite as good, but much easier). For files with no suffix separator, the suffix is the entire filename (e.g. Makefile).
A subtlety: Since each commit can touch multiple file types, we must decide what to do about e.g. a commit touching 10 .c files and 2 .py files. Applying the above agnosticism principle, we identify it as doing something with these two file types and assign them equal weight, resulting in .5 c commits and .5 py commits. This propagates up to the authors, so if in 2021 you made the aforementioned commit plus another one that's entirely vala, you'll tally as .5 c + .5 py + 1.0 vala, and after normalization you'll be a ¼ c, ¼ py and ½ vala author that year. It's not perfect (sensitive to what's committed together), but there are enough commits that it evens out.
Anyway. What can we tell from the resulting chart?
- Before Git, commit metadata used to be maintained in-band. This meant that you had to paste the log message twice (first to the ChangeLog and then as CVS commit metadata). With everyone committing to ChangeLogs all the time, it naturally (but falsely) reads as an erstwhile focal point for the project. I'm glad that's over.
- GNOME was and is a C project. Despite all manner of rumblings, its position has barely budged in 25 years.
- Autotools, however, was attacked and successfully dethroned. Between 2017 and 2021, `ac` and `am` gradually yielded to Meson's `build`.
- Finally, translators (`po`) do indeed make up a big part of the community. There's a buried surprise here, though: Comparing 2010 to 2021, this group shrank a lot. Since translations are never "done" — in fact, for most languages they are in a perpetual state of being rather far from it — it's a bit concerning.
The bigger picture
I've warmed to Philip's astute observation:
Thinking about this some more, if you chop off the peak around 2010, all the metrics show a fairly steady number of contributors, commits, etc. from 2008 through to the present. Perhaps the interpretation should not be that GNOME has been in decline since 2010, but more that the peak around 2010 was an outlier.
Some of the big F/OSS projects have trajectories that fit the following pattern:
f(x) = ∑[n = 1..x] R · (1 − a)^(n − 1)
That is, each year R new contributors are being recruited, while a fraction a of the existing contributors leave. R and a are both fairly constant, but since attrition increases with project size while recruitment depends on external factors, they tend to find an equilibrium where they cancel each other out: recruitment balances attrition when R = a · N, which puts the steady state at N = R / a active contributors.
For GNOME, you could pick e.g. R = 130 and a = .15, and you'd come close. Then all you'd need is some sharpie magic, and…

Not a bad fit. Happy 25th, GNOME.
Playing with Common Expression Language
Common Expression Language (CEL) is an expression language created by Google. It allows you to define constraints that can be used to validate input data.
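For instance, a small CEL constraint (in the spirit of the examples shown later in this post) could look like this:

```
request.path.startsWith("v1") && request.token in ["v1", "v2", "admin"]
```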
This language is being used by some open source projects and products, like:
- Google Cloud Certificate Authority Service
- Envoy
- There’s even a Kubernetes Enhancement Proposal that would use CEL to validate Kubernetes’ CRDs.
I’ve been looking at CEL for some time, wondering how hard it would be to find a way to write Kubewarden validation policies using this expression language.
Some weeks ago SUSE Hackweek 21 took place, which gave me some time to play with this idea.
This blog post describes the first step of this journey. Two other blog posts will follow.
Picking a CEL runtime
Currently the only mature implementations of the CEL language are written in Go and C++.
Kubewarden policies are implemented using WebAssembly modules.
The official Go compiler isn’t yet capable of producing WebAssembly modules that can be run outside of the browser. TinyGo, an alternative implementation of the Go compiler, can produce WebAssembly modules targeting the WASI interface. Unfortunately TinyGo doesn’t yet support the whole Go standard library. Hence it cannot be used to compile cel-go.
Because of that, I was left with no other choice than to use the cel-cpp runtime.
C and C++ can be compiled to WebAssembly, so I thought everything would be fine.
Spoiler alert: this didn’t turn out to be “fine”, but that’s for another blog post.
CEL and protobuf
CEL is built on top of protocol buffer types. That means CEL expects the input data (the data to be validated by the constraint) to be described using a protocol buffer type. In the context of Kubewarden this is a problem.
Some Kubewarden policies focus on a specific Kubernetes resource; for example,
all the ones implementing Pod Security Policies are inspecting only Pod resources.
Others, like the ones looking at labels or annotations attributes, are instead
evaluating any kind of Kubernetes resource.
Forcing a Kubewarden policy author to provide a protocol buffer definition of the object to be evaluated would be painful. Luckily, CEL evaluation libraries are also capable of working against free-form JSON objects.
The grand picture
The long term goal is to have a CEL evaluator program compiled into a WebAssembly module.
At runtime, the CEL evaluator WebAssembly module would be instantiated and would receive as input three objects:
- The validation logic: a CEL constraint
- Policy settings (optional): these would provide a way to tune the constraint. They would be delivered as a JSON object
- The actual object to evaluate: this would be a JSON object
Having set the goals, the first step is to write a C++ program that takes as input a CEL constraint and applies that against a JSON object provided by the user.
There’s going to be no WebAssembly today.
Taking a look at the code
In this section I’ll go through the critical parts of the code. I’ll do that to help other people who might want to make a similar use of cel-cpp.
There’s basically zero documentation about how to use the cel-cpp library. I had to learn how to use it by looking at the excellent test suite. Moreover, the topic of validating a JSON object (instead of a protocol buffer type) isn’t covered by the tests. I just found some tips inside of the GitHub issues and then I had to connect the dots by looking at the protocol buffer documentation and other pieces of cel-cpp.
TL;DR The code of this POC can be found inside of this repository.
Parse the CEL constraint
The program receives a string containing the CEL constraint and has to
use it to create a CelExpression object.
This is pretty straightforward, and is done inside of these lines
of the evaluate.cc file.
As you will notice, cel-cpp makes use of the Abseil library. A lot of cel-cpp APIs return absl::StatusOr objects:
```cpp
// invoke API
auto parse_status = cel_parser::Parse(constraint);
if (!parse_status.ok())
{
  // handle error
  std::string errorMsg = absl::StrFormat(
      "Cannot parse CEL constraint: %s",
      parse_status.status().ToString());
  return EvaluationResult(errorMsg);
}

// Obtain the actual result
auto parsed_expr = parse_status.value();
```
Handle the JSON input
cel-cpp expects the data to be validated to be loaded into a CelValue
object.
As I said before, we want the final program to read a generic JSON object as input data. Because of that, we need to perform a series of transformations.
First of all, we need to convert the JSON data into a protobuf::Value object.
This can be done using the protobuf::util::JsonStringToMessage
function.
This is done by these lines
of code.
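In essence, the conversion boils down to something like this (a condensed sketch assuming the JSON arrives in a request_json string; the linked lines are the authoritative version):

```cpp
#include <google/protobuf/struct.pb.h>
#include <google/protobuf/util/json_util.h>

// Parse the user-supplied JSON string into a generic protobuf Value
google::protobuf::Value json_value;
auto status =
    google::protobuf::util::JsonStringToMessage(request_json, &json_value);
if (!status.ok())
{
  // handle the malformed-JSON error
}
```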
Next, we have to convert the protobuf::Value object into a CelValue one.
The cel-cpp library doesn’t offer any helper. As a matter of fact, one of the oldest open issues of cel-cpp is exactly about that.
This last conversion is done using a series of helper functions I wrote inside
of the proto_to_cel.cc file.
The code relies on the introspection capabilities of protobuf::Value to
build the correct CelValue.
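A condensed sketch of the idea (the real proto_to_cel.cc also covers the string, struct and list kinds, which involve recursion and memory ownership):

```cpp
#include <google/protobuf/struct.pb.h>
#include "eval/public/cel_value.h"

using google::api::expr::runtime::CelValue;

// Map the scalar kinds of protobuf::Value to the corresponding CelValue.
// String, struct and list kinds are omitted here for brevity.
CelValue ProtoValueToCelValue(const google::protobuf::Value &value)
{
  switch (value.kind_case())
  {
  case google::protobuf::Value::kBoolValue:
    return CelValue::CreateBool(value.bool_value());
  case google::protobuf::Value::kNumberValue:
    return CelValue::CreateDouble(value.number_value());
  case google::protobuf::Value::kNullValue:
  default:
    return CelValue::CreateNull();
  }
}
```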
Evaluate the constraint
Once the CEL expression object has been created and the JSON data has been converted into a `CelValue`, there’s only one last thing to do: evaluate the constraint against the input.
First of all we have to create a CEL Activation object and insert the
CelValue holding the input data into it. This takes just a few lines of code.
Finally, we can use the Evaluate method of the CelExpression instance
and look at its result. This is done by these lines of code,
which include the usual pattern that handles absl::StatusOr<T> objects.
The actual result of the evaluation is going to be a CelValue that holds
a boolean type inside of itself.
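Put together, the last mile looks roughly like this (a sketch with assumed variable names cel_expression and request_cel_value; the StatusOr error handling is trimmed):

```cpp
#include <google/protobuf/arena.h>
#include "eval/public/activation.h"

google::protobuf::Arena arena;
google::api::expr::runtime::Activation activation;

// Expose the converted input to the expression under the name "request"
activation.InsertValue("request", request_cel_value);

// Evaluate the compiled expression against the activation
auto eval_status = cel_expression->Evaluate(activation, &arena);
if (eval_status.ok() && eval_status.value().IsBool())
{
  bool satisfied = eval_status.value().BoolOrDie();
  // report whether the constraint was satisfied
}
```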
Building
This project uses the Bazel build system. I never used Bazel before, which proved to be another interesting learning experience.
A recent C++ compiler is required by cel-cpp. You can use either gcc (version 9+) or clang (version 10+). Personally, I’ve been using clang 13.
Building the code can be done in this way:
```bash
CC=clang bazel build //main:evaluator
```

The final binary can be found under bazel-bin/main/evaluator.
Usage
The program loads a JSON object called request which is then embedded
into a bigger JSON object.
This is the input received by the CEL constraint:
```
{
  "request": < JSON_OBJECT_PROVIDED_BY_THE_USER >
}
```

The idea is to later add another top-level key called settings. This one would be used by the user to tune the behavior of the constraint.
Because of that, the CEL constraint must access the request values by going through the `request` key.
This is easier to explain by using a concrete example:
```bash
./bazel-bin/main/evaluator \
  --constraint 'request.path == "v1"' \
  --request '{ "path": "v1", "token": "admin" }'
```

The CEL constraint is satisfied because the path key of the request is equal to v1.
On the other hand, this evaluation fails because the constraint is not satisfied:
```bash
$ ./bazel-bin/main/evaluator \
  --constraint 'request.path == "v1"' \
  --request '{ "path": "v2", "token": "admin" }'
The constraint has not been satisfied
```

The constraint can also be loaded from a file. Create a file named constraint.cel with the following contents:
```
!(request.ip in ["10.0.1.4", "10.0.1.5", "10.0.1.6"]) &&
((request.path.startsWith("v1") && request.token in ["v1", "v2", "admin"]) ||
 (request.path.startsWith("v2") && request.token in ["v2", "admin"]) ||
 (request.path.startsWith("/admin") && request.token == "admin" &&
  request.ip in ["10.0.1.1", "10.0.1.2", "10.0.1.3"]))
```

Then create a file named request.json with the following contents:
```json
{
  "ip": "10.0.1.4",
  "path": "v1",
  "token": "admin"
}
```

Then run the following command:
```bash
./bazel-bin/main/evaluator \
  --constraint_file constraint.cel \
  --request_file request.json
```

This time the constraint will not be satisfied.
Note: I find the `_` symbols inside of the flags a bit weird. But this is what is done by the Abseil flags library that I experimented with. 🤷
Let’s evaluate a different kind of request:
```bash
./bazel-bin/main/evaluator \
  --constraint_file constraint.cel \
  --request '{"ip": "10.0.1.1", "path": "/admin", "token": "admin"}'
```

This time the constraint will be satisfied.
Summary
This has been a stimulating challenge.
Getting back to C++
I hadn’t written big chunks of C++ code in a long time! Actually, I never had a chance to look at the latest C++ standards. I gotta say, lots of things changed for the better, but I still prefer to pick other programming languages 😅
Building the universe with Bazel
I had prior experience with autoconf & friends, qmake and cmake, but I
never used Bazel before.
As a newcomer, I found the documentation of Bazel quite good. I appreciated
how easy it is to consume libraries that are using Bazel. I also like how
Bazel can solve the problem of downloading dependencies, something
you had to solve on your own with cmake and similar tools.
The concept of building inside of a sandbox, with all the dependencies vendored, is interesting but can be kinda scary. Try building this project and you will see that Bazel seems to be downloading the whole universe. I’m not kidding, I’ve spotted a Java runtime, a Go compiler plus a lot of other C++ libraries.
Bazel’s build command gives a nice progress bar. However, the number of tasks to be done keeps growing during the build process. It kinda reminded me of the old Windows progress bar!
I gotta say, I regularly have this feeling of “building the universe” with Rust, but Bazel took that to the next level! 🤯
Code spelunking
Finally, I had to do a lot of spelunking inside of different C++ code bases: envoy, protobuf’s C++ implementation, cel-cpp and Abseil, to name a few. This kind of activity can be a bit exhausting, but it’s also a great way to learn from others.
What’s next?
Well, in a couple of weeks I’ll blog about my next step of this journey: building C++ code to standalone WebAssembly!
Now I need to take some deserved vacation time 😊!
⛰️ 🚶👋
Community Work Group Discusses Next Edition
Members of openSUSE had a visitor at a recent Work Group (WG) session who provided the community with an update from one of the leaders focusing on the development of the next-generation distribution.
SUSE and the openSUSE community have a steering committee and several Work Groups (WG) collectively innovating what is being referred to as the Adaptable Linux Platform (ALP).
SUSE’s Frederic Crozat, who is one of the ALP architects and part of the ALP steering committee, joined in the exchange of ideas and opinions and provided some insight to the group about moving technical decisions forward.
The vision is to take a step beyond what SUSE does with modules in SUSE Linux Enterprise (SLE) 15. Modules are not really seen on the openSUSE side; on the SLE side it’s a bit different, but the point is to be more flexible and agile with development. The way to get there is not yet fully decided, but one thing that is certain is that containerization is one of the easiest ways to ensure adaptability.
“If people have their own workloads in containers or VMs, the chance of breaking their system is way lower,” openSUSE Leap release manager Lubos Kocman pointed out in the session. “We’ll need to make sure that users can easily search and install ‘workloads’ from trusted parties.”
In some cases, people break their systems by consuming software from outside the distribution repository. Some characterizations of ALP have referred to a strict use of containers such as Flatpaks, but it is better to refer to these items as workloads. Some efforts were planned for Hack Week to provide documentation regarding the workload builds.
There was confirmation that there will be no migration path from SLES to ALP, which most think would mean the same for Leap to ALP, but this does not necessarily apply to openSUSE, as nobody is stopping people from writing upgrade scripts.
Work on the community edition is planned to start after the proof of concept, though the community is actively involved with developing the proof of concept. One point brought up was that the initial build will target the Intel architecture and expand to other architectures later.
The community platform is being referred to as openSUSE ALP during its development cycle with SUSE to be concise with planning the next community edition.
YaST Development Report - Chapter 5 of 2022
We have been a bit silent lately in this blog, but there are good reasons for it. As you know, the YaST Team is currently involved in many projects and that implies we had to constantly adapt the way we work. That left us little time for blogging. But now we have reached enough stability and we hope to recover a more predictable and regular cadence of publications.
So let’s start with this post including:
- Some notes about the new scope of the blog
- The announcement of D-Installer 0.4
- A brief introduction to Iguana, a container-capable boot image
- Progress on containerized YaST
- Some Cockpit news
Adjusting the Scope
As you know, SUSE is developing the next generation of the SUSE Linux family under the code name ALP (Adaptable Linux Platform). If you are following the activity on that front, you also know that several so-called work groups have been constituted to work on different areas.
The YaST Team is deeply involved in two of those work groups, the ones named “1:1 System Management” and “Installation / Deployment”. You can read more details about the mission of each group and the technologies we are developing in the wiki page linked in the previous paragraph.
Since this blog is a well-established communication channel with the (open)SUSE users, we decided we will use it to report the progress on all the projects related to those work groups. That goes beyond the scope of YaST itself and even beyond the scope of the YaST Team, since the mentioned work groups also include other SUSE and openSUSE colleagues. But we are sure our readers will equally enjoy the content.
D-Installer Reaches Version 0.4
Let’s start with an old acquaintance of this blog. In several previous posts we have already described D-Installer, our effort to expose the power of YaST through a more reusable and modern set of interfaces. We recently reached an important milestone in its development with the first version including a multi-process architecture. In previous versions, the user interface could not respond to user interaction if any of the D-Installer components was busy (reading the repositories metadata, installing packages, etc.). The new D-Installer v0.4 includes the first steps to definitely solve that problem and also other interesting features you can check at this release announcement. There are even a couple of videos to see it in action without risking your systems!
As you can see in the first of those videos, we improved the product selection screen and now D-Installer can download and install Tumbleweed, Leap 15.4 or Leap Micro 5.2.
But a new YaST-related piece of software cannot be really considered as fully released until it is submitted to openSUSE Tumbleweed. We plan to do that in the upcoming days, to ensure future development of D-Installer remains fully integrated with our beloved rolling distribution.
A New Reptile in the Family
Of course, D-Installer needs to run on top of a working Linux system. That could be the openSUSE LiveCD (as we do to test the prototypes) or some kind of minimal installation media. What if that minimal system were just a container completely tailored to execute D-Installer? It could work as long as you have a certain initrd (i.e. boot image) with the capability of grabbing and executing containers. Say hello to Iguana.
Iguana is at an early stage of development and we cannot guarantee it will keep its current form or name, but it’s already able to boot on a virtual machine (and likely on real hardware) and execute a predefined container. We tried to run a containerized version of D-Installer and it certainly works! We plan to evolve the concept and make Iguana able to orchestrate several containers, which will give us a very flexible tool for installing or fixing a system in all kinds of situations.
Progress on Containerized YaST
Just as we are looking into D-Installer as a way to reuse the YaST capabilities for system installation, you know we are also looking into containerization as a way to expand the YaST scope regarding system configuration. And we also have some news on that front.
First of all, we adapted more YaST modules to make them work from a container. As a result, you can now access all this functionality:
- Configuration of timezone, keyboard layout and language
- Management of software and repositories
- Firewall setup
- Configuration of iSCSI devices (iSCSI LIO target)
- Management of system services
- Inspection of the systemd journal
- Management of users and groups
- Printer configuration
- Administration of the DNS server
We are working on adapting more YaST modules as we write this. We hope to soon add boot loader or Kdump configuration to the portfolio. Maybe even the YaST Partitioner if everything goes fine.
On the other hand, we restructured the container images to rely on SLE BCI, which results in smaller and better supported images. But that was just a first step; we are now actively working on reducing the size of the current images even further. Stay tuned for more news and some numbers.
Cockpit with a Green Touch
Although using a containerized version of YaST will always be an option, the main tool for performing interactive administration of an individual ALP host is expected to be Cockpit. So we are also investing in improving the Cockpit experience on (open)SUSE systems. And nothing gives a better experience than a properly green user interface!
We improved the theming support for Cockpit (in an unofficial way, not adopted by upstream) and used
that new support to make it look better thanks to the new
cockpit-suse-theme package.
We took the opportunity to update the version of all Cockpit packages available at openSUSE Tumbleweed. So if you are a Tumbleweed user, it’s maybe a good time to give a first try to Cockpit!
I’ll Be Back!
As mentioned at the beginning of this post, we want to get back to the good habit of posting regular status updates, although they will no longer be so focused on (Auto)YaST. In exchange, you will get news about Cockpit, D-Installer, Iguana and containers. So stay tuned for more fun!
MicroOS Desktop Use to Help with ALP Feedback
Participants from the openSUSE community working on the upcoming release of the Adaptable Linux Platform (ALP) encourage people to try openSUSE MicroOS Desktop to gain user perspectives on its applicability.
Users are encouraged to try out MicroOS Desktop by installing it and using it on a laptop or workstation for a week or so. By doing this, users develop a frame of reference for how ALP can progress; the community wants to gain feedback about what users think about ALP’s usability, how it fits user workflows and more. The community would like to see critiques and evaluations that work for users. People are encouraged to send feedback to the ALP-community-wg mailing list.
The temporary use of the MicroOS Desktop will help developers assess how to move forward with ALP’s Proof of Concept (PoC).
Currently MicroOS Desktop has both GNOME and KDE’s Plasma as an option.
ALP’s PoC does not currently have a Desktop requirement, but one is expected to land in a snapshot shortly after the PoC.
Several workgroups (WG) are involved, and engineers are aiming for an initial PoC release of ALP in the fall. There are also plans to do an ALP Workshop and Install Fest (as a hybrid in-person/virtual event) during the openSUSE.Asia Summit 2022 if the ALP PoC release becomes available before the summit, which is on Sept. 30 and Oct. 1.
How flatpaks will fit into the wider ALP ecosystem alongside RPMs and other packages continues to be discussed among community members and stakeholders.