Welcome to English Planet openSUSE

This is a feed aggregator that collects what the contributors to the openSUSE Project are writing on their respective blogs.
To have your blog added to this aggregator, please read the instructions.


pgtwin — HA PostgreSQL: Configuration

In the previous blog post pgtwin — HA PostgreSQL: VM Preparation we set up two VMs with KVM to prepare for an HA PostgreSQL setup. Now we will configure the Corosync cluster engine, prepare PostgreSQL for synchronous streaming replication and finally configure Pacemaker to provide high availability.

Configure Corosync

Corosync's main configuration file is located at ‘/etc/corosync/corosync.conf’. Edit this file with the following content, changing the IP addresses according to your setup:

totem {
    version: 2
    cluster_name: pgtwin-devel
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
    token: 5000
    join: 60
    max_messages: 20
    token_retransmits_before_loss_const: 10

    # Dual ring configuration
    interface {
        ringnumber: 0
        mcastport: 5405
    }

    interface {
        ringnumber: 1
        mcastport: 5407
    }
}

nodelist {
    node {
        ring0_addr: 192.168.60.13
        ring1_addr: 192.168.61.233
        name: pgtwin1
        nodeid: 1
    }

    node {
        ring0_addr: 192.168.60.83
        ring1_addr: 192.168.61.253
        name: pgtwin2
        nodeid: 2
    }

}

quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    timestamp: on
}

The next step is to create an authentication key for the cluster. Create the key on the first node, then copy it to the other node:

corosync-keygen -l
scp /etc/corosync/authkey pgtwin2:/etc/corosync/authkey

Note that by default you will not be allowed to access the remote node as root via ssh. This is a sensible default for production sites. If you find it inconvenient, you can change the setting by adding a file to /etc/ssh/sshd_config.d. Don’t do this for production environments or externally reachable VMs, though:

# cat /etc/ssh/sshd_config.d/10-permit-root.conf
PermitRootLogin=yes
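
After adding the drop-in, restart sshd so the change takes effect (and consider removing the file again once the key has been copied):

systemctl restart sshd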

On both nodes, make sure that the ownership and access rights are correct:

chmod 400 /etc/corosync/authkey
chown root:root /etc/corosync/authkey

Enable and start Corosync and Pacemaker:

systemctl enable corosync
systemctl start corosync

# Wait 10 seconds for Corosync to stabilize
sleep 10

# Check Corosync status
sudo corosync-cfgtool -s

# Enable and start Pacemaker
sudo systemctl enable pacemaker
sudo systemctl start pacemaker                                                                                           

Verify that the cluster is working with ‘crm status’.
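
For example, run the following on either node and check that both nodes are reported as online:

# Both nodes should be listed as online under "Node List"
crm status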

Configure PostgreSQL

PostgreSQL will only be configured on the first node. The second node only needs the data directory and the password file ‘.pgpass’ prepared; the pgtwin OCF agent will then perform the initial mirroring and the final replication configuration of the database. The postgresql.custom.conf file mentioned below can be found at https://github.com/azouhr/pgtwin/blob/main/postgresql.custom.conf. It holds the default configuration for use with pgtwin; tweak the parameters according to your usage, and make sure to use a password that is suitable for your environment.

# Initialize database
sudo -u postgres initdb -D /var/lib/pgsql/data

# Copy the provided PostgreSQL HA configuration
sudo cp /path/to/pgtwin/github/postgresql.custom.conf /var/lib/pgsql/data/postgresql.custom.conf
sudo chown postgres:postgres /var/lib/pgsql/data/postgresql.custom.conf

# Include custom config in main postgresql.conf
sudo -u postgres bash -c "echo \"include = 'postgresql.custom.conf'\" >> /var/lib/pgsql/data/postgresql.conf"

# Configure pg_hba.conf for replication
sudo -u postgres tee -a /var/lib/pgsql/data/pg_hba.conf <<EOF

# Replication connections
host    replication     replicator      192.168.60.0/24       scram-sha-256
host    postgres        replicator      192.168.60.0/24       scram-sha-256
EOF

# Start PostgreSQL manually (temporary)
sudo -u postgres pg_ctl -D /var/lib/pgsql/data start

# Create replication user
sudo -u postgres psql <<EOF
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'SecurePassword123';
GRANT pg_read_all_data TO replicator;
GRANT EXECUTE ON FUNCTION pg_ls_dir(text, boolean, boolean) TO replicator;
GRANT EXECUTE ON FUNCTION pg_stat_file(text, boolean) TO replicator;
GRANT EXECUTE ON FUNCTION pg_read_binary_file(text) TO replicator;
GRANT EXECUTE ON FUNCTION pg_read_binary_file(text, bigint, bigint, boolean) TO replicator;
EOF

# Stop PostgreSQL (cluster will manage it)
sudo -u postgres pg_ctl -D /var/lib/pgsql/data stop

Also add the connection definitions for your application connections to the pg_hba.conf file, for example along the lines of the entry below.
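
This is an illustration only; the database name, user and network here are placeholders for whatever your application actually uses:

# Hypothetical application access entry (adjust database, user and network)
host    appdb           appuser         192.168.60.0/24       scram-sha-256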

The only remaining PostgreSQL configuration step is preparing the password file. It needs to be added on both nodes:

# cat /var/lib/pgsql/.pgpass
# Replication database entries (for streaming replication)
pgtwin1:5432:replication:replicator:SecurePassword123
pgtwin2:5432:replication:replicator:SecurePassword123
192.168.60.13:5432:replication:replicator:SecurePassword123
192.168.60.83:5432:replication:replicator:SecurePassword123

# Postgres database entries (required for pg_rewind and admin operations)
pgtwin1:5432:postgres:replicator:SecurePassword123
pgtwin2:5432:postgres:replicator:SecurePassword123
192.168.60.13:5432:postgres:replicator:SecurePassword123
192.168.60.83:5432:postgres:replicator:SecurePassword123

Also set the correct permissions for this file, otherwise PostgreSQL will refuse to use it:

chmod 600 /var/lib/pgsql/.pgpass
chown postgres:postgres /var/lib/pgsql/.pgpass

After adding .pgpass to the second node, the only other thing that node needs is an empty data directory:

mkdir -p /var/lib/pgsql/data
chown postgres:postgres /var/lib/pgsql/data
chmod 700 /var/lib/pgsql/data

Configure Pacemaker

The final step before starting the HA PostgreSQL setup for the first time is to configure Pacemaker. For first-time users of Pacemaker this is a daunting configuration that needs a lot of consideration. For now, retrieve the already prepared file https://github.com/azouhr/pgtwin/blob/main/pgsql-resource-config.crm and adapt it to your environment.

The values that you have to edit are:

  • VIP address (the virtual IP that is migrated between the cluster nodes and serves as the access address for all applications)
  • Ping gateway address, which allows the cluster to prefer a node with access to the network
  • Node names in several resources; the defaults psql1 and psql2 become pgtwin1 and pgtwin2 respectively (an illustrative snippet follows below)
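
As a rough illustration of the kind of lines you will be touching, the resource definitions for the VIP and the ping gateway might look roughly like this; the IP addresses and parameter values are placeholders, and the authoritative content is the pgsql-resource-config.crm file itself:

# Illustrative excerpt only -- take the real definitions from pgsql-resource-config.crm
primitive postgres-vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.60.100 cidr_netmask=24 \
    op monitor interval=10s
primitive ping-gateway ocf:pacemaker:ping \
    params host_list=192.168.60.1 multiplier=1000 \
    op monitor interval=10s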

After editing the file, load it into the cluster with the ‘crm’ command. This can be done on any node, and the configuration is immediately available on all nodes:

crm configure < pgsql-resource-config.crm

That’s it. The cluster will now try to bring up the PostgreSQL database on both nodes in an HA configuration. You can monitor the process with the command ‘crm_mon’. Note that in the beginning the secondary node will show failed resources; this is because pgtwin has to perform an initial base backup on that node.
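
For example, keep a live view open on either node (the -r option also lists stopped resources):

# Live cluster view; press Ctrl+C to exit
crm_mon -r

After a while, the output should look similar to this: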

Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: pgtwin1 (version 3.0.1+20250807.16e74fc4da-1.2-3.0.1+20250807.16e74fc4da) - partition WITHOUT quorum
  * Last updated: Tue Dec 30 12:55:12 2025 on pgtwin1
  * Last change:  Tue Dec 30 12:55:07 2025 by hacluster via hacluster on pgtwin2
  * 2 nodes configured
  * 5 resource instances configured

Node List:
  * Online: [ pgtwin1 pgtwin2 ]

Active Resources:
  * postgres-vip        (ocf:heartbeat:IPaddr2):         Started pgtwin1
  * Clone Set: postgres-clone [postgres-db] (promotable):
    * Promoted: [ pgtwin1 ]
    * Unpromoted: [ pgtwin2 ]
  * Clone Set: ping-clone [ping-gateway]:
    * Started: [ pgtwin1 pgtwin2 ]

After the cluster has stabilized, you can perform a number of tests to check its state:

On pgtwin1 (primary):

# Check replication status
sudo -u postgres psql -x -c "SELECT * FROM pg_stat_replication;"

# Expected: One row showing pgtwin2 connected

On pgtwin2 (standby):

# Check if in recovery mode
sudo -u postgres psql -c "SELECT pg_is_in_recovery();"

# Expected: t (true)

Congratulations, you now have an HA PostgreSQL database running. To access the database on the primary, just use the command:

sudo -u postgres psql

Since this uses direct socket access, you get full access to the database without a password. For further tests and more information, have a look at https://github.com/azouhr/pgtwin/blob/main/QUICKSTART_DUAL_RING_HA.md.
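
Applications, on the other hand, would connect over the virtual IP. A minimal sketch, assuming a VIP of 192.168.60.100, a database and user you created yourself, and a matching pg_hba.conf entry:

# Hypothetical client connection over the virtual IP
psql -h 192.168.60.100 -U appuser -d appdb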


pgtwin — HA PostgreSQL: VM Preparation

In my last post Kubernetes on Linux on Z, I explained why I need a highly available PostgreSQL database to operate K3s. Of course, an HA PostgreSQL setup that works with just two datacenters has many more use cases. Let me explain how to perform an initial setup like the one I use for development.

Preparation of two VMs

The openSUSE Project releases readily prepared Tumbleweed images almost every day. Have a look at https://download.opensuse.org/tumbleweed/appliances/; I typically get an image from there with a name like ‘openSUSE-Tumbleweed-Minimal-VM.x86_64-1.0.0-kvm-and-xen-Snapshot20251222.qcow2’. The current image will have a different name, but let’s go with this one for now.

My typical KVM VMs use:

  • 2 CPUs
  • 2 GB memory
  • Raw disk image format
  • Two libvirt networks (ring0 and ring1)
  • Both graphical (VNC) and serial console support

First, convert the image to a raw image. The reason I like raw is that it is much easier to loop-mount such an image in the host operating system and to grow it with standard commands like kpartx, losetup and dd. You can go with qcow2 if you prefer that format.

qemu-img convert openSUSE-Tumbleweed-Minimal-VM.x86_64-1.0.0-kvm-and-xen-Snapshot20251222.qcow2 pgtwin01.raw

We will need two images of that kind:

cp -a pgtwin01.raw pgtwin02.raw
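
As an aside, growing such a raw image later with those standard tools could look roughly like this; this is only a sketch and not needed for the rest of this setup (the loop device name will differ on your system):

# Append 5 GiB of zeros to the image, then map it to a loop device
dd if=/dev/zero bs=1M count=5120 >> pgtwin01.raw
losetup -fP --show pgtwin01.raw   # prints the loop device, e.g. /dev/loop0
# resize the partition and filesystem inside as needed, then detach:
losetup -d /dev/loop0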

Since I like to use two network rings for the HA setup (I will go into the details of why this is a good thing in a concepts blog soon), let’s create two libvirt networks. Attaching real Linux bridges would also be possible. Create two files, ring0.xml and ring1.xml:

# cat ring0.xml
<network>
  <name>ring0</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.60.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.60.2' end='192.168.60.254'/>
    </dhcp>
  </ip>
</network>
# cat ring1.xml
<network>
  <name>ring1</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr11' stp='on' delay='0'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.254'/>
    </dhcp>
  </ip>
</network>

After that, define the libvirt networks, and enable autostart:

virsh net-define ring0.xml
virsh net-define ring1.xml
virsh net-autostart ring0
virsh net-autostart ring1
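
If the networks are not active yet, also start them now; net-autostart alone only takes effect the next time libvirt starts:

virsh net-start ring0
virsh net-start ring1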

Now, let’s set up the two VMs. The following command will bring up a VM and ask a number of initial questions. This is just the basic setup of a VM, nothing really special:

virt-install \
  --name pgtwin01 \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/home/claude/images/pgtwin01.raw,format=raw \
  --import \
  --network network=ring0 \
  --network network=ring1 \
  --os-variant opensusetumbleweed \
  --graphics vnc,listen=0.0.0.0 \
  --console pty,target_type=serial

and the same with pgtwin02:

virt-install \
  --name pgtwin02 \
  --memory 2048 \
  --vcpus 2 \
  --disk path=/home/claude/images/pgtwin02.raw,format=raw \
  --import \
  --network network=ring0 \
  --network network=ring1 \
  --os-variant opensusetumbleweed \
  --graphics vnc,listen=0.0.0.0 \
  --console pty,target_type=serial

In case you want to connect to Linux bridges, use “bridge=” instead of “network=”. Typically I configure ssh access to the two VMs; this has normally already been done during the virt-install process. The minimal image from openSUSE configures both network devices with DHCP by default. This is an issue, because the VM will end up with two default gateways defined. Let me explain how to fix this:

# nmcli c s
NAME                UUID                                  TYPE      DEVICE 
Wired connection 1  29df9468-975d-3944-91ca-355ed0c82a3c  ethernet  enp1s0 
Wired connection 2  1f45b334-b429-3823-80eb-a3aafeb33195  ethernet  enp2s0 
lo                  611124a1-fa8e-48d6-84ba-f75733093ca6  loopback  lo

There are two external interfaces configured here. If you check the routing, you will find two default gateway definitions:

ip r s

In this setup, only ring0 is used to connect to the world, and thus the default gateway of ring1 (connected over enp2s0) can be deleted:

nmcli connection modify 1f45b334-b429-3823-80eb-a3aafeb33195 \
    ipv4.gateway "" \
    ipv4.never-default yes

Adapt the UUID and settings to your setup.

For the Pacemaker and PostgreSQL configuration later on, also set up the hostnames and name resolution for the other node. The procedure to set the hostname seems to have changed recently; it now uses hostnamectl instead of just writing the name to /etc/HOSTNAME.

On pgtwin01:
hostnamectl set-hostname pgtwin01
On pgtwin02:
hostnamectl set-hostname pgtwin02

Name resolution either goes through your standard DNS system or via /etc/hosts. Find the IP addresses in use with ‘ip a s’:

echo "192.168.60.13   pgtwin01" >> /etc/hosts
echo "192.168.60.83   pgtwin02" >> /etc/hosts

Configure the firewall to allow communication between the two VMs:

# Corosync communication
firewall-cmd --permanent --add-port=5405/udp  # Corosync multicast
firewall-cmd --permanent --add-port=5404/udp  # Corosync multicast (alternative)

# Pacemaker communication
firewall-cmd --permanent --add-port=2224/tcp  # pcsd
firewall-cmd --permanent --add-port=3121/tcp  # Pacemaker

# PostgreSQL
firewall-cmd --permanent --add-port=5432/tcp

# Reload firewall
firewall-cmd --reload
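
To double-check the result:

# List the ports that are now open in the active zone
firewall-cmd --list-ports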

The last step for preparing the VMs is installing the cluster software as well as the PostgreSQL database software.

zypper install -y \
    pacemaker \
    corosync \
    crmsh \
    sudo \
    resource-agents \
    fence-agents \
    postgresql18 \
    postgresql18-server \
    postgresql18-contrib

After that, you have two VMs fully installed, each with two network connections. The next steps will be the setup of Corosync, the initial configuration of the PostgreSQL database, and finally the cluster resource definitions.


Kubernetes on Linux on Z

This year, I had the task of setting up a Kubernetes environment in a Linux partition on an s390x system. At first sight this sounds easy; there are offerings out there that you can purchase. The second look, however, can make you wonder. There is a structural mismatch between typical Linux on Z environments and Kubernetes:

While Linux on Z typically uses two datacenters as two high-availability zones, Kubernetes requires you to have at least three.

This is a foundational issue that cannot be talked away; you really have to dig into it and find a solution. I might not know everything, but there is a solution that the Rancher people developed, called kine. It is an etcd shim that allows replacing the etcd database, which is what actually requires the three sites for its quorum mechanism, with an external SQL database.
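
To make this concrete, here is a hedged sketch of what that looks like with k3s, which ships kine built in; the user, password, virtual IP and database name are placeholders for your own setup:

# Hypothetical k3s server start using an external PostgreSQL datastore via kine
k3s server \
  --datastore-endpoint="postgres://k3suser:secret@192.168.60.100:5432/kine?sslmode=disable"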

I am a little adventurous, and thus I told people, we can do that. The plan looked like this:

As you can see, Kubernetes talks to PostgreSQL over kine, and the HA functionality would be provided by PostgreSQL. This first thought was kind of naive, and needed a number of fixes.

  1. PostgreSQL can do streaming replication; however, the standard version cannot run as a multi-master.
  2. The only open source cluster solution I am aware of that works with two nodes is Corosync with Pacemaker. The existing OCF agents are able to fail over, but after that a DBA has to restore high availability. Patroni, today's standard solution for PostgreSQL in cloud environments, does not solve the two-datacenter constraint for me.
  3. The Kubernetes of choice was k3s; however, the Rancher people stopped releasing it for s390x.
  4. All the needed containers are open source, but many are not released for the s390x architecture, and some even prevent building for it in their build scripts.

Together with a colleague I started to work on this project. Fortunately, we had already worked on zCX (Container Extensions on z/OS) and had provided many container images that were previously missing. To make development easier, we utilized OBS and worked on the images in a project that I created for that purpose; those interested can find it at “home:azouhr:d3v”. Thus, the fourth issue was just work, but not much of a real challenge.

The main challenges have been a highly available PostgreSQL that works on two nodes, as well as building k3s in a reasonable way for s390x. Let me go into some more detail on using Corosync and Pacemaker with two-node clusters, and on what is needed to make that work with PostgreSQL.

Corosync and Pacemaker form the solution used in the HA product of SUSE Linux Enterprise Server, and it actually supports two nodes if you have an SBD (Storage-Based Death) device at hand. This is typically not an issue for mainframe environments, because those machines normally do not have local disks anyway and always operate a SAN.

Pacemaker uses OCF agents that operate certain programs; in my case, this would be PostgreSQL. I wrote such agents long ago, but it is a daunting prospect to write an agent from scratch, especially when you have to learn the tasks of a PostgreSQL DBA along the way. After pushing the task off for some time, my colleague suggested trying an AI to get started, and what can I say, I was positively surprised with the result, so I chose to give it a try. Since I did not want the AI to have too many rights on my home laptop, the setup I am using looks like this:

I know that many developers don’t like what they get from an AI, and I have to admit I did not even try to run the first three or four versions that the AI produced. However, after a while the solution stabilized, and I could concentrate on smaller aspects of the OCF agent that “we” created. A recent state of what we produced can be found at https://github.com/azouhr/pgtwin. Note that I have published far from all of the design documents; there would be more than 250 of them, covering different aspects of how the OCF agent should operate.

Some experiences with the AI:

  • AIs like to proceed even when a thought is not fully formed. Working on the design is important; just don’t let an AI produce code when you are not yet confident that you are both at the same level of understanding.
  • AIs can easily skim through massive amounts of log files, and also find and fix issues on their own. I personally like to challenge proposed solutions when I feel that a solution is not perfect. This may lead to several iterations of newly proposed solutions.
  • AIs sometimes solve issue A and break B, only to then solve B and break A, and they are happy to go on like this forever. Whenever you find a problem recurring, you have to dig deeper into the issue. Let the AI explain what happens, form assumptions, and let it explore different paths.
  • AIs sometimes stumble into the same issues that were discussed earlier. I found that starting the discussion over again is tedious. Instead, ask the AI why it cannot use the solution from the earlier occasion.
  • AIs sometimes try to figure things out without having enough data. Instead of adding debug information or tracing like any programmer would, they just start experimenting. It often helps a great deal just to tell them to switch on tracing, or to use tools like strace to get more information.
  • Finally, you always have to review the result manually. My personal procedure is to add comments into the code that can easily be found with grep, and later tell the AI to address those comments.
  • AIs have a weakness with dates. They like to confuse years and other numbers in dates; that’s why the release dates of pgtwin look confusing.
  • A little warning about documentation and promotion: obviously, AIs have been trained a lot on marketing material. They typically claim something is enterprise-ready as soon as it has run once. After it is declared “enterprise ready”, I typically find quite a number of issues just by looking at the code.
  • Still, it is impressive how easily the code can be read and how well it is documented. For someone who has read quite some code over the years, it is really nice to look at. Also, that amount of code would not have been possible for a normal developer in such a short timeframe.

In my next post, I will go over the design of https://github.com/azouhr/pgtwin and explore the main features and concepts that I have been working on for the last few weeks. I hope to create another bugfix release soon; however, the agent already works quite well as it is.


Tumbleweed – Review of the week 2025/52

Dear Tumbleweed users and hackers,

The final days of the year are upon us, and, as expected, the number of incoming requests is quite low. This low count can be processed even with most of the release team enjoying some time off. Five Snapshots (1220, 1222, 1223, 1224, and 1225) have been produced and passed openQA.

The most relevant changes in those snapshots were:

  • Mozilla Firefox 146.0.1
  • Flatpak 1.16.2
  • Linux kernel 6.18.2
  • PHP 8.4.16
  • Ruby 3.4.8
  • Fuse 3.18.0 & 3.18.1
  • ImageMagick 7.1.2.11
  • LVM 2.03.38
  • sshfs 3.7.5

Currently, the staging projects are testing these changes:

  • Systemd 258.3
  • Enablement of Python 3.14 modules, while removing Python 3.12.
  • Ruby 4.0: breaks build of vim (https://github.com/vim/vim/issues/18884)
  • Removal of LUA 5.1
  • Python 3.13.11 (breaks a few Python modules test suites)


EU OS: Which Linux Distribution fits Europe best?

Logos of Fedora, openSUSE, Ubuntu and Debian surrounding the Logo of EU OS

Please note that the views expressed in this post (and this blog in general) are only my own and not those of my employer.

Dear opensuse planet, dear fedora planet, dear fediverse, dear colleagues,

Soon, the EU OS project will celebrate its first anniversary. I want to seize the occasion (and your attention) before the Christmas holidays to share my personal view on the choice of the Linux distribution as a basis for EU OS. EU OS is so far a community-led initiative to offer a template solution for Linux desktop deployments in corporate settings, specifically in the public sector.

Only a few weeks ago, the EU OS collaborators together tested a fully functional Proof of Concept (PoC) with automatic provisioning of laptops and central user management. The documentation of this setup is about 90% complete and should be finalized in the coming weeks. The PoC relies on Fedora, which is the one aspect that has triggered the most attention and criticism so far.

I recall that EU OS has so far no funding and only a few contributors. Please check out the project GitLab, join the Matrix channel, or send me an email to help or to discuss funding. In my view, EU OS can currently accomplish its mission best by bringing communities and organisations together to use their existing resources more strategically than they do now.

In 2025, digital sovereignty was much discussed in Europe. I had many opportunities to discuss EU OS with IT experts in the public sector. I am hopeful that eventually one or several European projects will emerge to bring Linux onto the (public sector) desktop more systematically than is currently the case.

I also learnt more about public sector requirements for Linux on the server, in VMs, and in Kubernetes. If the goal of EU OS is to leverage synergies with Cloud Native Computing technologies, those requirements must also be considered by the Linux distribution powering EU OS.

Linux Use in the Public Sector

Let us briefly map out the obvious use cases of Linux in a public sector organisation, such as a ministry, a court, or the administration of a city/region. The focus is on uses that are directly managed [1].

  • Linux on the desktop (rarely the case today, but that’s the ambition of the EU OS project)
  • Linux in a Virtual Machine (VM), a Docker/Podman Container, and for EU OS in a Flatpak Runtime
  • Linux on the server (including for Virtualisation/Kubernetes nodes)

Criteria for a Linux Distribution in the Public Sector

Given the exchanges I have had so far, I would propose the following high-level criteria for the selection of a Linux distribution:

Battle-tested Robustness
The public sector is very conservative, and any change to the status quo requires clear, unique benefits.
Cloud Native Technology
So far, the public sector has reacted very positively to the promises of bootc technology (bootc in the EU OS FAQ). It is a very recent technology, but the benefits for managing Linux laptop fleets with teams that already know container technology are recognised.
Enterprise Support
The public sector wants commercial support for a free Linux with an easy upgrade path to a managed enterprise Linux. Already existing companies, new companies, the public sector, or non-profit foundations could deliver such enterprise Linux. I expect that a mix with clear task allocations would work best in practice.
Enterprise Tools
The public sector needs tools for provisioning, configuration and monitoring of servers, VMs, Docker/Podman containers, and laptops, as well as for the management of users. Those tools must scale up to some tens of thousands of laptops/users. The EU OS project proposes to rely on FreeIPA for identity management and Foreman for Linux laptop fleet management.
Third-Party Support
The public sector wants its existing, possibly proprietary or legacy, third-party hardware [2] or appliances (think SAP) to remain supported. This one is tricky, because each third party decides for itself what it supports. Of course, any third-party vendor lock-in should eventually be avoided, but this takes time, and some vendor lock-ins are less problematic than others.
Supply Chain Security and Consistency
The public sector must secure its supply chains. This generally becomes easier with fewer chains to secure. A Linux desktop based on Fedora and KDE requires about 1200 Source RPM packages [3]. A Linux server based on Fedora requires about 300 Source RPM packages. The Flatpak runtime org.fedoraproject.KDE6Platform/x86_64/f43 requires about 100 Source RPM packages. I assume the numbers for Ubuntu/Debian/openSUSE are similar. So instead of securing all supply chains independently (possibly through outsourcing), the public sector can choose one, secure that one, and cover several use cases with the same packages at no or significantly less extra effort. Updates, testing, and certifications of those packages would then benefit all use cases.
Accreditation and Certifications
Some public sector organisations require a high level of compliance with cyber security, data protection, accessibility, records keeping, interoperability, etc. The more often a (similar) Linux distribution has passed such tests, the easier it should get.
Forward-looking Sovereignty and Sustainability
The public sector wants to work with stable vendors in stable jurisdictions that minimise the likelihood of interference with the execution of its public mandate [4]. Companies can change ownership and jurisdiction. While not a bullet-proof solution, a multi-stakeholder non-profit organisation can offer more stability and alignment with public sector mandates. Such an organisation must then receive the resources to execute its mandate continuously over several years or decades. With several independent stakeholders, public tenders become more competitive and as such more meaningful (compare with the procurement of Microsoft Windows 11).

Geographical Dimension

I have the impression that some governments would like to choose a Linux distribution that (a) local IT companies can support and (b) creates jobs in their country or region. In my view, the only way to offer such an advantage while maintaining synergies across borders is to find a Linux distribution supported by IT companies active in many countries and regions.

While the project EU OS has EU in its name, I would be in favour of not stopping at EU borders when looking for partners and synergies. It has already inspired MxOS in Mexico. Then, think of international organisations like OSCE, Council of Europe, OECD, CERN, UN (WHO, UNICEF, WFP, ICJ), ICC, Red Cross, Doctors Without Borders (MSF), etc. Also think of NATO. Those organisations are active in the EU, in Europe and in most other countries of the world. So if EU OS can rely on and stimulate investments in a Linux distribution that is truly an international project, international organisations would benefit likewise while upholding their mandated neutrality.

Diversity of Linux Distributions for EU OS

Douglas DeMaio (who works for SUSE doing openSUSE community management) argues in his blog post from March 2025: Freedom Does Not Come From One Vendor. The motto of the European Union is ‘United in Diversity’. Diversity and decentralisation make systems more robust. However, when I see the small scale of the ongoing pilots, I find that, as of December 2025, it is better to unify projects and choose one single Linux distribution to start with and progress quickly. EU OS proposes to achieve immutability with bootable containers (bootc). This is a cross-distribution technology under the umbrella of the Cloud Native Computing Foundation that makes switching Linux distributions later easier. Other Linux distributions could meanwhile implement bootc, FreeIPA and Foreman support, and set up or grow their multi-stakeholder non-profit organisations, possibly with the support of public funds they apply for.

The extent to which more Linux distributions in the public sector would indeed provide more security requires an in-depth study. For example, consider the xz backdoor from 2024 (CVE-2024-3094).

| Vendor | Status |
| --- | --- |
| Red Hat / Fedora / AlmaLinux | Fedora Rawhide and 40 beta affected; RHEL and AlmaLinux unaffected |
| SUSE / openSUSE | openSUSE Tumbleweed and MicroOS affected; SUSE Linux Enterprise unaffected |
| Debian | Debian testing and unstable affected |
| Kali Linux | affected |
| Arch Linux | unaffected |
| NixOS | affected and unaffected, slow to roll out updates |

Early adopters would have caught the vulnerability independently of the Linux distribution (except Arch Linux 👏). Larger distributions can possibly afford more testing. Older distributions with older build systems are more likely to build from release tarballs (which was essential for the xz backdoor), as git was not yet around back then. To avert such supply chain attacks, consistently implementing supply-chain hardening (e.g. SLSA Level 3) is certainly important, and diversification of distributions or supply chains initially makes that harder.

Comparison of Linux Distributions

In the comparison here, I focus on Debian/Ubuntu, Fedora/RHEL/AlmaLinux and openSUSE/SUSE, because they are beyond doubt battle tested with many users in corporate environments already. They are also commonly supported by third parties. Note that I don’t list criteria for which all distributions perform equally.

| Criterion | Debian/Ubuntu | Fedora/RHEL | AlmaLinux | openSUSE/SUSE |
| --- | --- | --- | --- | --- |
| bootc | 🟨 not yet | yes | yes | 🟨 not yet (but Kalpa unstable and Aeon RC with snapshots) |
| Flatpak app support | yes | yes | yes | yes |
| Flatpak apps from own sources | ❌ no | yes | 🟨 not yet, but adaptable from Fedora | ❌ no |
| FreeIPA server for user management | ✅ yes | ✅ yes | ✅ yes | no [5] |
| Proxmox server for VMs | yes | ❌ no | ❌ no | ❌ no |
| Foreman server for laptop management | ✅ yes | ✅ yes | ✅ yes | ❌ no [6] |
| Non-profit foundation | ✅ yes (US 🇺🇸 and France 🇫🇷) | ❌ no | ✅ yes (US 🇺🇸) | ❌ no |
| 3rd-party download mirrors in the EU | ca. 150 [7] | ca. 100 [8] | ca. 200 [9] | ca. 50 [10] |
| 3rd-party download mirrors worldwide | ca. 350 [7] | ca. 325 [8] | ca. 350 [9] | ca. 125 [10] |
| Github topics per distribution name | ca. 17150 (6344+10803) | ca. 2500 (1,943+478) | ca. 150 | ca. 550 (362+172) |
| World-wide adopted (based on mirrors) | ✅ yes | ✅ yes | ✅ yes | 🟨 not as much |
| Annual revenue of backing company | ca. 300m$ | ca. 4500m$ | only donations | ca. 700m$ |
| Employees world-wide of backing company | ca. 1k | ca. 20k [11] | < 500 (including CloudLinux) | ca. 2.5k [12] |
| Employees in Europe of backing company | ≤ 1k | ca. 4.5k | < 500 (including CloudLinux) | ≤ 2.5k |
| SAP-supported [13] 🙄 | ❌ no | ✅ yes | 🟨 RHEL-compatible | ✅ yes |

I find it extremely difficult to find reliable public numbers on employees, revenues and donations. I list here what I was able to find on the Internet, because I think it helps to quantify the popularity of the enterprise Linux distributions in corporate settings. Numbers for Debian are not very expressive due to the many companies other than Ubuntu that are involved. Let me know if you find better numbers.

Besides company figures, the number of search queries (Google Trends) also gives an impression of the popularity of Linux distributions. Below is the graph for the community Linux distributions as of December 2025.

Google Trends for Debian, Fedora and openSUSE worldwide 2025 (Source)

Conclusions as of December 2025

Obviously, it is challenging to propose comprehensive criteria and relevant metrics for comparing Linux distributions for corporate environments and their suitability as the base distribution for a project like EU OS. This blog post does not replace a more thorough study. It does, however, offer some interesting insights to inform possible next steps.

  1. Debian is a multi-stakeholder non-profit organisation with legal entities in several jurisdictions. Unfortunately, its bootc support is only at an early stage, and it lacks support from some third-party software vendors such as SAP. For corporate environments, Debian does not offer alternatives to FreeIPA and Foreman, which work best for Fedora/RHEL/AlmaLinux but also support Debian.
  2. Fedora is, in this comparison, the second-largest community in terms of mirrors and GitHub repositories. Fedora has no legal entity independent from its main sponsor Red Hat. However, AlmaLinux is a multi-stakeholder non-profit organisation, albeit very US-centered. With RHEL having led enterprise Linux deployments for several years, most use cases are covered, including building Flatpak apps from Fedora sources. Fedora downstream distributions with bootc (ublue, Bazzite, Kinoite) already run on tens of thousands of systems, including in the EU public sector.
  3. openSUSE has most success in German-speaking countries and the US (possibly driven by SAP). Internationally, it is significantly less popular. openSUSE has no legal entity independent from its main sponsor SUSE, which is registered in Luxembourg and headquartered in Germany. For corporate environments, openSUSE does not offer alternatives to FreeIPA and Foreman, which support openSUSE only as clients. While Uyuni [6] offers infrastructure/configuration management, it remains unclear whether it can replace Foreman for managing fleets of laptops. openSUSE’s bootc support is only at an early stage.

No Linux distribution fulfills all the criteria. Independently of the distribution, corporate environments would rely on FreeIPA, Foreman, Keycloak, Podman, systemd, etc., all of which Red Hat sponsors. Debian is promising, but its work to support bootc is not receiving much attention. AlmaLinux is promising, but still needs to prove its independence from politics, as it is a fairly new project (first release in 2021), and doubts remain about its capacity to support Fedora (as Red Hat does) in the long run. Microsoft blogged this week about their increasing contributions to Fedora. Maybe European and non-European companies can step up likewise in 2026, so that Fedora can become a multi-stakeholder non-profit organisation similar to AlmaLinux today.

Community Talk at Fosdem

My 30-minute talk on this topic has been accepted at the community conference FOSDEM 2026 in Brussels, Belgium! Please consider joining if you are at FOSDEM, and let me know your thoughts and questions. The organisers have not allocated timeslots yet, but I believe it will take place on Saturday, 31 January 2026.

Talk title
EU OS: learnings from 1 year advocating for a common Desktop Linux for the public sector
Track title
Building Europe’s Public Digital Infrastructure

All the best,
Robert

  1. I know that the public sector relies on vendors that ship embedded Linux on WiFi routers, traffic lights, fleets of cars, etc. If you have identified a relevant use case that is missing here, please feel free to let me know and I will consider adding it.

  2. During testing for EU OS, I learnt that Red Hat upgraded the instruction set architecture (ISA) baseline to the x86-64-v3 microarchitecture level in RHEL 10. Consequently, my old Thinkpad x220 is not supported any longer. While this may not be an issue for resourceful public sector organisations with recent laptops, it is an issue for less resourceful organisations, including many schools world-wide, but also in the EU.

  3. I counted Source RPM packages with rpm -qa --qf '%{SOURCERPM}\n' | sort -u | wc -l in each given environment.

  4. The public sector also wants to avoid vendor lock-in, which is just one specific form of ‘interference with the execution of its public mandate’.

  5. FreeIPA does not run on openSUSE, but supports openSUSE clients. Alternative software for openSUSE may be available. Community members suggest Kanidm, but it lacks features and development seems stalled.

  6. Foreman runs only on Debian/Fedora/RHEL/AlmaLinux, but supports openSUSE clients. SUSE offers Rancher, which is limited to Kubernetes clusters. Uyuni and its enterprise-supported downstream SUSE Multi-Linux Manager offer configuration and infrastructure management based on Salt.

  7. https://www.debian.org/mirror/list

  8. https://mirrormanager.fedoraproject.org/mirrors?page_size=500&page_number=1

  9. https://mirrors.almalinux.org

  10. https://mirrors.opensuse.org

  11. https://www.redhat.com/en/about/company-details

  12. https://fortune.com/2024/07/26/suse-software-ceo-championing-open-source-drives-innovation-purpose/

  13. https://pages.community.sap.com/topics/linux/supported-platforms


Tumbleweed – Review of the week 2025/51

Dear Tumbleweed users and hackers,

The year is slowly coming to an end, and there are only a few days left in 2025. Naturally, in this period, things tend to slow down a bit. People start preparing to see their loved ones, take a vacation, or generally take time off from the online world. What does that mean for Tumbleweed? The good news: nothing. Tumbleweed keeps rolling; just some days will see slower response times, and requests might linger a bit longer. The process remains unchanged, though: unless openQA confirms the quality of a snapshot, we will prefer to hold a snapshot back in the absence of developers being able to validate its impact. I’m sure that’s in everybody’s best interest.

During the last week, we have been able to publish 4 snapshots (1212, 1215, 1216, and 1217), containing these changes:

  • Bash 5.3.9
  • KDE Gear 25.12.0
  • Linux kernel 6.18.1
  • Qemu 10.1.3
  • KDE Frameworks 6.21.0
  • LLVM 21.1.7
  • Samba 4.23.4
  • Changed the default for Node.js to version 24
  • Rust 1.92

As mentioned, the number of requests is getting a bit lower, but there are still things in the queue being handled at the moment. Most notably, these changes:

  • Ruby 3.4.8
  • PHP 8.4.15 & 8.4.16
  • Mozilla Firefox 146.0.1
  • Linux kernel 6.18.2
  • Enablement of Python 3.14 modules, while removing Python 3.12. This means we would provide Python 3.11, 3.13 (default) and 3.14
  • Ruby 4.0: early testing in staging. So far, this looks all good
  • Removal of LUA 5.1


Tracking kernel commits across branches

With all of the different Linux kernel stable releases happening (at least one stable branch and multiple longterm branches are active at any one point in time), keeping track of which commits are already applied to which branch, and which branches specific fixes should be applied to, can quickly become a very complex task if you attempt to do it manually. So I’ve created some tools to make my life easier when doing the stable kernel maintenance work, which ended up making the work of tracking CVEs much simpler to manage in an automated way.


Tumbleweed – Review of the week 2025/50

Dear Tumbleweed users and hackers,

Hackweek was a blazing success, and many fun projects have been showcased at https://hackweek.opensuse.org. This week has seen Tumbleweed return to its old, boring, always-rolling self, with 4 solid snapshots published (1204, 1205, 1210, and 1211).

The most relevant changes contained in these snapshots were:

  • Mozilla Firefox 145.0.2 & 146.0
  • SDL 3.2.28
  • Bash 5.3.8
  • Linux kernel 6.18.0
  • Postgresql 18.1
  • SQLite 3.51.1
  • Mesa 25.3.1
  • Alsa 1.2.15 (including regression patches, as stock 1.2.15 showed issues in openQA)
  • Apache 2.4.66
  • GStreamer 1.26.9
  • KDE Plasma 6.5.4

Currently, we have received and are testing these changes:

  • KDE Gear 25.12.0
  • Switch to Node.js 24 by default (current default is Node.js 22)
  • Ruby 4.0: early testing in staging. So far, this looks all good
  • Removal of LUA 5.1