Tumbleweed – Review of the week 2025/35

Dear Tumbleweed users and hackers,

Week 35 felt a little calmer. Our packagers delivered high-quality submissions that required little intervention from release engineering once staged. When things just work, they tend to go unnoticed. We published four snapshots this week, namely 0822, 0825, 0826, and 0827.

The most noteworthy changes in these snapshots were:

  • GNU Gettext 0.26
  • ImageMagick 7.1.2.1 & 7.1.2.2
  • bind 9.20.12
  • nftables 1.1.4
  • Python 3.13.7
  • kdump 2.1.6
  • colord 1.4.8 / colord-gtk 0.3.1
  • GStreamer 1.26.5
  • Linux kernel 6.16.3
  • mozjs 128.14.0
  • system-users: all system users are now configured to be ‘fully locked’ (as per man sysusers.d). This means that not only is an invalid password set, but any login method is denied for these users (a quick way to check this is shown below).
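
A quick way to verify the locked state on a running system (a hedged sketch; messagebus is just one example of a sysusers.d-managed account, and the status letters are per passwd(1)):

# run as root
passwd -S messagebus   # a status flag of "L"/"LK" means the password is locked
chage -l messagebus    # a fully locked account should additionally show an account expiration date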

Staging projects and QA are currently busy on these updates:

  • Migrate to use ffmpeg-8 by default
  • Qt 6.9.2
  • Linux kernel 6.16.4
  • GCC 15.2
  • Mozilla Firefox 142: mozilla-nss 3.115.1 breaks the build of certmonger

Back in Action on Plasma (Mobile)

After I took a longer break from KDE development, I’ve been back in action for a few months now. It’s really nice to be back among friends, hacking on what I like most: Plasma. My focus has been on Plasma Mobile with some work naturally bleeding over into other areas.

Plasma on more Devices

I’d like to share some bits and pieces that I’ve worked on in the past months. Most of my efforts have revolved around making Plasma Mobile suitable for a wider range of devices and use-cases. The goal of this work is to make Plasma Mobile a more viable base for all kinds of products, not just mobile phones. We have a really mature software stack and great tools and applications, which make it relatively easy for companies to create amazing products without having to hire large teams or spend many years getting a product ready for their market. This is, I think, a very interesting and worthwhile niche for Plasma to get into, and I’m sure that Valve is not the only company that understands this.

Convergence Improvements

Convergence, or rather the ability to support and switch between form factors and usage patterns, has always been a pet topic of mine and still is.
One area was improving how the available screen real estate is used on landscape displays (Plasma Mobile has quite naturally been rather “portrait-focused”, though a few smaller patches go a long way).

Configurable number of columns in the Quicksettings drawer

I also improved usability across different pixel densities in the mobile shell by making the size of the top panel configurable. Also, when plugging in a second monitor, Plasma Mobile now switches from “all apps are maximized” to normal window management. (I’m currently working on more fine-grained window management support in KWin; right now we just maximize all windows, which causes problems especially with modal dialogs.)

One changeset I worked on earlier this year makes it possible to ship multiple user interfaces for settings modules (“KCMs”). An example is the “remote desktop” KCM, which now shows a mobile-focused UI in Plasma Mobile. What happens here is that we load a main_phone.qml file in Plasma Mobile (where “phone” is picked from a list of form factors set in the environment of the session), so basically the “main” QML file gets picked based on the device. This mechanism allows us to share components quite easily, reducing the delta between different device UIs.
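
For illustration, here is a minimal sketch of that lookup as a shell script, assuming the session exposes its form factors as a colon-separated list in an environment variable (PLASMA_PLATFORM is used as a plausible carrier here; the variable name, path, and lookup order are simplifications, not the actual KCM loader code):

#!/bin/sh
# hypothetical form-factor list, e.g. "phone:handset" in a Plasma Mobile session
formfactors="${PLASMA_PLATFORM:-desktop}"
# hypothetical install location of the KCM's QML files
kcm_dir="/usr/share/kcms/remote-desktop"
IFS=':'
for ff in $formfactors; do
    # pick the first (most specific) form factor for which a dedicated UI exists
    if [ -e "$kcm_dir/main_$ff.qml" ]; then
        echo "would load main_$ff.qml"
        exit 0
    fi
done
echo "would fall back to main.qml"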

Mobile and Desktop RDP settings

This actually builds on top of work I did ten years ago, which added support for form factors to our plugin metadata system.
I’ve also made the “Display & Monitor” KCM usable on mobile; this is a pretty important thing to have working when you want to be able to plug an external monitor into your device. I have a mobile version of the keyboard KCM in the pipeline, too, but it will need a bit more work before it’s ready for prime time.

More Features

There’s a new page in the mobile Wi-Fi settings module, showing connection details and transfer speeds. The code for this was amazingly simple, since I could lift most of the functionality from the desktop panel widget. A shared code-base across devices really speeds up development.

Connection details for the mobile wifi settings

I’ve also been adding useful features here and there. For example, the list of available Bluetooth devices is now filtered by default, only showing devices that actually make sense to pair (with an option to “Show all devices”, in good Plasma manner). This feature isn’t mobile-specific, so desktop and laptop users will benefit as well.

Welcome to Okular Mobile

Not all my work goes into infrastructural and “shell” bits. The mobile Okular version has now kind of caught up with the desktop version, since it got a nice welcome screen when opened. This allows the user to easily open a document, either from the “Documents” directory on disk (this is actually configurable) or from the list of recently viewed files.

Okular Mobile Welcome Screen

Going to Akademy ’25

After having missed our yearly world conference for a number of years, this year I will be at Akademy again. I’m really looking forward to seeing everybody in person again!

I’m going to Akademy!

See you in Berlin!

The core values of syslog-ng

Whenever I present syslog-ng at a conference or stand next to a booth, people often ask me why they should use syslog-ng instead of one of its competitors. So let me summarize what the users and developers of syslog-ng typically consider its most important values.

Documentation

Yes, I know, this is not syslog-ng itself. However, talking to some of our most active and loyal users, one common piece of feedback was that they had chosen syslog-ng because of the quality of its documentation. Syslog-ng has always had very detailed and (usually) up-to-date documentation. Unfortunately, there was a period when our documentation fell victim to resource shortages. However, as soon as those shortages were taken care of, bringing our documentation back up to date moved to the top of our list.

Read about the rest at https://www.syslog-ng.com/community/b/blog/posts/the-core-values-of-syslog-ng

Log Detective: GSoC 2025 (part 2)

This week is the last week of Google Summer of Code for this year's edition, so I'll summarize what we have been doing and our future plans for the Log Detective integration in openSUSE.

I wrote a blog post about this project when it started, so if you haven't read it yet, you can take a look.

All the work and code was done in the logdetective-obs GitHub repository.

Aazam, aka the intern, also wrote some blog posts about his work.

Initial Research

The idea of the project was to explore ways of integrating LogDetective into the openSUSE development workflow. So we started collecting build failures from the openSUSE Build Service and testing them with Log Detective to check whether the output is smart or even relevant.

The intern created a script to collect logs from build failures, and we stored the LLM answers.

Doing this, we detected that the log-detective.com explain service is very slow and that the local tool is less accurate.

The results we got weren't too good, but at this point of the project that is expected. The model is being trained and will improve with every new log that users send.

Local vs Remote, model comparison

The logdetective command line tool is nice, but an LLM requires a lot of resources to run locally. The tool also uses the models published by Fedora on huggingface.co, so it's not as accurate as the remote instance, which has a better-trained model and is kept up to date.

In any case, the local logdetective tool is interesting and, at this moment, it's faster than the deployed log-detective.com.

Using this tool, Aazam did some research, comparing the output with different models.

logdetective apache-arrow_standard_x86_64.log \
  --model unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF \
  -F Qwen2.5-Coder-7B-Instruct-Q4_K_M.gguf \
  --prompt ~/prompt.yaml

So, by using other, more specific models, the logdetective tool can return better results.

The plugin

openSUSE packagers use the osc tool to build packages on build.opensuse.org. This tool can be extended with plugins, and we created a new plugin to use Log Detective.

So a packager can get some AI explanation about a build failure with a simple command:

$ osc ld -r -p devel:languages:python --package python-pygame --repo openSUSE_Tumbleweed
🔍 Running logdetective for python-pygame (openSUSE_Tumbleweed/x86_64)...
Log url: https://api.opensuse.org/public/build/devel:languages:python/openSUSE_Tumbleweed/x86_64/python-pygame/_log
🌐 Sending log to LogDetective API...
The RPM build process for the `python-pygame` package failed during the `%check` phase, as indicated by the "Bad exit status from /var/tmp/rpm-tmp.hiddzT (%check)" log snippet. This failure occurred due to seven test failures, one error, and six skipped steps in the test suite of the package.

The primary issue causing the build failure seems to be the traceback error in the `surface_test.py` file, which can be seen in the following log snippets:

[  278s] Traceback (most recent call last):
[  278s]   File "/home/abuild/rpmbuild/BUILD/python-pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-packages/pygame/tests/surface_test.py", line 284, in test_copy_rle

[  278s] Traceback (most recent call last):
[  278s]   File "/home/abuild/rpmbuild/BUILD/python-pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-packages/pygame/tests/surface_test.py", line 274, in test_mustlock_surf_alpha_rle

[  278s] Traceback (most recent call last):
[  278s]   File "/home/abuild/rpmbuild/BUILD/python-pygame-2.6.1-build/BUILDROOT/usr/lib64/python3.11/site-packages/pygame/tests/surface_test.py", line 342, in test_solarwolf_rle_usage

These traceback errors suggest that there are issues with the test functions `test_copy_rle`, `test_mustlock_surf_alpha_rle`, and `test_solarwolf_rle_usage` in the `surface_test.py` file. To resolve this issue, you should:

1. Identify the root cause of the traceback errors in each of the functions.
2. Fix the issues found, either by modifying the test functions or correcting the underlying problem they are testing for.
3. Re-build the RPM package with the updated `surface_test.py` file.

It's also recommended to review the other test failures and skipped steps, as they might indicate other issues with the package or its dependencies that should be addressed.

Or, if we are building locally, it's possible to run the analysis with the locally installed logdetective command line tool. This method is less accurate, though, because the model used is not smart enough yet:

$ osc build
[...]
$ osc ld --local-log --repo openSUSE_Tumbleweed
Found local build log: /var/tmp/build-root/openSUSE_Tumbleweed-x86_64/.build.log
🚀 Analyzing local build log: /tmp/tmpuu1sg1q0
INFO:logdetective:Loading model from fedora-copr/Mistral-7B-Instruct-v0.3-GGUF
...

Future plans

  1. Adding submit-log functionality to osc-ld-plugin (in progress). This will make it easier to collaborate with log-detective.com, allowing openSUSE packagers to send new data for model training.
  2. A Gitea bot. The openSUSE development workflow is moving to Git, so we can create a bot that comments on pull requests for packages that fail to build, similar to what the Fedora people have for the CentOS GitLab.
  3. Adding a new tab to the log-detective.com website to submit logs directly from OBS.

This was an interesting Summer of Code project, and I learned a bit about LLMs. I think this project has a lot of potential: it can be integrated into different workflows, and an expert LLM can be a great tool to help packagers by summarizing big logs, tagging similar failures, etc.

We have a lot of packages and a lot of data in OBS, so we should start to feed the LLM with this data; in combination with the Fedora project, Log Detective could become a real expert in open source RPM packaging.

Releasing version 17

Summer is vacation season, at least in Europe, and that usually means nothing seems to move too much. But since there is an exception to every rule, here comes a new Agama version to prove that not even heat can stop Free Software!

Agama 17 represents an important milestone for the Agama project. Taking into account how close the release of SUSE Linux Enterprise 16.0 is, we foresee this version of Agama (or a very similar one) becoming the endorsed installer for that release of SUSE's flagship distribution.

So let's see what's new.

Better representation of wired network connections

Starting with the web user interface, the page to display and configure a concrete wired interface was heavily reorganized to improve clarity and to correctly represent the situation in which several devices share the connection.

Several network devices using the same connection

Improvements in the storage user interface

The page to configure the storage setup was also slightly restructured. The information displayed at the "Installation Devices" section was reorganized with the goal of making its usage more understandable at first sight.

The reorganized “Installation Devices” section

This is just an intermediate step to our envisioned user interface. But that is how goals are reached - one step at a time.

Apart from reorganizing the information, the new user interface offers the possibility to directly use a disk (or a pre-existing RAID device) without creating partitions.

There is also a new option to re-scan the system in case new hardware devices have been plugged in, new logical devices (like RAIDs or LVM volume groups) have been created, or the user needs a second chance to enter an encryption password.

New options at the storage user interface

Last but not least, the user interface can now detect when a loaded Agama configuration affects the storage setup.

Alert the user if storage configuration changed

More options at the registration page

To finish the recap of the main changes at the user interface, we must mention the registration page. Apart from registering the system on the SUSE Customer Center (SCC), now the user interface can be used to register the system on a custom instance of RMT (Repository Mirroring Tool).

Registering on an instance of RMT

Users of such RMTs no longer need to use a configuration profile or Agama's command-line tools, as in previous versions of the installer.

Skipping SELinux configuration

The default installations of SUSE Linux Enterprise Server 16.0 and openSUSE Leap 16.0 will use SELinux as the default Linux kernel security module (LSM). But it is possible to adjust the software selection to not install SELinux or to install an alternative LSM (like AppArmor).

Previous versions of Agama did not manage that situation correctly. Agama always enforced the configuration of the default LSM for the product (SELinux in the mentioned cases).

Now the configuration of the LSM is based on the software selection. At the end of the installation process, Agama configures the installed security module. If several are installed, Agama configures one of them, with the product's default having precedence over the other candidates.

New options in the JSON configuration

As most of our readers know, the best way to access the full potential of Agama is to use a JSON (or Jsonnet) configuration. That goes far beyond the options offered by the web interface and is the default mechanism to configure unattended installations.

Agama 17 adds the possibility to configure VLAN interfaces. See the following example.

{
  "id": "vlan10",
  "method4": "manual",
  "method6": "disabled",
  "addresses": ["192.168.1.28/24"],
  "gateway4": "192.168.1.1",
  "nameservers": ["192.168.1.1"],
  "vlan": {
    "id": 10,
    "parent": "eth0"
  }
}

Now it is also possible to activate zFCP devices by specifying the corresponding section in the JSON configuration, like in this example.

{
  "zfcp": {
    "devices": [
      {
        "channel": "0.0.fa00",
        "wwpn": "0x500507630300c562",
        "lun": "0x4010403300000000"
      }
    ]
  }
}

This version also adds more flexibility in defining the list of software patterns to be installed. In previous versions, it was only possible to specify a whole list that would replace the default list of optional patterns defined by the product. Now it is possible to ask Agama to modify that list by just adding or removing certain patterns.

{
  "software": {
    "patterns": {
      "remove": ["selinux"],
      "add": ["apparmor", "gnome"]
    }
  }
}

But sometimes it is Agama that needs to ask the user something. For example, when it finds an encrypted device and needs the password to inspect its content. If that happens, Agama's user interface can show a dialog for the user to provide the answer. But there are more ways to communicate those answers to Agama, like using the command agama questions to load a file containing answers. Agama 17 introduces a third way: the new questions section in the JSON configuration. Again, let's illustrate it with an example.

{
  "questions": {
    "policy": "auto",
    "answers": [
      {
        "class": "storage.luks_activation",
        "password": "nots3cr3t"
      }
    ]
  }
}

Improvements for unattended installation

Users of the unattended installation will of course benefit from the mentioned new options added to the JSON configuration. But the improvements for automated installations go beyond that.

On the one hand, Agama no longer proceeds silently if it fails to fetch or process a configuration provided via the inst.auto boot argument. Instead, this new version displays an error message asking the user whether or not to retry. If the user decides not to retry, Agama does not start the auto-installation process. This is just a first step in the right direction; in the future, Agama will offer better diagnostics and the possibility to specify a new URL.

On the other hand, the new boot arguments inst.auto_insecure and inst.script_insecure make it possible to disable the SSL checks when fetching the configuration or the installation script for an unattended installation. That simplifies setting up the corresponding infrastructure in controlled environments.

More possibilities to patch the installer

Traditionally, the (open)SUSE installation process could be modified using RPM packages or a so-called DUD (driver update disk). Despite the name, the possibilities go way beyond updating drivers. A DUD might contain files to be copied to the installation media, scripts to be executed during the process, new or updated packages to be installed, etc. You can learn more in this classic document.

Previous versions of Agama already had some limited compatibility that allowed patching the installation process. But that compatibility improves a lot with Agama 17.

In Agama, the URL of an update is specified through the boot option inst.dud. As mentioned, an update can be a simple RPM package or a more complex DUD created with the mkdud tool. You can specify as many updates as you wish, as shown below. Do not forget the rd.neednet option if you need the network to retrieve the DUD.

inst.dud=http://192.168.122.1/agama.dud rd.neednet
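
If you need several updates, the option can simply be repeated on the boot command line (the URLs below are hypothetical):

inst.dud=http://192.168.122.1/agama.dud inst.dud=http://192.168.122.1/extra-fix.rpm rd.neednet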

Additionally, the inst.dud_insecure boot argument makes it possible to ignore SSL certificate problems (e.g. a self-signed certificate) when fetching updates.

The family grows

As you know, Agama can install different distributions, which are called "products" in Agama's jargon. Recently, Lubos Kocman contributed a new product definition for openSUSE Leap Micro 6.2. That means you will now find a new option on the initial page that allows you to select the distribution to install.

Welcome aboard, Leap Micro!

Web page reorganization

And last but not least, some news that is not about Agama itself but about the site that hosts this blog. As a first step towards extending Agama's documentation, we reorganized agama-project.github.io quite a lot. Check the new additions like the "Get started" and "Guides & HowTos" sections. The latter is at an early stage, but we are already working on growing it with more and more content. Of course, contributions are more than welcome.

As a drawback, many of the links you may have from previous sources may now be broken. We rearranged the content a lot and it was not feasible to define a redirection for every moved piece of information. Sorry about that.

We keep moving

As mentioned, the upcoming SUSE Linux Enterprise 16.0 will be distributed with the current version of Agama or with a very similar one. But that does not imply Agama development will slow down, quite the opposite. There is still a long road ahead and we will keep implementing fixes and new features that you could always test with our constantly updated testing ISO.

Of course, we will keep you updated on this blog. But if you have questions or want to get involved further, do not hesitate to contact us at the Agama project on GitHub or in our #yast channel at Libera.chat.

Have a lot of fun!

Building Edge AI Infrastructure with KVM, openSUSE, and Ollama

Edge AI infrastructure is transforming how we deploy machine learning workloads, bringing computation closer to data sources while maintaining privacy and reducing latency. This comprehensive guide demonstrates building an edge analytics platform using KVM virtualization, openSUSE Leap (15.6), K3s, and Ollama for local AI inference.

Our architecture leverages a KVM homelab infrastructure originally set up by my Google Summer of Code mentor. This setup was built to create specialized AI nodes in a distributed cluster, with Longhorn providing shared storage for models and application data. Each component is chosen for reliability, scalability, and edge-specific requirements.

Prerequisites and Architecture Overview

This setup requires:

  • KVM hypervisor with existing infrastructure
  • Minimum 8GB RAM per VM (16GB recommended for larger models)
  • Network storage for distributed file system
  • Basic Kubernetes and networking knowledge

The final architecture includes multiple specialized nodes, distributed storage, monitoring, and load balancing for high-availability AI inference.

VM Foundation Setup

Creating the Edge AI Node

Start with a clean VM deployment using established automation tools:

cd ~geeko/bin/v-i
sudo ./viDeploy -c ./VM-K3s.cfg -n edge-ai-01

System Configuration

Complete the openSUSE installation with consistent settings across all nodes:

Installation Settings:

  • Keyboard: US
  • Timezone: UTC
  • Root password: gsoc (consistent across cluster)

Network Configuration:

# Configure static networking
cd /etc/sysconfig/network
cp ifcfg-eth1 ifcfg-eth0
vi ifcfg-eth0

Edit network configuration:

STARTMODE=auto
BOOTPROTO=static
IPADDR=172.xx.xxx.xx/24

Set hostname and disable firewall for edge deployment:

hostnamectl set-hostname edge-ai-01
echo "172.xx.xxx.xx edge-ai-01.local edge-ai-01" >> /etc/hosts
systemctl disable --now firewalld
systemctl restart network

Essential Package Installation

Install required components for Kubernetes and distributed storage:

zypper refresh
zypper install -y open-iscsi kernel-default e2fsprogs xfsprogs apparmor-parser
systemctl enable --now iscsid

Storage Configuration for Longhorn

Prepare dedicated storage for distributed AI workloads:

lsblk
fdisk /dev/vdb
# Create new GPT partition table and primary partition
mkfs.ext4 /dev/vdb1
mkdir -p /var/lib/longhorn
echo "/dev/vdb1 /var/lib/longhorn ext4 defaults 0 0" >> /etc/fstab
mount -a
systemctl reboot
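
Since fdisk is interactive, a scripted equivalent may be more convenient when preparing several nodes. A minimal sketch using sgdisk (from the gptfdisk package), assuming /dev/vdb is the dedicated disk as above:

# wipe any existing partition table, then create one Linux partition spanning the whole disk
sgdisk --zap-all /dev/vdb
sgdisk --new=1:0:0 --typecode=1:8300 /dev/vdb
mkfs.ext4 /dev/vdb1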

Kubernetes Cluster Integration

Joining the Edge AI Cluster

Access your Rancher management interface to create a dedicated AI cluster:

  1. Navigate to Rancher WebUI: http://172.16.200.15
  2. Create → Custom cluster
  3. Name: edge-ai-cluster
  4. Select K3s version
  5. Copy and execute registration command:
curl -fsSL https://get.k3s.io | K3S_URL=https://172.xx.xxx.xx:6443 K3S_TOKEN=your-token sh -

Verify cluster connectivity:

kubectl get nodes
kubectl get pods --all-namespaces

Ollama Installation and Configuration

Installing Ollama

Deploy Ollama for local LLM inference:

curl -fsSL https://ollama.com/install.sh | sh
systemctl enable --now ollama

Cluster Access Configuration

Configure Ollama for distributed access:

mkdir -p /etc/systemd/system/ollama.service.d
vi /etc/systemd/system/ollama.service.d/override.conf

Add cluster binding:

[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"

Apply configuration:

systemctl daemon-reload
systemctl restart ollama
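
To confirm that the service is now reachable over the network, query its HTTP API from another cluster member (substitute your node's address; /api/tags lists the installed models):

curl http://172.xx.xxx.xx:11434/api/tags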

Model Deployment Strategy

Deploy models based on hardware capabilities:

# For 8GB RAM nodes - quantized models
ollama pull phi3

# For 16GB+ RAM nodes - higher precision (a larger variant; phi3:14b is one example)
ollama pull phi3:14b

# Verify installation
ollama list

Quantized models (q4_K_M) reduce memory usage by ~75% while maintaining performance for edge analytics.
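
Exact tag names vary per model in the Ollama registry, but an explicitly quantized build can be selected by pulling its tag; the tag below is illustrative:

ollama pull llama3:8b-instruct-q4_K_M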

Edge Analytics Platform Deployment

Repository Setup

Clone the Edge Analytics ecosystem:

git clone https://github.com/rudrakshkarpe/Edge-analytics-ecosystem-workloads-openSUSE.git
cd Edge-analytics-ecosystem-workloads-openSUSE

Configuration for Cluster Deployment

Update Kubernetes manifests for distributed deployment:

vi k8s-deployment/streamlit-app-deployment.yaml

Modify Ollama endpoint:

- name: OLLAMA_BASE_URL
  value: "http://172.xx.xxx.xx:11434"

Application Deployment

Deploy in correct order for dependency resolution:

kubectl apply -f k8s-deployment/namespace.yaml
kubectl apply -f k8s-deployment/storage.yaml
kubectl apply -f k8s-deployment/streamlit-app-deployment.yaml
kubectl apply -f k8s-deployment/ingress.yaml

Verify deployment status:

kubectl get pods -n edge-analytics
kubectl logs -f deployment/edge-analytics-app -n edge-analytics

Distributed Storage with Longhorn

Longhorn Deployment

Deploy distributed storage system:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

Wait for all pods to be running:

kubectl get pods -n longhorn-system -w

Configure Default Storage Class

Set Longhorn as default for persistent volumes:

kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
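
Verify the change by listing the storage classes; longhorn should now be marked "(default)":

kubectl get storageclass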

Multi-Node Scaling and Specialization

Additional Node Deployment

Scale the cluster with specialized nodes:

Node IP Assignment:

  • edge-ai-02: 172.16.220.11
  • edge-ai-03: 172.16.220.12

Node Labeling for Workload Distribution

Label nodes based on capabilities:

# GPU-enabled nodes
kubectl label node edge-ai-02 node-type=gpu-inference

# CPU-optimized nodes
kubectl label node edge-ai-03 node-type=cpu-inference

Specialized Model Deployment

Deploy appropriate models per node type:

# GPU nodes - larger models (e.g. the 14b variant)
ssh root@172.16.220.11
ollama pull phi3:14b

# CPU nodes - optimized quantized models
ssh root@172.16.220.12
ollama pull phi3

Production Monitoring and Operations

Monitoring Stack Deployment

Deploy comprehensive observability:

kubectl apply -f k8s-deployment/monitoring.yaml

Service Access

For development and testing access:

# Edge Analytics application
kubectl port-forward svc/edge-analytics-service 8501:8501 -n edge-analytics

# Prometheus metrics
kubectl port-forward svc/prometheus 9090:9090 -n monitoring

# Grafana dashboards  
kubectl port-forward svc/grafana 3000:3000 -n monitoring

Operational Commands

Model Management:

# Check model status across cluster
kubectl exec -it daemonset/ollama -n edge-analytics -- ollama list

# Update models cluster-wide
kubectl exec -it daemonset/ollama -n edge-analytics -- ollama pull llama3:latest

Scaling Operations:

# Horizontal scaling
kubectl scale deployment edge-analytics-app --replicas=3 -n edge-analytics

# Resource monitoring
kubectl top nodes
kubectl top pods -n edge-analytics

Access Points and Integration

Service URLs:

  • Edge Analytics UI: http://172.xx.xxx.xx:8501
  • Rancher Management: http://172.16.200.15
  • Prometheus Metrics: http://172.xx.xxx.xx:9090
  • Grafana Dashboards: http://172.xx.xxx.xx:3000 (admin/admin)

Key Advantages of This Architecture

  1. Privacy-First: All AI inference happens locally, ensuring data never leaves your infrastructure
  2. Scalable: Kubernetes orchestration enables easy horizontal scaling as workloads grow
  3. Resilient: Distributed storage and multi-node deployment provide high availability
  4. Cost-Effective: Utilizes existing hardware infrastructure without cloud dependencies
  5. Flexible: Support for various model sizes and quantization levels based on hardware

Troubleshooting Common Issues

VM Connectivity:

virsh list --all
virsh console edge-ai-01

Kubernetes Issues:

kubectl describe node edge-ai-01
kubectl get events --sort-by=.metadata.creationTimestamp

Ollama Service Problems:

systemctl status ollama
journalctl -u ollama -f
curl http://172.xx.xxx.xx:11434/api/tags

This edge AI infrastructure provides a robust foundation for deploying local LLMs with enterprise-grade reliability, enabling organizations to leverage AI capabilities while maintaining complete control over their data and compute resources.

For advanced configurations and additional features, explore the complete repository documentation and consider integrating with external tools like vector databases for enhanced RAG capabilities.

Huge shoutout to my mentors Bryan Gartner, Terry Smith, and Ann Davis for making this setup possible.

Request Workflow Redesign: RPM Lint Results Filtering

In the effort to bring the full release of this feature sooner, we continued our work on request workflow redesign with a small batch of changes and improvements. You can try these enhancements right now and bring us much needed feedback, as we inch closer to replacing the workflow with the new design. In this roundup, we bring one more feature to the RPM Lint page introduced earlier this month. We started the redesign of...

Quick Install: Kubernetes Cluster (RKE2) with Warewulf

In a previous blog post we've learned how we can leverage Warewulf, a node deployment tool used in High Performance Computing (HPC), to deploy a K8s cluster using RKE2. The deployment was described step by step, explaining the rationale behind each step.
This post supplements the previous one by providing a quick install guide taking advantage of pre-built containers. With the help of these, we are able to perform the deployment of an RKE2 cluster with a few simple commands.
Warewulf uses PXE boot to bring up the machines, so your cluster nodes need to be configured for this, and the network needs to be configured so that the nodes will PXE-boot from the Warewulf deployment server. How to do this and how to make nodes known to Warewulf is covered in a post about the basic Warewulf setup.

Prerequisite: Prepare Configuration Overlay Template for RKE2

K8s agents and servers use a shared secret for authentication - the connection token. Moreover, agents need to know the host name of the server to contact.
This information is provided in a config file: /etc/rancher/rke2/config.yaml. We let Warewulf provide this config file as a configuration overlay. A suitable overlay template can be installed from the package warewulf4-overlay-rke2:

zypper -n install -y warewulf4-overlay-rke2

Set Up the Environment

First we set a few environment variables which we will use later.

cat > /tmp/rke2_environment.sh <<"EOF"
server=<add the server node here>
agents="<list of agents>"
container_url=docker://registry.opensuse.org/network/cluster/containers/containers-15.6
server_container=$container_url/warewulf-rke2-server:v1.32.7_rke2r1
agent_container=$container_url/warewulf-rke2-agent:v1.32.7_rke2r1
token="$(for n in {1..41}; do printf %2.2x $(($RANDOM & 0xff));done)"
EOF

Here we assume we have one server node and multiple agent nodes, whose host names you will have to fill in in the above script. Lists of nodes can be specified as a comma-separated list, but also as a range of nodes using square brackets (for example, k8s-agent[00-15] would refer to k8s-agent00 through k8s-agent15), or as lists and ranges combined.
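
For example, to check which nodes a range expression expands to before assigning profiles (the node names here are hypothetical):

wwctl node list k8s-agent[00-15]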

Also, the script generates a connection token for the entire cluster. This token is used to allow agents (and secondary servers) to connect to the primary server [1]. If we set up the server persistently outside of Warewulf, we either need to add this token to the config file /etc/rancher/rke2/config.yaml on the server before we start the rke2-server service for the first time, or grab it from the server once it has booted.
In this example, we are using a 'short token' which does not contain the fingerprint of the root CA of the cluster. For production environments, you are encouraged to create certificates for your K8s cluster beforehand and calculate the fingerprint of the root CA for use in the agent token. Refer to the appendix below for how to do so.
In case your server is already running, you will have to grab the token from it instead and set the variable token in the script above accordingly. It can be found in the file /var/lib/rancher/rke2/server/node-token.
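
A minimal way to do that, assuming root SSH access to the server and the variables from the environment script above:

token="$(ssh root@${server} cat /var/lib/rancher/rke2/server/node-token)"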

Finally, we set the environment:

source /tmp/rke2_environment.sh

Obtain the Node Images

The openSUSE registry has pre-built images for K8s agents. Taking advantage of these removes the tedious task of preparing the base images. We pull the agent image and build the container:

wwctl container import ${agent_container} leap15.6-RKE2-agent
wwctl container build leap15.6-RKE2-agent

In the previous blog post we've talked about the implications of deploying K8s servers using Warewulf. If we plan to deploy the server using Warewulf as well, we also pull the server container from the registry and build it:

wwctl container import ${server_container} leap15.6-RKE2-server
wwctl container build leap15.6-RKE2-server

Set up Profiles

We utilize Warewulf profiles to configure different aspects of the setup and assign these profiles to the different types of nodes as needed.

Set up rootfs

The settings in this profile will ensure that a rootfs type is set so that a call to pivot_root() performed by the container runtime will succeed.

wwctl profile add container-host
wwctl profile set --root tmpfs -A "crashkernel=no net.ifnames=1 rootfstype=ramfs" container-host

Set up Container Storage

Since the deployed nodes are ephemeral, that is, they run out of a RAM disk, we may want to store container images as well as administrative data on disk. Luckily, Warewulf is capable of configuring disks on the deployed nodes. For this example, we assume that the container storage resides on the disk /dev/sdb, that we use the entire disk, and that we want to scrap the data at every boot. Of course, other setups are possible; check the upstream documentation for details.

wwctl profile add container-storage
wwctl profile set --diskname /dev/sdb --diskwipe "true" \
      --partnumber 1 --partcreate --partname container_storage \
      --fsname container_storage --fswipe --fsformat ext4 \
      --fspath /var/lib/rancher \
      container-storage

Note: When booting the node from Warewulf, the entire disk (/dev/sdb in this example) will be wiped. Make sure it does not contain any valuable data!

Set up the Connection Key

When we set up our environment, we generated a unique connection key which we will use throughout this K8s cluster to allow agents (and secondary servers) to connect to the primary server.

wwctl profile add rke2-config-key
wwctl profile set --tagadd="connectiontoken=${token}" \
              -O rke2-config rke2-config-key

Set up the Server Profile

Here, we set up a profile which is used to point agents - and secondary servers - to the primary server:

wwctl profile add rke2-config-first-server
wwctl profile set --tagadd="server=${server}" -O rke2-config rke2-config-first-server

Set up and Start the Nodes

Now, we are ready to deploy our nodes.

Set up and deploy the Server Node

If applicable, set up the server node:

wwctl node set \
      -P default,container-host,container-storage,rke2-config-key \
      -C leap15.6-RKE2-server ${server}
wwctl overlay build ${server}

Now, we are ready to PXE-boot the first server.

Set up and deploy the Agent Nodes

We now configure the agent nodes and build the overlays:

wwctl node set \
      -P default,container-host,container-storage,rke2-config-key,rke2-config-first-server \
      -C leap15.6-RKE2-agent ${agents}
wwctl overlay build ${agents}

Once the first server node is up and running we can now start PXE-booting the agents. They should connect to the server automatically.

Check Node Status

To check the status of the nodes, we connect to the server through ssh and run

kubectl get nodes

to check the status of the available agents.

Appendix: Set Up CA Certificates and Calculate the Node Access Token

Rancher provides a script (generate-custom-ca-certs.sh) to generate the different certificates required for a K8s cluster. First, download this script to the current directory:

curl -sOL --output-dir . \
    https://github.com/k3s-io/k3s/raw/master/contrib/util/generate-custom-ca-certs.sh

and run the following commands:

export DATA_DIR=$(mktemp -d $(pwd)/tmp-XXXXXXXX)
chmod go-rwx $DATA_DIR
# the downloaded script is not executable yet
chmod +x ./generate-custom-ca-certs.sh
./generate-custom-ca-certs.sh
wwctl overlay rm -f rke2-ca
wwctl overlay create rke2-ca
files="server-ca.crt server-ca.key client-ca.crt client-ca.key \
    request-header-ca.crt request-header-ca.key \
    etcd/peer-ca.crt etcd/peer-ca.key etcd/server-ca.crt etcd/server-ca.key"
for d in $files; do
    d=${DATA_DIR}/server/tls/$d
    wwctl overlay import --parents rke2-ca $d /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww
    wwctl overlay chown rke2-ca /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww 0
    wwctl overlay chmod rke2-ca /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww $(stat -c %a $d)
done
ca_hash=$(openssl x509 -in $DATA_DIR/server/tls/root-ca.crt -outform DER | sha256sum)
token="$( printf "K10${ca_hash%  *}::server:"; \
         for n in {1..41}; do printf %2.2x $(($RANDOM & 0xff)); done )"

The certificates will be passed to the K8s nodes in the system overlay. If you are setting up multiple K8s clusters, you may want to create separate certificates for each cluster and place the overlay into the cluster's namespace instead of using rke2-ca.
Before you delete $DATA_DIR, you may want to save the root and intermediate certificates and keys ($DATA_DIR/server/tls/root-ca.*, $DATA_DIR/server/tls/intermediate-ca.*) to a safe location. If you have a root and intermediate CA you want to reuse, you may copy these into the directory $DATA_DIR/server/tls before running generate-custom-ca-certs.sh. Use $token as the agent token instead.

  1. Secondary servers need to find the primary server endpoint just like agent nodes. Therefore, they receive the access token like nodes do.

Planet News Roundup

This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.

The highlights below were featured on the community's blog feed aggregator from August 16 to 22. Some of the most recent blogs involve updates on Tumbleweed developments, KDE, and an extension of the Call for Hosts for the 2026 openSUSE.Asia Summit.

Here is a summary and links for each post:

openSUSE Welcome Receiving Makeover

The familiar opensuse-welcome-launcher is being phased out in favor of a new openSUSE-welcome greeter. Instead of relying on its own autostart, the greeter will coordinate desktop-specific greeters like gnome-tour and plasma-welcome. This approach allows welcome screens to appear not only on first boot but also after major updates and will help the project retire its legacy Qt5-based greeter.

DeepSeek V3.1: Why the Update Matters for the Market

A look at the newly announced DeepSeek V3.1 model from the Chinese AI startup. While technical details remain limited, the update promises a longer context window and incremental improvements at a fraction of the training cost of Western rivals. The release highlights the growing pressure on OpenAI and Anthropic as DeepSeek continues its fast-paced, cost-efficient development strategy.

Podcast Linux #27 – Slimbook One Special

The KDE Blog continues its archive project for Podcast Linux by revisiting the 27th episode, where Juan Febles reviews the Slimbook One desktop. Although the podcast is no longer active, its episodes remain a valuable resource for Linux enthusiasts.

KDE Gear 25.08 – Productivity Updates

This post highlights productivity-focused applications in the KDE Gear 25.08 release. Akonadi has seen major memory optimizations, KOrganizer introduces an improved date selector and search tooltips, and Kleopatra now offers independent encrypted notepad windows for better multitasking.

Accessibility with Free Technologies – Episode 6

The sixth episode of the podcast series explores accessibility topics with Eva Marco, Vicent Sanchis, and others. Discussions cover OpenStreetMap data visualization for accessibility, inclusive talent initiatives, testing adaptive hardware, and how AI is contributing to more inclusive design.

Cloud Containers: What They Are and How to Optimize Them

An overview of cloud container management, explaining how containers streamline deployment, reduce resource usage, and improve security. The article also details strategies for efficient management, from resource allocation to monitoring, and highlights OVHcloud’s solutions.

KDE Gear 25.08 – Itinerary Updates

KDE’s travel planner app, Itinerary, now allows manual input of bus and train journeys, provides live station maps, improved delay alerts, and expanded alternative connection searches to include ferries and flights. Enhanced live maps now display train infrastructure details via OpenRailwayMap.

Tumbleweed Review of the Week 33

This week brought eight new snapshots to openSUSE Tumbleweed, delivering updates such as Firefox 141.0.2, Linux kernel 6.16.0, KDE Plasma 6.4.4, KDE Frameworks 6.17.0, Postfix 3.10.3, OpenSSL 3.5.2, and more. Preparations are underway for upcoming updates including KDE Gear 25.08.0, kernel 6.16.1, and Python 3.13.7.

Lots of Polish – This Week in Plasma

Nate Graham’s weekly Plasma update, translated into Spanish, emphasizes bug fixes, performance improvements, and UI polish over new features. Highlights include accessibility improvements in notifications, refined Flatpak permissions, removal of outdated Breeze app icons, and numerous fixes across Plasma 6.4.5 and Frameworks 6.18.

Deadline Extended – Call for Host openSUSE.Asia Summit 2026

The deadline for proposals to host the 2026 openSUSE.Asia Summit has been extended to August 27, 2025. The selected host will be announced during this year’s Summit in Faridabad. Communities are encouraged to take this opportunity to showcase their region and bring openSUSE contributors together.

KDE Gear 25.08 – Dolphin Updates

The Dolphin file manager adds new search options with both indexed and simple search, tighter integration with Filelight for disk usage visualization, and improved view mode controls. These enhancements make Dolphin even more powerful for file management on KDE.

View more blogs or learn to publish your own on planet.opensuse.org.