
Releasing version 18

Almost three months have passed since our previous blog post, but that does not mean Agama development has stalled. The team is actually working on two parallel tracks. On the one hand, we are revamping several Agama internals and greatly improving its HTTP API. On the other hand, we keep implementing small improvements and fixes on top of the Agama 17 codebase, to the point that it deserves a new version number. So please welcome Agama 18.

This new version delivers several features that you may experience if you install SUSE Linux Enterprise 16.0 or Leap 16.0 using Agama, since the official installation media for those two distributions include an intermediate version between Agama 17 and 18.

But let's take a look at the whole set of new features, starting with the changes introduced in the user interface.

Rearranged information in the storage section

The most potentially complex part of the web interface is the storage configuration page. Due to its flexibility and the variety of options it offers, we are in constant search for the best way to present the information there. Agama 18 is our latest step in that journey.

Reorganized storage page

The ultimate goal is to offer a more self-explanatory interface that allows newbies to understand what is going to happen to their disks, while making it possible for expert users to find the advanced options they need. See more information about the changes in the following pull request.

Enhanced interface to manage DASD devices

Storage management can even be more challenging for users of System/390 mainframes. Even before tweaking the configuration in the already mentioned storage page, they usually need to manage the DASD devices in order to activate and maybe format the disks that will be configured.

Agama's web interface for handling those DASD devices was greatly improved in version 18, making it possible to work with a large number of devices in a more convenient way.

Revamped page to manage DASD

Not many of our readers have access to an S/390 system, but you can still get a very good overview of the new page thanks to the many screenshots in the description of the corresponding pull request.

Improvements when using the JSON configuration

As most of our readers know, the web interface is just one of the possible ways to manage the installation. The full potential of Agama can be unleashed by describing the configuration using JSON (or Jsonnet), as is done in the unattended installation process.

Agama 18 introduces several improvements in the way the configuration can be specified and managed, but we would like to highlight three headlines in that regard.

  • Ability to validate a JSON profile locally, with no Agama instance running.
  • Support to manage answers to Agama questions directly in the profile.
  • Possibility to add and remove patterns to install, in addition to replacing the full list.

The starter guide and the documentation page offer more information about how to use the power of JSON to get the most out of Agama.

Installer self-update functionality

As you may know, YaST has the ability to update itself at the beginning of the installation process, fetching an up-to-date version of the installer and the installation medium from a special dedicated repository. That feature debuted in SUSE Linux Enterprise 12 SP2 and, of course, we don't want to lose it in SLE 16.X.

The SLE 16 family will also offer self-update repositories and the Agama installation media are now able to use them to update the installation environment very early in the process.

Hello SLE 16.1

SUSE Linux Enterprise 16 is almost ready to be served. But the activity never ceases at the Open Source kitchen and the first ingredients of SLE 16.1 are already being selected and mixed. Any serious cook knows that tasting the intermediate steps is key to reach the desired result at the end. For that purpose, Agama 18 already offers the possibility to install the early alpha versions of SLE 16.1 and surely openSUSE Leap 16.1 will follow shortly.

See you soon, Kalpa

But not all the news about the list of distributions supported by Agama is good. Recently the development team behind openSUSE Kalpa contacted us to clarify that they have no desire or intention to support Kalpa installations with advanced custom partitioning schemas, like the ones that can be achieved using Agama.

When installing openSUSE Kalpa, they would like the Agama storage page to be reduced to a single disk selector in which the distribution would be installed as a single partition. Since there is currently no way for the distributions to configure the Agama web interface in that way, it was decided to remove openSUSE Kalpa from the list of distributions installable with Agama (so-called "products" in Agama jargon).

We hope this is not a definitive farewell since, as you can see in Agama's Roadmap, we plan to improve support for transactional distributions in the mid-term future.

Officially dropped support for i586

And speaking of the future: in order to submit Agama 18 to openSUSE Tumbleweed, we had to exclude the i586 architecture. Unfortunately, several other parts of Tumbleweed are already not built for that 32-bit architecture, starting with Mozilla Firefox and Chromium. Now we have followed the same path, so it can be said that with Agama 18 we officially dropped support for i586.

We are not against re-enabling the builds for that architecture, but someone would need to create an alternative version of the installation media that can be built with the software included in the i586 version of Tumbleweed (which excludes SUSE Connect and Mozilla Firefox). In the short term, the Agama development team has no human resources to invest on that front.

Working on the future

As mentioned at the beginning of this post, the Agama team is actively working on a big revamp of several internals and a new version of the HTTP API that should be the cornerstone for any future development. As you can see in the roadmap, that new version will keep us busy for several months. As a side effect, we expect very little activity in this blog in the short term.

Of course you can always reach us at the Agama project at GitHub and our #yast channel at Libera.chat.

Don't forget to have a lot of fun!

darix posted at

Declarative RPM

Declarative packaging is all the rage lately. I stopped counting how often I was asked “Do you have something like nix packages?”

Back in 2019 Florian Festi gave a nice talk called Re-Thinking Spec Files. It showed many nice ideas on how packaging could be easier.

Since RPM 4.20 we actually have declarative builds. Let us explore this a bit.

Exploring the possibilities

One of the first packages in openSUSE to switch to declarative builds was gnome-text-editor. But to focus on the basics we will use my converted minisign package.


Tumbleweed – Review of the week 2025/46

Dear Tumbleweed users and hackers,

The release cadence of snapshots was substantially higher than last week. A whole set of six snapshots (1106, 1108, 1109, 1110, 1111, and 1112) has passed through the ether, out into the wild, and onto your systems. Quality Assurance (QA) took a bit of a dive, as we ‘accidentally’ cleaned up the default GNOME installation and found many jobs relying on those legacy tools.

Specifically, we have dropped the X11 pattern from a standard GNOME installation. This matches what GNOME has done since version 49: making it Wayland-only.

The very positive side effect, from our perspective, is that legacy tools like xterm are no longer installed. The negative aspect, however, is that openQA relied on xterm for all kinds of tests (all console interaction inside the graphical user interface was handled by xterm).

That’s, of course, just one of the changes done. There were also a large number of other changes, like:

  • GRUB2-BLS as the default bootloader selected by the installer on UEFI-based systems;
  • KDE Plasma 6.5.2
  • KDE Gear 25.08.3
  • Linux kernel 6.17.7
  • Qt 5.15.18
  • openSSH 10.2p1
  • Qemu 10.1.2
  • Coreutils 9.9
  • LLVM 21.1.5
  • LXQt 2.3.0
  • kernel hardening: prevent normal users from seeing dmesg

Staging looks quite poor and empty at the moment. Keep the submissions coming! Currently, we are testing:

  • GStreamer 1.26.8
  • Mesa 25.2.7
  • Mozilla Firefox 145.0
  • Linux kernel 6.17.8
  • Systemd 258.2
  • openSSL 3.6.0
the avatar of openSUSE News

Planet News Roundup

This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.

The featured highlights below, listed on the community’s blog feed aggregator, are from November 7 to 14.

Blog posts this week highlight KDE Plasma improvements, a translators’ meetup at LinuxDays, a new community-recognition app from Hack Week, a security advisory from the SUSE Security Team and more.

Here is a summary and links for each post:

GRUB2-BLS in openSUSE Tumbleweed is now the default

The GRUB2‑BLS boot loader variant, incorporating Fedora-derived patches and BLS entry support, is now the default for openSUSE Tumbleweed when installed via YaST. The change streamlines the boot configuration process and removes the need for manual tools like grub2-mkconfig. The change also enables improved full-disk encryption workflows using systemd tools together with TPM 2 or FIDO2 tokens.

Privilege Escalation from lightdm Service User to root in KAuth Helper Service (CVE-2025-62876)

The lightdm‑kde‑greeter’s D-Bus/KAuth helper allowed the lightdm service user to escalate privileges to root due to unchecked file copy operations. The upstream fix adds safer file-descriptor passing and drops root privileges before writing into lightdm’s directory.

The new features of Plasma 6.5

The KDE Blog post describes several visual improvements for Plasma 6.5, including rounded corners on all four corners of windows and automatic switching between light and dark themes depending on the time of day. It also introduces the ability to pin items to the clipboard for frequently used text, and has improvements for drawing tablets in the system settings.

Hack Week Project Seeks to Launch Kudos

A new Hack Week 25 project within the openSUSE Project aims to create an application called “Kudos” to publicly recognise all kinds of contributor efforts. The project emphasises recognising behind-the-scenes contributions such as translations, documentation, infrastructure maintenance, moderation and more.

Meeting of Software Translators from English to Czech at LinuxDays

The blog post reports on a meetup of Czech translators at LinuxDays 2025 where participants discussed the shortage of active translators and outdated documentation in free software projects. L10N.cz was highlighted as a hub for Czech localization efforts and as a place where collaboration could be improved. The article calls for making translation tasks easier for newcomers.

Sixth Update of Plasma 6.4

The KDE Blog announced the sixth maintenance release of KDE Plasma 6.4, which focuses on stability, translation improvements and bug fixes. It improves the desktop experience without altering functionality. Key fixes include better Bluetooth device identification, improved keyboard navigation across widgets, enhanced Wayland stability and updates to the Discover software center.

Episode 57 of KDE Express: How to Report a Bug So Albert Can Fix It

The KDE Blog covers a podcast episode built from the audio of a talk given by Albert Astals Cid at AkademyES 2025 in Málaga, which walks through best and worst practices for filing bugs in the KDE Plasma ecosystem. It stresses that you should use the correct bug tracker at Bugzilla (https://bugs.kde.org), ensure crash reports include symbols, be precise if it’s a functional error, and think twice before filing a “wish” or feature request.

The Printer of Stallman on Compilando Podcast

This KDE Blog post highlights episode 63 of Compilando Podcast, titled “La impresora de Stallman” (“Stallman’s Printer”), which recounts how a simple printer jam at Richard Stallman’s workplace sparked the free-software movement. It reflects on the more than forty-year history of the GNU Project and Stallman’s ethical-technical vision that changed computing.

Virtual Desktops Only on the Primary Screen: This Week in Plasma

The KDE Blog discusses the upcoming version of KDE Plasma 6.6 and how changes will help users with multi-monitor setups by keeping desktop switching confined to one screen while other monitors stay fixed. The blog also highlights other forthcoming features such as QR-code network connection via the network widget and expanded crash reporting in DrKonqi.

openSUSE Tumbleweed review of week 45 of 2025

Victorhck’s blog recaps updates to openSUSE Tumbleweed for week 45 and notes the slow pace of new snapshots starting off in November. A major upcoming change highlights the switch to grub2‑bls as the default boot loader for UEFI installations.

Afternoon Session: Part 1 of the XI Jornadas Anuales de Wikimedia España

The KDE Blog post covers sessions from the first afternoon of the Wikimedia España annual conference, which gathered participants to discuss open knowledge, cultural heritage and digital collaboration. The write-up underlines the importance of building inclusive communities and improving coordination among translation and outreach efforts.

Tumbleweed – Review of the week 2025/45

DimStar’s blog summarised the previous week’s software package updates in openSUSE Tumbleweed. It highlighted the major change of switching to GRUB2‑BLS, as covered above.

View more blogs or learn to publish your own on planet.opensuse.org.

the avatar of danigm's Blog

openSUSE: The new git workflow

openSUSE is migrating its package source management from osc to git. I will try to explain here what the "git workflow" in openSUSE is and give some practical information for contributors and maintainers.

You can find some documentation in the openSUSE wiki.

Why?

openSUSE uses the Open Build Service to store package sources (.spec files and patches). The source control is similar to Subversion, so you can use osc checkout, osc commit, osc log, etc., to work with any package source.

I don't know all the reasons for the change, and different people will find different reasons to support it or oppose it. I will just write here what I consider good reasons to migrate:

  • Almost everyone has moved to git by now, so today git is the default source code management (SCM) tool; almost everybody knows GitHub, so it's good to be "standard".
  • With git's concept of "cheap" branches, it could be easier to maintain different versions of the same package: same repository, different branches, instead of actual forks.
  • It decouples package source management from building. OBS does a lot of things; moving the source code and review process to gitea could reduce that complexity (but requires some integration).

From OBS to gitea

All the development happens in OBS, so the classic workflow happens entirely there:

  • developer: branch -> checkout -> modify -> commit -> submit request
  • maintainer: review -> accept/decline

And everything is integrated in OBS, which does the corresponding build of the sources, runs checks with services, forwards requests, and so on.

The new workflow moves almost all the interaction from OBS to gitea. OBS is still a really important piece of software in the process, but in the new workflow it acts as a build backend, so user interaction happens in the git repo in gitea.

The new git workflow happens in gitea:

  • developer: fork -> clone -> modify -> commit -> push -> pull request
  • maintainer: review -> approve/decline

As you may notice, the "default" workflow is very similar; the difference is in the tools: the classic workflow happens on build.opensuse.org and the new one on src.opensuse.org. The tools for working with the sources also differ: osc in the former, git in the latter.

And with this new workflow, there are bots in gitea that send the changes to OBS, so for any pull request you create you will get the build results and the bot's approval if everything builds.

How to modify a package (Leap 16.0)

Note that you need an openSUSE account to work with the gitea instance, and don't forget to configure your SSH key so you can push using the gitea@src.opensuse.org remote.

So as a contributor, this is the basic workflow to follow to update a package in openSUSE:

  1. Look for the source package in the pool (https://src.opensuse.org/pool)
  2. Fork it in your user space, click the top right button. Skip this step if you have forked before
  3. Clone your fork and work on that repo: create a new branch, modify the spec file, add new sources, etc. Make sure to update the .changes file accordingly; you can still use osc vc to do that.
  4. You can build your package locally with osc build, as usual, but you will need to specify the build project:
$ git obs meta pull
$ git obs meta set --project openSUSE:Backports:SLE-16.0
$ osc build
  5. Once you are happy with your changes, commit, push to your fork and create a pull request. The pull request should target the pool repo and the desired product branch. Right now you can only target leap-16.0, as Factory is not migrated yet.

Devel projects

Some devel projects have now been migrated to git, so a similar workflow is available for modifying these packages in Tumbleweed. The development of these packages does not happen in the pool, so you first need to find the devel project.

There's no simple way to discover a package's actual devel source; as far as I know, the easiest way is:

  1. Look for the devel project of the package in the Factory list, (ex python-pytest)
  2. Go to the devel project in OBS (ex devel:languages:python:pytest)
  3. Click on the link This project is managed in SCM

Once you have located the desired package to modify, the workflow is similar to what I explained before, but instead of forking from the pool, you should fork from the devel project. For example, if you want to modify python-pytest you should fork from https://src.opensuse.org/python-pytest/python-pytest and create the pull request against the main branch of that repo.

Adding a new package follows a different process that's documented in the wiki.

What's migrated?

As I said, the migration is happening right now, so not everything is migrated yet. The first project migrated was Leap 16.0, and we are slowly migrating some devel projects.

Right now from the python-maintainers team, we've started the migration of two subprojects:

You can find more information about the migration current state in the wiki: https://en.opensuse.org/opensuse:obs_to_git#codestream_project_status_table

darix posted at

bcond and defaults

A while ago we looked at conditional building in RPM. One point we did not cover yet was how we can control build conditionals within the Open Build Service.

Back in the good old days

When using osc build --with=build_docs --without=run_tests rpm internally defines 2 variables:

_with_build_docs
_without_run_tests

This is also something we used when integrating it with the build service:

%define _with_ruby34 1
%define _without_apparmor 1
Macros:
%_with_ruby34 1
%_without_apparmor 1
:Macros

This worked mostly well. We could have one spec file with features turned on and off depending on the _with(out)_something defines. But it completely failed if a user later wanted to toggle a decision made in the base distro.

the avatar of openSUSE News

GRUB2-BLS in openSUSE Tumbleweed is now the default

openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.

This follows the trend started by MicroOS of adopting boot loaders that are compatible with the boot loader specification. MicroOS is using systemd-boot, which is a very small and fast boot loader from the systemd project.

One of the reasons for this change is to simplify the integration of new features. Among them is full disk encryption based on systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.

What is GRUB2-BLS

GRUB2-BLS is just GRUB2 with some patches on top, ported from the Fedora project, which add compatibility with the Boot Loader Specification's Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu.

Each file contains a reference to the kernel, the initrd, and the kernel command line that will be used to boot the system. It can be edited directly by the user or managed by tools like bootctl and sdbootutil.
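As an illustration (not the official parser used by any of these tools), a Type #1 entry is just key/value lines and can be read with a few lines of Go; the entry contents below are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBLSEntry splits a Type #1 boot entry into key/value pairs.
// Each non-empty line is "key value", e.g. "linux /vmlinuz-…".
func parseBLSEntry(entry string) map[string]string {
	fields := map[string]string{}
	for _, line := range strings.Split(entry, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, found := strings.Cut(line, " ")
		if found {
			fields[key] = strings.TrimSpace(value)
		}
	}
	return fields
}

func main() {
	entry := `title   openSUSE Tumbleweed
linux   /vmlinuz-6.17.7-1-default
initrd  /initrd-6.17.7-1-default
options root=UUID=abcd rw quiet`
	fields := parseBLSEntry(entry)
	fmt.Println(fields["linux"])   // /vmlinuz-6.17.7-1-default
	fmt.Println(fields["options"]) // root=UUID=abcd rw quiet
}
```

This is what makes the entries editable "directly by the user": they are plain text, with no generated configuration in between.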

In the next version of GRUB2 (2.14), those patches will be included as part of the project itself, and the upgrade process will be transparent for the final user.

It should be noted that the way openSUSE deploys GRUB2-BLS is different from classical GRUB2. GRUB2-BLS is deployed as a single EFI binary installed (copied) into /boot/efi/EFI/opensuse, with all the resources (like the modules, configuration file, fonts, themes and graphics) that were previously placed in /boot/grub2 embedded in it.

Installation

The good news is that with the latest version of YaST the process is automatic. The user just needs to follow the default steps and the system will be based on GRUB2-BLS at the end.

The installer will first propose a large ESP partition of about 1 GB. This is required because all the kernels and initrds will now be placed in the FAT32 ESP partition, under /boot/efi/opensuse-tumbleweed.

Of course the user can select a different boot loader during the installation like the classical GRUB2 or systemd-boot. This can be done in the “Installation Settings” screen presented at the end of the installation proposal. Just select the “Booting” header link and choose your boot loader from there.

Full disk encryption

When using a BLS boot loader, we can now install the system with full disk encryption (FDE) based on systemd. This can be done from the “Suggested Partitioning” screen. Just press “Guided Setup” and in the “Partitioning Scheme” select “Enable Disk Encryption”.

From there, you can set a LUKS2 password and, optionally, enroll a security device like a TPM2 or a FIDO2 key. For laptops, it is recommended to enroll the system with TPM2+PIN. The TPM2 will first assert that the system is in a healthy (known) state. That means that the elements used during the boot process (from the firmware to the kernel) are the expected ones and no one has tampered with them. After that, the TPM2 will ask for a PIN or password, which YaST sets to the one entered for the LUKS2 key slot.

Usage

With GRUB2-BLS, we no longer have GRUB2 tools like grub2-mkconfig or grub2-install. Most of them are not required anymore: boot entries are generated dynamically by the boot loader, so there is no need to generate GRUB2 configuration files, and installation is just a matter of copying the new EFI file into the correct location.

The upgrade process is also done by automatically calling sdbootutil update from the snapper plugins or the SUSE module tools, so if btrfs is used, all the management will be done transparently by this infrastructure, as was done in the traditional boot loader.

Updating the kernel command line can now be done by editing the entry in the boot loader, or by editing /etc/kernel/cmdline and calling sdbootutil update-all-entries to propagate the change to the boot entries of the current snapshot.

To manage the FDE configuration, you can learn more in the openSUSE wiki.


How to Create an MCP Server


This guide is for developers who want to build an MCP server. It describes how to implement an MCP server for listing and adding users.

What Is an MCP Server?

An MCP server is a wrapper that sits between a Large Language Model (LLM) and an application, wrapping calls from the LLM to the application in JSON. You might be tempted to wrap your application’s existing APIs via fastapi and fastmcp, but as described by mostly harmless, this is a bad idea.

The main reason for this is that an LLM performs text completion based on the “downloaded” internet and can focus on a topic for no more than approximately 100 pages of text. It’s hard to fill these pages with chat, and you may have never encountered this limit. This also means that you need a user story or tasks to fill this book, including all possible failures and dead ends. In our example, we will add a user "tux" to the system.

The first pages of this imaginary book are already filled by the system prompt and the description of the MCP tool and its parameters. This description is provided by the tool’s author, so you can be very descriptive when writing the tool descriptions. A few more lines of text won’t hurt.

Every tool call also carries JSON overhead, so you want to avoid too many tool calls. Try to minimize the number of tools and combine similar operations into a single tool. For example, if you had a tool interacting with systemd, you would have just one tool that combines enabling, disabling, starting, and restarting a service, rather than one tool for each operation.
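As a sketch of that advice (the tool shape, action set and use of systemctl here are illustrative, not an existing MCP server), a single combined tool could map an action parameter to the command it would execute:

```go
package main

import "fmt"

// serviceCommand maps a combined tool's "action" parameter to the
// systemctl argument vector the tool would run via exec.Command.
// One tool with a parameter replaces four near-identical tools.
func serviceCommand(action, unit string) ([]string, error) {
	switch action {
	case "start", "stop", "restart", "enable", "disable":
		return []string{"systemctl", action, unit}, nil
	default:
		return nil, fmt.Errorf("unsupported action %q", action)
	}
}

func main() {
	argv, err := serviceCommand("restart", "sshd.service")
	if err != nil {
		panic(err)
	}
	fmt.Println(argv) // [systemctl restart sshd.service]
}
```

The validation in the switch doubles as documentation: the error message tells the LLM exactly which actions exist, so a failed call can be retried sensibly.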

For the tool output, don’t hesitate to combine as much information as possible. A good tool’s output shouldn’t just return the group ID (GID) but also the group name.

The caveat here is that you can easily oversaturate the LLM with too much information, such as returning the output of find /. This would completely fill the imaginary book of the LLM conversation. In such cases, trim the information and provide parameters for tools, like filtering the output.
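One possible way to implement that trimming, sketched here with an invented truncateResults helper (not part of any MCP SDK), is to cap the number of entries server-side and tell the model that the list was cut:

```go
package main

import "fmt"

// truncateResults returns at most limit entries plus a note the LLM
// can see, so the model knows the list was truncated, not complete.
func truncateResults(entries []string, limit int) ([]string, string) {
	if limit <= 0 || len(entries) <= limit {
		return entries, ""
	}
	note := fmt.Sprintf("showing %d of %d entries; refine the filter to see more",
		limit, len(entries))
	return entries[:limit], note
}

func main() {
	entries := []string{"a", "b", "c", "d", "e"}
	kept, note := truncateResults(entries, 3)
	fmt.Println(len(kept), note) // 3 showing 3 of 5 entries; refine the filter to see more
}
```

The note is important: without it, the LLM would assume the truncated list is the whole answer.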

This boils down to the following points:

  • Have a user story for the tools.
  • Provide extensive descriptions for tools and their parameters.
  • Condense tools into sensible operations and don’t hesitate to add many parameters.
  • A tool call can have several API calls.
  • Avoid overload: LLMs can't ignore output, so you are responsible for trimming information.

And a bonus point, which I learned along the way:

  • Avoid a verbose parameter; an LLM will always use it.

Always remember: “Context is King”

Build a Sample MCP Server

User Story

First, we have to come up with a user story. We have to decide what the user should be capable of doing with the tool.

Our user story is quite simple: “I want to add a user to the system.”

First Step

We will use Go for this project and start with this simple boilerplate code, which adds the tool “Foo”:

package main

import (
	"context"
	"flag"
	"log/slog"
	"net/http"

	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// Input struct for the Foo tool.
type FooInput struct {
	Message string `json:"message,omitempty" jsonschema:"a message for the Foo tool"`
}

// Output struct for the Foo tool.
type FooOutput struct {
	Response string `json:"response" jsonschema:"the response from the Foo tool"`
}

// Foo function implements the Foo tool.
func Foo(ctx context.Context, req *mcp.CallToolRequest, input FooInput) (
	*mcp.CallToolResult, FooOutput, error,
) {
	slog.Info("Foo tool called", "message", input.Message)
	return nil, FooOutput{Response: "Foo received your message: " + input.Message}, nil
}

func main() {
	listenAddr := flag.String("http", "", "address for http transport, defaults to stdio")
	flag.Parse()

	server := mcp.NewServer(&mcp.Implementation{Name: "useradd", Version: "v0.0.1"}, nil)
	mcp.AddTool(server, &mcp.Tool{
		Name:        "Foo",
		Description: "A simple Foo tool",
	}, Foo)

	if *listenAddr == "" {
		// Run the server on the stdio transport.
		if err := server.Run(context.Background(), &mcp.StdioTransport{}); err != nil {
			slog.Error("Server failed", "error", err)
		}
	} else {
		// Create a streamable HTTP handler.
		handler := mcp.NewStreamableHTTPHandler(func(*http.Request) *mcp.Server {
			return server
		}, nil)

		// Run the server on the HTTP transport.
		slog.Info("Server listening", "address", *listenAddr)
		if err := http.ListenAndServe(*listenAddr, handler); err != nil {
			slog.Error("Server failed", "error", err)
		}
	}
}

To run the server, we first have to initialize the Go dependencies with:

  go mod init github.com/mslacken/mcp-useradd
  go mod tidy

Now the server can be run with the command:

  go run main.go -http localhost:8666

And we can run a JavaScript-based explorer in an additional terminal via:

  npx @modelcontextprotocol/inspector http://localhost:8666 --transport http

This gives us the following screen after calling the ‘Foo’ tool with the input ‘Baar’:

Tool Foo was called input "Baar" response is {"response": "Foo received your message: Baar"}

Let’s break down our Go code. After the imports, we immediately have two structs that manage the input and output for our tool. Go has a built-in serializer for data structures. The tag json:"message,omitempty" tells the serialization library to use “message” as the variable’s name. More important is the second option, “omitempty”, which marks this as an optional input parameter; if empty, the variable won’t appear in the output. The “jsonschema” tag describes what this parameter does and what input is expected. Although the parameter’s type is deduced from the struct, the description is crucial.

The function for the tool returns the message by constructing the output struct and returning it. The function itself is added to the MCP server instance and also needs a name and a description. The tool’s description is highly important, as it is the only way for the LLM to know what the tool is doing.

Concretize the Tool

The full code of this section can be found in the git commit simple user list.

As we don’t want to change the system at this early phase, and the whole project will need a tool to get the actual users of the system anyway, let’s add the tool get_users. To keep it simple, we will just use the output of getent passwd to fulfill this task. A function that does this could look like:

// User struct represents a single user account.
type User struct {
	Username string `json:"username"`
	Password string `json:"password"`
	UID      int    `json:"uid"`
	GID      int    `json:"gid"`
	Comment  string `json:"comment"`
	Home     string `json:"home"`
	Shell    string `json:"shell"`
}

// Input and output structs for the ListUsers tool; the input is empty
// because the tool takes no parameters yet.
type ListUsersInput struct{}

type ListUsersOutput struct {
	Users []User `json:"users" jsonschema:"the list of user accounts"`
}

// ListUsers function implements the ListUsers tool.
func ListUsers(ctx context.Context, req *mcp.CallToolRequest, _ ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")
	cmd := exec.Command("getent", "passwd")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username: parts[0],
			Password: parts[1],
			UID:      uid,
			GID:      gid,
			Comment:  parts[4],
			Home:     parts[5],
			Shell:    parts[6],
		})
	}
	return nil, ListUsersOutput{Users: users}, nil
}

When you check this function, you see that the output is just a list (called a slice in Go) of the users and their properties.

Although this looks correct, this method is missing some important things:

  • the user type: is it a system account or the account of a human user?
  • which groups is the user part of?

Asking these kinds of questions and then providing that information is the most important part of writing an MCP tool. This information isn’t known to the LLM but might define the input parameters when adding a user.

In contrast to a real implementation, our tool will just treat all users with a gid < 1000 as system users. We also add a call to getent group to get all groups the user is part of. When the tool is now called, it is also sensible to output the group information, as this enables the LLM to fulfill tasks like “Add the user chris to the system and make him part of the witcher group”. The full code of this section can be found in the git commit better user list. With that information the tool call now looks like

// ListUsers function implements the ListUsers tool.
func ListUsers(ctx context.Context, req *mcp.CallToolRequest, _ ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")
	cmd := exec.Command("getent", "passwd")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username:     parts[0],
			Password:     parts[1],
			UID:          uid,
			GID:          gid,
			Comment:      parts[4],
			Home:         parts[5],
			Shell:        parts[6],
			IsSystemUser: gid < 1000,
			Groups:       []string{},
		})
	}

	cmd = exec.Command("getent", "group")
	var groupOut bytes.Buffer
	cmd.Stdout = &groupOut
	err = cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var groups []Group
	groupScanner := bufio.NewScanner(&groupOut)
	for groupScanner.Scan() {
		line := groupScanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 4 {
			continue
		}
		gid, _ := strconv.Atoi(parts[2])
		members := strings.Split(parts[3], ",")
		groups = append(groups, Group{
			Name:     parts[0],
			Password: parts[1],
			GID:      gid,
			Members:  members,
		})
		groupName := parts[0]
		for _, member := range members {
			for i, user := range users {
				if user.Username == member {
					users[i].Groups = append(users[i].Groups, groupName)
				}
			}
		}
	}

	return nil, ListUsersOutput{Users: users, Groups: groups}, nil
}
	

The result now looks like this: the tool ListUsers is called and its output is a structured list of users and groups.

This now gives the LLM much more sensible information about the system. For example, if I asked “Add chris to the system and add him to the witcher group” and there were no group ‘witcher’ on the system but one called ‘hexer’, it could figure out that this is a German system and that ‘hexer’ is perhaps the right group.

As the icing on the cake, we now refactor the list method so that a username can be passed as an optional parameter. This way the output of the tool can be limited, which is important for many subsequent tool calls. For real production-grade software, the parameter would then also accept regular expressions or even fuzzy matching. The full code of this section can be found in the git commit filter with a username. This transforms the tool call to

// ListUsers function implements the ListUsers tool.
func ListUsers(ctx context.Context, req *mcp.CallToolRequest, input ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")

	users, err := getUsers(input.Username)
	if err != nil {
		return nil, ListUsersOutput{}, err
	}

	if input.Username != "" {
		return nil, ListUsersOutput{Users: users}, nil
	}

	groups, err := getGroups()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}

	for _, group := range groups {
		for _, member := range group.Members {
			for i, user := range users {
				if user.Username == member {
					users[i].Groups = append(users[i].Groups, group.Name)
				}
			}
		}
	}

	return nil, ListUsersOutput{Users: users, Groups: groups}, nil
}

func getUsers(username string) ([]User, error) {
	args := []string{"passwd"}
	if username != "" {
		args = append(args, username)
	}
	cmd := exec.Command("getent", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username:     parts[0],
			Password:     parts[1],
			UID:          uid,
			GID:          gid,
			Comment:      parts[4],
			Home:         parts[5],
			Shell:        parts[6],
			IsSystemUser: gid < 1000,
			Groups:       []string{},
		})
	}
	if username != "" && len(users) > 0 {
		groups, err := getUserGroups(username)
		if err == nil {
			users[0].Groups = groups
		}
	}
	return users, nil
}
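The helper getUserGroups referenced above is not shown in this excerpt. A minimal sketch of what it could look like is shown below; this version shells out to `id -Gn`, which is an assumption on my part, as the actual commit may parse `getent group` instead:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// parseGroupNames splits the whitespace-separated group list that
// `id -Gn <user>` prints into a slice of group names.
func parseGroupNames(out string) []string {
	return strings.Fields(out)
}

// getUserGroups returns the names of all groups the given user is a
// member of. This is only a sketch under the assumption that `id`
// is available on the target system.
func getUserGroups(username string) ([]string, error) {
	out, err := exec.Command("id", "-Gn", username).Output()
	if err != nil {
		return nil, err
	}
	return parseGroupNames(string(out)), nil
}

func main() {
	fmt.Println(parseGroupNames("users wheel docker")) // [users wheel docker]
}
```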

Still, there are many things we could add here as parameters, like filtering for non-system users only, checking for ‘pam.d’ options which interact with users…
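A filter for non-system users, one of the possible parameters just mentioned, could reuse the gid < 1000 heuristic from above. The sketch below is a hypothetical helper, not part of the article’s repository; a made-up input flag such as “regular_only” could toggle it inside ListUsers:

```go
package main

import "fmt"

// User mirrors the struct defined earlier in the article, trimmed
// to the fields needed for this example.
type User struct {
	Username     string
	IsSystemUser bool
}

// filterRegularUsers keeps only the accounts of human users,
// i.e. those not classified as system users by the gid heuristic.
func filterRegularUsers(users []User) []User {
	var regular []User
	for _, u := range users {
		if !u.IsSystemUser {
			regular = append(regular, u)
		}
	}
	return regular
}

func main() {
	users := []User{{"messagebus", true}, {"chris", false}}
	fmt.Println(filterRegularUsers(users)) // [{chris false}]
}
```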

Adding a user

Just for completeness, we now add a tool for adding a user, which does this via the useradd command. The full code of this section can be found in the git commit added user add method. A tool can look like

// Input struct for the AddUser tool.
type AddUserInput struct {
	Username     string   `json:"username" jsonschema:"the username of the new account"`
	BaseDir      string   `json:"base_dir,omitempty" jsonschema:"the base directory for the home directory of the new account"`
	Comment      string   `json:"comment,omitempty" jsonschema:"the GECOS field of the new account"`
	HomeDir      string   `json:"home_dir,omitempty" jsonschema:"the home directory of the new account"`
	ExpireDate   string   `json:"expire_date,omitempty" jsonschema:"the expiration date of the new account"`
	Inactive     int      `json:"inactive,omitempty" jsonschema:"the password inactivity period of the new account"`
	Gid          string   `json:"gid,omitempty" jsonschema:"the name or ID of the primary group of the new account"`
	Groups       []string `json:"groups,omitempty" jsonschema:"the list of supplementary groups of the new account"`
	SkelDir      string   `json:"skel_dir,omitempty" jsonschema:"the alternative skeleton directory"`
	CreateHome   bool     `json:"create_home,omitempty" jsonschema:"create the user's home directory"`
	NoCreateHome bool     `json:"no_create_home,omitempty" jsonschema:"do not create the user's home directory"`
	NoUserGroup  bool     `json:"no_user_group,omitempty" jsonschema:"do not create a group with the same name as the user"`
	NonUnique    bool     `json:"non_unique,omitempty" jsonschema:"allow to create users with duplicate (non-unique) UID"`
	Password     string   `json:"password,omitempty" jsonschema:"the encrypted password of the new account"`
	System       bool     `json:"system,omitempty" jsonschema:"create a system account"`
	Shell        string   `json:"shell,omitempty" jsonschema:"the login shell of the new account"`
	Uid          int      `json:"uid,omitempty" jsonschema:"the user ID of the new account"`
	UserGroup    bool     `json:"user_group,omitempty" jsonschema:"create a group with the same name as the user"`
	SelinuxUser  string   `json:"selinux_user,omitempty" jsonschema:"the specific SEUSER for the SELinux user mapping"`
	SelinuxRange string   `json:"selinux_range,omitempty" jsonschema:"the specific MLS range for the SELinux user mapping"`
}
func AddUser(ctx context.Context, req *mcp.CallToolRequest, input AddUserInput) (
	*mcp.CallToolResult, AddUserOutput, error,
) {
	slog.Info("AddUser tool called")
	args := []string{}
	if input.BaseDir != "" {
		args = append(args, "-b", input.BaseDir)
	}
	/*
	Many similar command-line parameter mappings omitted here
	*/
	if input.SelinuxUser != "" {
		args = append(args, "-Z", input.SelinuxUser)
	}
	args = append(args, input.Username)

	cmd := exec.Command("useradd", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	if err != nil {
		return nil, AddUserOutput{Success: false, Message: out.String()}, err
	}
	return nil, AddUserOutput{Success: true, Message: out.String()}, nil
}

For a real MCP server, the tool could also be aware of the standard home location and provide the btrfs-related options only there; the same holds true for SELinux, adding the corresponding options only when SELinux is enabled. But I think it is now clear how the cookie crumbles for MCP tools.


lightdm-kde-greeter: Privilege Escalation from lightdm Service User to root in KAuth Helper Service (CVE-2025-62876)


1) Introduction

lightdm-kde-greeter is a KDE-themed greeter application for the lightdm display manager. At the beginning of September one of our community packagers asked us to review a D-Bus service contained in lightdm-kde-greeter for addition to openSUSE Tumbleweed.

In the course of the review we found a potential privilege escalation from the lightdm service user to root which is facilitated by this D-Bus service, among some other shortcomings in its implementation.

The next section provides a general overview of the D-Bus service. Section 3 discusses the security problems in the service’s implementation. Section 4 takes a look at the bugfix upstream arrived at.

This report is based on lightdm-kde-greeter release 6.0.3.

2) Overview of the D-Bus Helper

lightdm-kde-greeter includes a D-Bus service which enables regular users to configure custom themes to be used by the greeter application. The D-Bus service is implemented as a KDE KAuth helper service, running with full root privileges.

The helper implements a single API method, protected by the Polkit action org.kde.kcontrol.kcmlightdm.save, which requires auth_admin_keep by default, i.e. users need to provide root credentials to perform this action. The method takes a map of key/value pairs which allows full control over the contents of lightdm.conf and lightdm-kde-greeter.conf.
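To illustrate what auth_admin_keep means here: the effective Polkit policy for such an action would look roughly like the following sketch (the description and message strings are invented, only the action id and the default come from the report):

```xml
<action id="org.kde.kcontrol.kcmlightdm.save">
  <!-- description/message texts below are placeholders -->
  <description>Save LightDM configuration</description>
  <message>Authentication is required to save LightDM settings</message>
  <defaults>
    <!-- auth_admin_keep: ask for admin credentials and cache the
         authorization for a short period -->
    <allow_any>auth_admin_keep</allow_any>
    <allow_inactive>auth_admin_keep</allow_inactive>
    <allow_active>auth_admin_keep</allow_active>
  </defaults>
</action>
```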

From a security point of view such a generic interface is sub-optimal, since the scope of the operation is not restricted to changing theme settings, but also allows changing all the rest of lightdm’s configuration, providing less control over who may do what in the system. From an application’s point of view this approach is understandable, however, as it makes it easy to support future features.

Another Polkit action, org.kde.kcontrol.kcmlightdm.savethemedetails, is declared in kcm_lightdm.actions but unused, maybe a remnant of former versions of the project.

3) Problems in the D-Bus Helper

The problems in the D-Bus service start in helper.cc line 87, where we can find this comment:

// keys starting with "copy_" are handled in a special way, in fact,
// this is an instruction to copy the file to the greeter's home
// directory, because the greeter will not be able to read the image
// from the user's home folder

To start with it is rather bad API design to abuse the key/value map, which is supposed to contain configuration file entries, for carrying “secret” copy instructions. Even worse, in the resulting copy operation three different security contexts are mixed:

  • the helper, which runs with full root privileges.
  • the unprivileged D-Bus client, which specifies a path to be opened by the helper.
  • the lightdm service user; the helper will copy the user-specified file into a directory controlled by it.

The helper performs this copy operation with full root privileges without taking precautions, reading input data from one unprivileged context and writing it into another unprivileged context. This is done naively using the Qt framework’s QFile::copy() and similar APIs, leading to a range of potential local attack vectors:

  • Denial-of-Service (e.g. passing a named FIFO pipe as source file path, causing the D-Bus helper to block indefinitely).
  • information leak (e.g. passing a path to private data as source file like /etc/shadow, which will then become public in /var/lib/lightdm).
  • creation of directories in unexpected locations (the helper attempts to create /var/lib/lightdm/.../<theme>, thus the lightdm user can place symlinks there which will be followed).
  • overwrite of unexpected files (similar as before, symlinks can be placed as destination file name, which will be followed and overwritten with client data).

If the Polkit authentication requirement for this action were ever set to yes, then this would be close to a local root exploit. Even in its existing form it allows the lightdm service user to escalate privileges to root.

Interestingly these problems are quite similar to issues in sddm-kcm6, which we covered in a previous blog post.

4) Upstream Bugfix

We suggested the following changes to upstream to address the problems:

  • the copy operation should be implemented using D-Bus file descriptor passing; this way, opening client-controlled paths as root is avoided in the first place.
  • for creating the file in the target directory of lightdm, a privilege drop to the lightdm service user should be performed to avoid any symlink attack surface.

We are happy to share that the upstream maintainer of lightdm-kde-greeter followed our suggestions closely and coordinated the changes with us before the publication of the bugfix. With these changes, this KAuth helper is now kind of a model implementation which can serve as a positive example for other KDE components. Upstream also performed some general cleanup, like the removal of the unused savethemedetails Polkit action from the repository.

Upstream released version 6.0.4 of lightdm-kde-greeter which contains the fixes.

5) CVE Assignment

In agreement with upstream, we assigned CVE-2025-62876 to track the lightdm service user to root privilege escalation aspect described in this report. The severity of the issue is low, since it only affects defense-in-depth (if the lightdm service user were compromised) and the problematic logic can only be reached and exploited if triggered interactively by a privileged user.

6) Coordinated Disclosure

We reported these issues to KDE security on 2025-09-04 offering coordinated disclosure, but we initially had difficulties setting up the process with them. Upstream did not clearly express the desire to practice coordinated disclosure, no (preliminary) publication date could be set and no confirmation of the issues was received.

Things took a turn for the better when a lightdm-kde-greeter developer contacted us directly on 2025-10-16 and the publication date and fixes were discussed. The ensuing review process for the bugfixes was very helpful in our opinion, leading to a major improvement of the KAuth helper implementation in lightdm-kde-greeter.

7) Timeline

2025-09-04 We received the review request for the lightdm-kde-greeter D-Bus service.
2025-09-10 We privately reported the findings to KDE security.
2025-09-17 We received an initial reply from KDE security stating that they would get back to us.
2025-09-29 We asked for at least a confirmation of the report and a rough disclosure date, but upstream was not able to provide this.
2025-10-01 KDE security informed us that an upstream developer planned to release fixes by mid-November.
2025-10-16 An upstream developer contacted us to discuss the publication date, since the bugfixes were ready.
2025-10-20 We asked the developer to share the bugfixes for review.
2025-10-21 The developer shared a patch set with us.
2025-10-24 We agreed on 2025-10-31 for coordinated disclosure date.
2025-10-28 After a couple of email exchanges discussing the patches, upstream arrived at an improved patch set. We suggested assigning a CVE for the lightdm-to-root attack surface.
2025-10-29 We assigned CVE-2025-62876.
2025-11-03 We asked when the bugfix release would be published, with the disclosure date already passed.
2025-11-03 Upstream agreed to publish on the same day.
2025-11-03 Upstream released version 6.0.4 containing the bugfixes. We published our Bugzilla bug on the topic.
2025-11-13 Publication of this report.

8) References


openSUSE Tumbleweed

openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.

This follows the trend started by MicroOS of adopting boot loaders that are compatible with the Boot Loader Specification. MicroOS is using systemd-boot, a very small and fast boot loader from the systemd project.

One of the reasons for this change is to simplify the integration of new features, like full disk encryption based on the systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.

What is GRUB2-BLS

GRUB2-BLS is just GRUB2 with some patches on top, ported from the Fedora project, that add compatibility with the Boot Loader Specification’s Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu.

Each file contains a reference to the kernel, the initrd and the kernel command line that will be used to boot the system, and can be edited directly by the user or managed by tools like bootctl and sdbootutil.
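For illustration, a Type #1 boot entry is a plain-text file along these lines; the file name, version string and options below are made up, only the key names come from the Boot Loader Specification:

```ini
# /boot/efi/loader/entries/opensuse-tumbleweed-6.11.0-1-default.conf (hypothetical)
title      openSUSE Tumbleweed
version    6.11.0-1-default
linux      /opensuse-tumbleweed/6.11.0-1-default/linux
initrd     /opensuse-tumbleweed/6.11.0-1-default/initrd
options    root=UUID=... rw quiet
```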

In the next version of GRUB2 (2.14) those patches will be included as part of the project itself, and the upgrade process will be transparent for the final user.

It should be noted that the way openSUSE deploys GRUB2-BLS is different from classical GRUB2. GRUB2-BLS is deployed as a single EFI binary installed (copied) into /boot/efi/EFI/opensuse that has embedded all the resources (like the modules, configuration file, fonts, themes and graphics) that were previously placed in /boot/grub2.

Installation

The good news is that with the latest version of YaST the process is automatic. The user just needs to follow the default steps and the system will be based on GRUB2-BLS at the end.

The installer will first propose a large ESP partition of about 1 GB. This is required because all the kernels and initrds are now placed in the FAT32 ESP partition, in /boot/efi/opensuse-tumbleweed.

Of course, the user can select a different boot loader during the installation, like the classical GRUB2 or systemd-boot. This can be done in the “Installation Settings” screen presented at the end of the installation proposal. Just select the “Booting” header link and choose your boot loader from there.

Usage

With GRUB2-BLS we will no longer have the grub2 tools, like grub2-mkconfig or grub2-install; most of them are not required anymore. The boot entries are generated dynamically by the boot loader, so there is no need to generate GRUB2 configuration files anymore, and the installation just copies the new EFI file into the correct place.

The upgrade process is also handled automatically by the snapper plugins or the SUSE module tools calling sdbootutil update, so if btrfs is used all the management will be done transparently by this infrastructure, as it was with the traditional boot loader.

Updating the kernel command line can now be done by editing the entry in the boot loader, or by editing /etc/kernel/cmdline and calling sdbootutil update-all-entries to propagate the change into the boot entries of the current snapshot.