GRUB2-BLS in openSUSE Tumbleweed is now the default
openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.
This follows the trend started by MicroOS of adopting boot loaders that are compatible with the boot loader specification. MicroOS is using systemd-boot, which is a very small and fast boot loader from the systemd project.
One of the reasons for this change is to simplify the integration of new features. Among them is full disk encryption based on systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.
What is GRUB2-BLS
GRUB2-BLS is just GRUB2 with some patches on top, ported from the Fedora project, which add compatibility with the boot loader specification for Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu.
Each file contains a reference to the kernel, the initrd, and the kernel command line that will be used to boot the system. It can be edited directly by the user or managed by tools like bootctl and sdbootutil.
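For illustration, such a Type #1 entry could look like the following (the file name, kernel version, and paths are made-up examples, not values from a real installation):

```ini
# /boot/efi/loader/entries/opensuse-tumbleweed-6.11.8-1-default.conf
# (hypothetical example of a Type #1 boot entry)
title      openSUSE Tumbleweed
version    6.11.8-1-default
options    root=UUID=... rw quiet
linux      /opensuse-tumbleweed/6.11.8-1-default/linux
initrd     /opensuse-tumbleweed/6.11.8-1-default/initrd
```

The boot loader reads all such files at boot time and builds the menu from them, so adding or removing an entry is just adding or removing one small text file.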
In the next version of GRUB2 (2.14), those patches will be included as part of the project itself, and the upgrade process will be transparent for the final user.
It should be noted that the way openSUSE deploys GRUB2-BLS is different from the classical GRUB2. GRUB2-BLS is deployed as a single EFI binary installed (copied) into /boot/efi/EFI/opensuse that embeds all the resources (like the modules, configuration file, fonts, themes and graphics) which were previously placed in /boot/grub2.
Installation
The good news is that with the latest version of YaST the process is automatic. The user just needs to follow the default steps and the system will be based on GRUB2-BLS at the end.
The installer will first propose a large ESP partition of about 1GB. This is required because all the kernels and initrds will now be placed in the FAT32 ESP partition, under /boot/efi/opensuse-tumbleweed.
Of course the user can select a different boot loader during the installation like the classical GRUB2 or systemd-boot. This can be done in the “Installation Settings” screen presented at the end of the installation proposal. Just select the “Booting” header link and choose your boot loader from there.
Full disk encryption
When using a BLS boot loader, we can now install the system with full disk encryption (FDE) based on systemd. This can be done from the “Suggested Partitioning” screen. Just press “Guided Setup” and in the “Partitioning Scheme” select “Enable Disk Encryption”.
From there, you can set a LUKS2 password and, optionally, enroll a security device like a TPM2 or a FIDO2 key. For laptops, it is recommended to enroll the system with a TPM2+PIN. The TPM2 will first assert that the system is in a healthy (known) state. That means that the elements used during the boot process (from the firmware to the kernel) are the expected ones, and no one tampered with them. After that, the TPM2 will ask for a PIN or password, which YaST will set as the one entered for the LUKS2 key slot.
Usage
With GRUB2-BLS, we will no longer have grub2 tools like grub2-mkconfig or grub2-install. Most of them are not required anymore. Boot entries are generated dynamically by the boot loader, so there is no longer any need to generate GRUB2 configuration files, and the installation is just copying the new EFI file into the correct location.
The upgrade process is also done by automatically calling sdbootutil update from the snapper plugins or the SUSE module tools, so if btrfs is used, all the management will be done transparently by this infrastructure, as was done in the traditional boot loader.
Updating the kernel command line can now be done by editing the boot loader, or the /etc/kernel/cmdline and calling sdbootutil update-all-entries to propagate the change into the boot entries of the current snapshot.
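As a sketch, adding a kernel parameter could look like this (the parameter and file contents are illustrative; the exact command line is system-specific):

```shell
# Append a parameter to the shared kernel command line
echo "$(cat /etc/kernel/cmdline) mitigations=auto" | sudo tee /etc/kernel/cmdline
# Propagate the change into the boot entries of the current snapshot
sudo sdbootutil update-all-entries
```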
To manage the FDE configuration, you can learn more in the openSUSE wiki.
How to Create an MCP Server
This guide is for developers who want to build an MCP server. It describes how to implement an MCP server for listing and adding users.
What Is an MCP Server?
An MCP server is a wrapper that sits between a Large Language Model (LLM) and an application, wrapping calls from the LLM to the application in JSON. You might be tempted to wrap your application’s existing APIs via fastapi and fastmcp, but as described by mostly harmless, this is a bad idea.
The main reason for this is that an LLM performs text completion based on the “downloaded” internet and can focus on a topic for no more than approximately 100 pages of text. It’s hard to fill these pages with chat, and you may have never encountered this limit. This also means that you need a user story or tasks to fill this book, including all possible failures and dead ends. In our example, we will add a user "tux" to the system.
The first pages of this imaginary book are already filled by the system prompt and the description of the MCP tool and its parameters. This description is provided by the tool’s author, so you can be very descriptive when writing the tool descriptions. A few more lines of text won’t hurt.
Every tool call has a JSON overlay, so you also want to avoid too many tool calls. Try to minimize the number of tools and combine similar operations into a single tool. For example, if you had a tool interacting with systemd, you would have just one tool that combines enabling, disabling, starting, and restarting the service, rather than one tool for each operation.
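To illustrate the idea, here is a minimal sketch of such a combined tool input (the names ServiceInput and systemctlArgs are made up for this example and are not part of the useradd server built below):

```go
package main

import (
	"fmt"
)

// ServiceInput is a hypothetical input struct for one combined systemd tool,
// instead of four separate enable/disable/start/restart tools.
type ServiceInput struct {
	Action string `json:"action" jsonschema:"one of: enable, disable, start, restart"`
	Unit   string `json:"unit" jsonschema:"the name of the systemd unit"`
}

// systemctlArgs maps the combined tool input to a systemctl argument list.
func systemctlArgs(in ServiceInput) ([]string, error) {
	switch in.Action {
	case "enable", "disable", "start", "restart":
		return []string{in.Action, in.Unit}, nil
	default:
		return nil, fmt.Errorf("unsupported action %q", in.Action)
	}
}

func main() {
	args, _ := systemctlArgs(ServiceInput{Action: "restart", Unit: "sshd.service"})
	fmt.Println(args) // [restart sshd.service]
}
```

One tool with an `action` parameter keeps the tool list short, which matters because every tool description occupies context.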
For the tool output, don’t hesitate to combine as much information as possible. A good tool’s output shouldn’t just return the group ID (GID) but also the group name.
The caveat here is that you can easily oversaturate the LLM with too much information, such as returning the output of find /. This would completely fill the imaginary book of the LLM conversation. In such cases, trim the information and provide parameters for tools, like filtering the output.
This boils down to the following points:
- Have a user story for the tools.
- Provide extensive descriptions for tools and their parameters.
- Condense tools into sensible operations and don’t hesitate to add many parameters.
- A tool call can have several API calls.
- Avoid overload: LLMs can’t ignore output, so you are responsible for trimming information.
And also the following bonus point, which I learned along the way:
- Avoid a verbose parameter; an LLM will always use it.
Always remember: “Context is King”
Build a Sample MCP Server
User Story
First, we have to come up with a user story. We have to decide what the user should be capable of doing with the tool.
Our user story is quite simple: “I want to add a user to the system.”
First Step
We will use Go for this project and start with this simple boilerplate code, which adds the tool “Foo”:
package main

import (
	"context"
	"flag"
	"log/slog"
	"net/http"

	"github.com/modelcontextprotocol/go-sdk/mcp"
)

// Input struct for the Foo tool.
type FooInput struct {
	Message string `json:"message,omitempty" jsonschema:"a message for the Foo tool"`
}

// Output struct for the Foo tool.
type FooOutput struct {
	Response string `json:"response" jsonschema:"the response from the Foo tool"`
}

// Foo function implements the Foo tool.
func Foo(ctx context.Context, req *mcp.CallToolRequest, input FooInput) (
	*mcp.CallToolResult, FooOutput, error,
) {
	slog.Info("Foo tool called", "message", input.Message)
	return nil, FooOutput{Response: "Foo received your message: " + input.Message}, nil
}

func main() {
	listenAddr := flag.String("http", "", "address for http transport, defaults to stdio")
	flag.Parse()
	server := mcp.NewServer(&mcp.Implementation{Name: "useradd", Version: "v0.0.1"}, nil)
	mcp.AddTool(server, &mcp.Tool{
		Name:        "Foo",
		Description: "A simple Foo tool",
	}, Foo)
	if *listenAddr == "" {
		// Run the server on the stdio transport.
		if err := server.Run(context.Background(), &mcp.StdioTransport{}); err != nil {
			slog.Error("Server failed", "error", err)
		}
	} else {
		// Create a streamable HTTP handler.
		handler := mcp.NewStreamableHTTPHandler(func(*http.Request) *mcp.Server {
			return server
		}, nil)
		// Run the server on the HTTP transport.
		slog.Info("Server listening", "address", *listenAddr)
		if err := http.ListenAndServe(*listenAddr, handler); err != nil {
			slog.Error("Server failed", "error", err)
		}
	}
}
To run the server, we first have to initialize the Go dependencies with:
go mod init github.com/mslacken/mcp-useradd
go mod tidy
Now the server can be run with the command:
go run main.go -http localhost:8666
And we can run a JavaScript-based explorer in an additional terminal via:
npx @modelcontextprotocol/inspector http://localhost:8666 --transport http
This gives us the following screen after the ‘Foo’ tool was called with the input ‘Baar’.

Let’s break down our Go code. After the imports, we immediately have two structs that manage the input and output for our tool. Go has a built-in serializer for data structures. The keyword json:"message,omitempty" tells the serialization library to use “message” as the variable’s name. More important is the second option, “omitempty,” which marks this as an optional input parameter; if empty, the variable won’t be in the output. The “jsonschema” parameter describes what this parameter does and what input is expected. Although the parameter’s type is deduced from the struct, the description is crucial. The method for the tool returns the message by constructing the output struct and returning it.
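To see the effect of these tags in isolation, here is a minimal sketch using only the standard library (the struct name Msg is made up for illustration; it follows the same pattern as FooInput):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Msg mirrors the FooInput pattern: "omitempty" drops the field when empty.
type Msg struct {
	Message string `json:"message,omitempty"`
	ID      int    `json:"id"`
}

// encode serializes a Msg to its JSON representation.
func encode(m Msg) string {
	b, _ := json.Marshal(m)
	return string(b)
}

func main() {
	fmt.Println(encode(Msg{Message: "hi", ID: 1})) // {"message":"hi","id":1}
	fmt.Println(encode(Msg{ID: 2}))                // {"id":2} — message is omitted
}
```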
The method itself is added to the MCP server instance and also needs to have a name and a description. The description of the tool is also highly important and is the only way for the LLM to know what the tool is doing.
Concretize the tool
The full code of this section can be found in the git commit simple user list
Since we don’t want to change the system at this early phase, and the project will need a tool to get the actual users of the system anyway, let’s add the tool get_users.
To keep it simple, we will just use the output of getent passwd for this task.
A function which can do this could look like this:
// User struct represents a single user account.
type User struct {
	Username string `json:"username"`
	Password string `json:"password"`
	UID      int    `json:"uid"`
	GID      int    `json:"gid"`
	Comment  string `json:"comment"`
	Home     string `json:"home"`
	Shell    string `json:"shell"`
}

func ListUsers(ctx context.Context, req *mcp.CallToolRequest, _ ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")
	cmd := exec.Command("getent", "passwd")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username: parts[0],
			Password: parts[1],
			UID:      uid,
			GID:      gid,
			Comment:  parts[4],
			Home:     parts[5],
			Shell:    parts[6],
		})
	}
	return nil, ListUsersOutput{Users: users}, nil
}
When you check this method, you see that the output is just a list (called a slice in Go) of the users and their properties.
Although this looks correct, this method is missing some important things:
- the user type: is it a system account or an account of a human user?
- which groups the user is part of
Asking these kinds of questions and then providing that information is the most important part of writing an MCP tool. This information isn’t known to the LLM but might define the input parameters when adding a user.
In contrast to a real implementation, our tool will just treat all users with a gid < 1000 as system users.
We also add a call to getent group to get all the groups the user is part of.
When this tool is now called, it is also sensible to output the group information, as this enables the LLM to fulfill tasks like “Add the user chris to the system and make him part of the witcher group”.
The full code of this section can be found in the git commit better user list
With that information, the tool now looks like this:
// ListUsers function implements the ListUsers tool.
func ListUsers(ctx context.Context, req *mcp.CallToolRequest, _ ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")
	cmd := exec.Command("getent", "passwd")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username:     parts[0],
			Password:     parts[1],
			UID:          uid,
			GID:          gid,
			Comment:      parts[4],
			Home:         parts[5],
			Shell:        parts[6],
			IsSystemUser: gid < 1000,
			Groups:       []string{},
		})
	}
	cmd = exec.Command("getent", "group")
	var groupOut bytes.Buffer
	cmd.Stdout = &groupOut
	err = cmd.Run()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	var groups []Group
	groupScanner := bufio.NewScanner(&groupOut)
	for groupScanner.Scan() {
		line := groupScanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 4 {
			continue
		}
		gid, _ := strconv.Atoi(parts[2])
		members := strings.Split(parts[3], ",")
		groups = append(groups, Group{
			Name:     parts[0],
			Password: parts[1],
			GID:      gid,
			Members:  members,
		})
		groupName := parts[0]
		for _, member := range members {
			for i, user := range users {
				if user.Username == member {
					users[i].Groups = append(users[i].Groups, groupName)
				}
			}
		}
	}
	return nil, ListUsersOutput{Users: users, Groups: groups}, nil
}
The result now looks like this:

This now gives the LLM much more sensible information about the system. For example, if I asked “Add chris to the system and add him to the witcher group” and the system had no group ‘witcher’ but one called ‘hexer’, the LLM could find out that it’s a German system and that ‘hexer’ is perhaps the right group.
As the icing on the cake, we now refactor the list method so that a username can be passed as an optional parameter. This way, the output of the tool can be limited, which is important when there are many subsequent tool calls. For real production-grade software, the parameter would also accept regular expressions or even fuzzy matching. The full code of this section can be found in the git commit filter with a username. This transforms the tool call to:
// ListUsers function implements the ListUsers tool.
func ListUsers(ctx context.Context, req *mcp.CallToolRequest, input ListUsersInput) (
	*mcp.CallToolResult, ListUsersOutput, error,
) {
	slog.Info("ListUsers tool called")
	users, err := getUsers(input.Username)
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	if input.Username != "" {
		return nil, ListUsersOutput{Users: users}, nil
	}
	groups, err := getGroups()
	if err != nil {
		return nil, ListUsersOutput{}, err
	}
	for _, group := range groups {
		for _, member := range group.Members {
			for i, user := range users {
				if user.Username == member {
					users[i].Groups = append(users[i].Groups, group.Name)
				}
			}
		}
	}
	return nil, ListUsersOutput{Users: users, Groups: groups}, nil
}

func getUsers(username string) ([]User, error) {
	args := []string{"passwd"}
	if username != "" {
		args = append(args, username)
	}
	cmd := exec.Command("getent", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err != nil {
		return nil, err
	}
	var users []User
	scanner := bufio.NewScanner(&out)
	for scanner.Scan() {
		line := scanner.Text()
		parts := strings.Split(line, ":")
		if len(parts) != 7 {
			continue
		}
		uid, _ := strconv.Atoi(parts[2])
		gid, _ := strconv.Atoi(parts[3])
		users = append(users, User{
			Username:     parts[0],
			Password:     parts[1],
			UID:          uid,
			GID:          gid,
			Comment:      parts[4],
			Home:         parts[5],
			Shell:        parts[6],
			IsSystemUser: gid < 1000,
			Groups:       []string{},
		})
	}
	if username != "" && len(users) > 0 {
		groups, err := getUserGroups(username)
		if err == nil {
			users[0].Groups = groups
		}
	}
	return users, nil
}
Still, there are many things we could add here as parameters, like filtering for non-system users only, or checking for pam.d options which interact with users…
Adding a user
Just for completeness, we now add a tool for adding a user, which does this via the SUSE-specific useradd call.
The full code of this section can be found in the git commit added user add method
A tool could look like this:
// Input struct for the AddUser tool.
type AddUserInput struct {
	Username     string   `json:"username" jsonschema:"the username of the new account"`
	BaseDir      string   `json:"base_dir,omitempty" jsonschema:"the base directory for the home directory of the new account"`
	Comment      string   `json:"comment,omitempty" jsonschema:"the GECOS field of the new account"`
	HomeDir      string   `json:"home_dir,omitempty" jsonschema:"the home directory of the new account"`
	ExpireDate   string   `json:"expire_date,omitempty" jsonschema:"the expiration date of the new account"`
	Inactive     int      `json:"inactive,omitempty" jsonschema:"the password inactivity period of the new account"`
	Gid          string   `json:"gid,omitempty" jsonschema:"the name or ID of the primary group of the new account"`
	Groups       []string `json:"groups,omitempty" jsonschema:"the list of supplementary groups of the new account"`
	SkelDir      string   `json:"skel_dir,omitempty" jsonschema:"the alternative skeleton directory"`
	CreateHome   bool     `json:"create_home,omitempty" jsonschema:"create the user's home directory"`
	NoCreateHome bool     `json:"no_create_home,omitempty" jsonschema:"do not create the user's home directory"`
	NoUserGroup  bool     `json:"no_user_group,omitempty" jsonschema:"do not create a group with the same name as the user"`
	NonUnique    bool     `json:"non_unique,omitempty" jsonschema:"allow to create users with duplicate (non-unique) UID"`
	Password     string   `json:"password,omitempty" jsonschema:"the encrypted password of the new account"`
	System       bool     `json:"system,omitempty" jsonschema:"create a system account"`
	Shell        string   `json:"shell,omitempty" jsonschema:"the login shell of the new account"`
	Uid          int      `json:"uid,omitempty" jsonschema:"the user ID of the new account"`
	UserGroup    bool     `json:"user_group,omitempty" jsonschema:"create a group with the same name as the user"`
	SelinuxUser  string   `json:"selinux_user,omitempty" jsonschema:"the specific SEUSER for the SELinux user mapping"`
	SelinuxRange string   `json:"selinux_range,omitempty" jsonschema:"the specific MLS range for the SELinux user mapping"`
}

func AddUser(ctx context.Context, req *mcp.CallToolRequest, input AddUserInput) (
	*mcp.CallToolResult, AddUserOutput, error,
) {
	slog.Info("AddUser tool called")
	args := []string{}
	if input.BaseDir != "" {
		args = append(args, "-b", input.BaseDir)
	}
	/*
		Cut a lot of command line parameter settings
	*/
	if input.SelinuxUser != "" {
		args = append(args, "-Z", input.SelinuxUser)
	}
	args = append(args, input.Username)
	cmd := exec.Command("useradd", args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	if err != nil {
		return nil, AddUserOutput{Success: false, Message: out.String()}, err
	}
	return nil, AddUserOutput{Success: true, Message: out.String()}, nil
}
For a real MCP server, the tool could also be aware of the standard home location and provide the btrfs-related options only there; the same holds true for SELinux: only add those options if SELinux is enabled. But I think it’s now clear how the cookie crumbles for MCP tools.
lightdm-kde-greeter: Privilege Escalation from lightdm Service User to root in KAuth Helper Service (CVE-2025-62876)
Table of Contents
- 1) Introduction
- 2) Overview of the D-Bus Helper
- 3) Problems in the D-Bus Helper
- 4) Upstream Bugfix
- 5) CVE Assignment
- 6) Coordinated Disclosure
- 7) Timeline
- 8) References
1) Introduction
lightdm-kde-greeter is a KDE-themed greeter application for the lightdm display manager. At the beginning of September one of our community packagers asked us to review a D-Bus service contained in lightdm-kde-greeter for addition to openSUSE Tumbleweed.
In the course of the review we found a potential privilege escalation from the
lightdm service user to root which is facilitated by this D-Bus service,
among some other shortcomings in its implementation.
The next section provides a general overview of the D-Bus service. Section 3 discusses the security problems in the service’s implementation. Section 4 takes a look at the bugfix upstream arrived at.
This report is based on lightdm-kde-greeter release 6.0.3.
2) Overview of the D-Bus Helper
lightdm-kde-greeter includes a D-Bus service which enables regular users to configure custom themes to be used by the greeter application. The D-Bus service is implemented as a KDE KAuth helper service, running with full root privileges.
The helper implements a single API method, protected by the Polkit action org.kde.kcontrol.kcmlightdm.save, which requires auth_admin_keep by default, i.e. users need to provide root credentials to perform this action. The method takes a map of key/value pairs which allows full control over the contents of lightdm.conf and lightdm-kde-greeter.conf.
From a security point of view such a generic interface is sub-optimal, since the scope of the operation is not restricted to changing theme settings, but also allows changing all the rest of lightdm’s configuration, providing less control over who may do what in the system. From an application’s point of view this approach is understandable, however, as it makes it easy to support any future features.
Another Polkit action org.kde.kcontrol.kcmlightdm.savethemedetails is
declared in kcm_lightdm.actions, which is unused, maybe
a remnant of former versions of the project.
3) Problems in the D-Bus Helper
The problems in the D-Bus service start in helper.cc line 87, where we can find this comment:
// keys starting with "copy_" are handled in a special way, in fact,
// this is an instruction to copy the file to the greeter's home
// directory, because the greeter will not be able to read the image
// from the user's home folder
To start with it is rather bad API design to abuse the key/value map, which is supposed to contain configuration file entries, for carrying “secret” copy instructions. Even worse, in the resulting copy operation three different security contexts are mixed:
- the helper, which runs with full root privileges.
- the unprivileged D-Bus client, which specifies a path to be opened by the helper.
- the lightdm service user; the helper will copy the user-specified file into a directory controlled by it.
The helper performs this copy operation with full root privileges without
taking precautions, reading input data from one unprivileged context and
writing it into another unprivileged context. This is done naively using
the Qt framework’s QFile::copy() and similar APIs, leading to a range of
potential local attack vectors:
- Denial-of-service (e.g. passing a named FIFO pipe as source file path, causing the D-Bus helper to block indefinitely).
- Information leak (e.g. passing a path to private data as source file, like /etc/shadow, which will then become public in /var/lib/lightdm).
- Creation of directories in unexpected locations (the helper attempts to create /var/lib/lightdm/.../<theme>, thus the lightdm user can place symlinks there which will be followed).
- Overwrite of unexpected files (similar to before: symlinks can be placed as the destination file name, which will be followed and overwritten with client data).
If this action’s Polkit authentication requirement were ever set to yes, this would be close to a local root exploit. Even in its existing form it allows the lightdm service user to escalate privileges to root.
Interestingly these problems are quite similar to issues in sddm-kcm6, which
we covered in a previous blog post.
4) Upstream Bugfix
We suggested the following changes to upstream to address the problems:
- the copy operation should be implemented using D-Bus file descriptor passing; this way opening client-controlled paths as root is already avoided.
- for creating the file in the target directory of lightdm, a privilege drop to the lightdm service user should be performed to avoid any symlink attack surface.
We are happy to share that the upstream maintainer of lightdm-kde-greeter
followed our suggestions closely and coordinated the changes with us before
the publication of the bugfix. With these changes, this KAuth helper is now
kind of a model implementation which can serve as a positive example for other
KDE components. Upstream also performed some general cleanup, like the removal
of the unused savethemedetails Polkit action from the repository.
Upstream released version 6.0.4 of lightdm-kde-greeter which contains the fixes.
5) CVE Assignment
In agreement with upstream, we assigned CVE-2025-62876 to track the lightdm
service user to root privilege escalation aspect described in this report.
The severity of the issue is low, since it only affects defense-in-depth (if
the lightdm service user were compromised) and the problematic logic can
only be reached and exploited if triggered interactively by a privileged user.
6) Coordinated Disclosure
We reported these issues to KDE security on 2025-09-04 offering coordinated disclosure, but we initially had difficulties setting up the process with them. Upstream did not clearly express the desire to practice coordinated disclosure, no (preliminary) publication date could be set and no confirmation of the issues was received.
Things took a turn for the better when a lightdm-kde-greeter developer contacted us directly on 2025-10-16 and the publication date and fixes were discussed. The ensuing review process for the bugfixes was very helpful in our opinion, leading to a major improvement of the KAuth helper implementation in lightdm-kde-greeter.
7) Timeline
| 2025-09-04 | We received the review request for the lightdm-kde-greeter D-Bus service. |
| 2025-09-10 | We privately reported the findings to KDE security. |
| 2025-09-17 | We received an initial reply from KDE security stating that they would get back to us. |
| 2025-09-29 | We asked for at least a confirmation of the report and a rough disclosure date, but upstream was not able to provide this. |
| 2025-10-01 | KDE security informed us that an upstream developer planned to release fixes by mid-November. |
| 2025-10-16 | An upstream developer contacted us to discuss the publication date, since the bugfixes were ready. |
| 2025-10-20 | We asked the developer to share the bugfixes for review. |
| 2025-10-21 | The developer shared a patch set with us. |
| 2025-10-24 | We agreed on 2025-10-31 as the coordinated disclosure date. |
| 2025-10-28 | After a couple of email exchanges discussing the patches, upstream arrived at an improved patch set. We suggested assigning a CVE for the lightdm to root attack surface. |
| 2025-10-29 | We assigned CVE-2025-62876. |
| 2025-11-03 | We asked when the bugfix release would be published, with the disclosure date already passed. |
| 2025-11-03 | Upstream agreed to publish on the same day. |
| 2025-11-03 | Upstream released version 6.0.4 containing the bugfixes. We published our Bugzilla bug on the topic. |
| 2025-11-13 | Publication of this report. |
8) References
Hack Week Project Seeks to Launch Kudos
A new Hack Week 25 project aims to display appreciation and recognition for contributors across the openSUSE Project.
Called Kudos, the application is designed to give members of the project an easy way to acknowledge contributions beyond code submissions alone.
Lubos Kocman joked during the naming discussion that “We also had an option to call it ReKognize and have a Mortal Kombat announcer behind the menu items.”
Kudos began shortly after the release of Leap 16.0 with the goal of creating a simple and friendly way for community members to thank one another for their efforts and contributions.
As a Release Manager, Kocman and other release managers have long recognized core Leap contributors after each release.
“We used to send people Leap DVD boxes,” he said, but these DVDs are no longer produced.
With DVDs no longer an option, an internal SUSE recognition platform allowed employees to acknowledge one another, and Kocman began sending emails mimicking SUSE’s recognition messages.
“But at some point I was fed up and thought to myself, we can do better,” said Kocman, who created the Kudos Hack Week project.
He notes that openSUSE relies on extensive, often invisible efforts. These include contributions with translations, documentation updates, wiki curation, infrastructure maintenance, moderation, booth staffing, testing, talks, and countless other efforts that rarely show up in traditional forms of recognition.
Kudos aims to bring those contributions into the light.
This recognition will be visible in the areas where members of the project communicate. Sharing could take place on chat.opensuse.org, on social media through bots, on Slack, and through other options that will be explored during Hack Week and beyond.
Kudos resources include the application’s source code, a dedicated badge repository, and an issue tracker. Existing badge templates can be used, but the maintainer asks participants not to create new badge requests during Hack Week unless they also plan to complete them during the event.
Developers and contributors interested in participating can join or follow the project’s progress through the GitHub repositories.

Hack Week, which began in 2007, has become a cornerstone of the project’s open-source culture. Hack Week has produced tools that are now integral to the openSUSE ecosystem, such as openQA, Weblate and Aeon Desktop. Hack Week has also seeded projects that later grew into widely used products; the origins of ownCloud and its fork Nextcloud derive from a Hack Week project started more than a decade ago.
For more information, visit hackweek.opensuse.org.
Tumbleweed – Review of the week 2025/45
Dear Tumbleweed users and hackers,
This week has been really slow for Tumbleweed snapshots—at least as far as the published ones are concerned. Since my last review, only two snapshots were shipped, and five more were passed on to openQA for testing and discarded. This was not really a surprise to us; we somewhat expected it to happen. The major change (in the next snapshot to be published) is the switch to grub2-bls on UEFI-based systems. The technical change itself had been ready for a while, but it took some time to get a good feel for the openQA results without masking other errors behind the ‘bootloader does not look as expected by QA’ step. And to confirm: it was good for us to hold back some snapshots, as there were indeed some fun bugs hiding behind closed doors.
Enough of the history, let’s look at what the two snapshots (1030 and 1031) brought you this week:
- Mozilla Firefox 144.0.2
- ImageMagick 7.1.2.8
- pam-mount 2.22
- Mesa 25.2.6
- Linux kernel 6.17.6
The next snapshot to be published will hopefully bring a few more changes. Staging areas and QA are currently busy testing integration of:
- GRUB2-BLS as the default bootloader selected by the installer on UEFI-based systems. This will only impact new installs; an automatic migration from grub2-efi to grub2-bls is not planned
- Linux kernel 6.17.7
- KDE Plasma 6.5.2
- Qt5 5.15.18
- openSSH 10.2p1
- LXQt 2.3.0
- Mesa will bring Vulkan support to WSL
- transactional-update 5.5.0: enables soft-reboot if possible
- openSSL 3.6.0: regression detected, which causes nodejs22 testsuite to fail
- kernel hardening: prevent normal users from seeing dmesg
Planet News Roundup
This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.
The featured highlights below, listed on the community’s blog feed aggregator, are from November 1 to 6.
Blog posts this week highlight a post-mortem from the Open Build Service, a Tenerife LanParty, a security issue announcement by the SUSE Security Team, quantum computing in open source and much more.
Here is a summary and links for each post:
Hack Week Project Aims to Rebuild Classic Games
Hack Week 25 is Dec. 1 - 5, and it has a project that is reverse-engineering 1990s PC games like Master of Orion II: Battle at Antares, Chaos Overlords and others to build clean-room engines capable of running the original games. The goal is to have playable prototypes built during the week.
Third update of KDE Gear 25.08
The KDE Blog highlights the third update of Gear 25.08 and introduces several bug-fixes across apps, libraries and widgets to improve stability and translations for Konqi users. Some fixes were made for Kdenlive, KWalletManager and more.
scx: Unauthenticated scx_loader D-Bus Service can lead to major Denial-of-Service
Posts like this don’t happen too often, but this SUSE Security Team Blog post should get people’s attention. It highlights the scx project’s scx_loader: the D-Bus service could run with root privileges and no authentication, which could allow any unprivileged user to manipulate system schedulers and potentially cause a denial-of-service.
Tenerife LanParty (Podcast Linux #31)
The KDE Blog post revisits episode #31 of the Podcast Linux series, which is a special feature recorded at the Tenerife LanParty (TLP 2017).
Severe Service Degradation: OBS Unreliable/Unavailable
The Open Build Service team published a service post-mortem discussing recent operational issues and outlining improvements to monitoring and response workflows. Among the lessons learned, the team will set sensible limits for build service requests (and possibly other areas), as serving all results at once for a request with 6,000+ actions is unsustainable.
Plasma 6.5 Second Update
The KDE Blog reports that the Plasma 6.5 team released its second update for the desktop environment on Nov. 4. It focuses on improving stability, better translations and bug fixes. A couple of the fixes deliver a smoother automatic shift between light and dark themes and enhanced global WiFi password storage.
Compilation of the Free Software Foundation newsletter – November 2025
Victorhck puts out a new post collecting major FSF updates including their 40-year anniversary projects, the upcoming hackathon (Nov. 21-23), and recent advocacy on software freedom and patents. Victorhck also goes into some details about the announcement of the FSF’s Librephone project.
SUSE delivers Raspberry Pi 5 support
The openSUSE Project announces support for the Raspberry Pi 5, which includes enabling boot to graphical desktop with working Ethernet, WiFi, and USB. The blog acknowledges the extensive contributions by the SUSE Hardware Enablement team related to U-Boot, PCIe and kernel driver support.
LliureX as a tool for educational innovation
The post introduces the UjiLliurex 2025 programme, which leverages the LliureX Linux distribution as a tool for educational innovation in higher education. Its aim is promoting ICT skills and collaborative coordination across the academic community.
Leap Fuels Hands-On Learning, Exploration
The openSUSE Project highlights how users can turn their setups into home labs. It highlights the use case of learning. From tracking aircraft with SDR to deploying Kubernetes clusters, Leap can be used to learn through experimentation on a stable, production-grade base.
Quantum computing and open source: the future is already in open source on Compilando Podcast
The hosts dive into the world of quantum computing during this episode of the Compilando Podcast. What are qubits? Why do ultra-cold conditions matter? The hosts explore how open-source tools like Qiskit and Cirq are helping researchers collaborate globally.
Framework Laptop 13 Protective Bumper
Tech enthusiast Nathan Wolf details a custom 3D-printed TPU 98A bumper for the Framework Laptop 13 that improves durability during travel. After 13 major design revisions, he settles on a 10 mm frame that balances protection and usability.
Controlling frame intensity and image focus
The KDE Blog dives into “This Week in KDE Plasma” and highlights two major upcoming features: adjustable boldness for window frames and outlines in Breeze themes, and a global sharpness slider for display content when using a compatible Linux kernel.
View more blogs or learn to publish your own on planet.opensuse.org.
Hack Week Project Aims to Rebuild Classic Games
Community developers plan to bring new life to classic 1990s video games by reviving and reverse-engineering them in a project during Hack Week 25.
The project calls on participants to select an older game, analyze its data formats and underlying rules, and write a clean-room engine capable of running the original game content.
Many games from the era are simple enough that contributors can produce a playable prototype within the week.
The classic-games project, which has grown over multiple years, lists titles such as Master of Orion II, Chaos Overlords, and Signus: The Artifact Wars.
Master of Orion II: Battle at Antares is regarded as one of the defining 4X strategy games of the 1990s, and work on it has become one of the project’s flagship efforts. Developers have decoded savegame formats, resource files and interface screens across several Hack Weeks.
The team working on Chaos Overlords identified resource formats, mapped much of the logic and began developing a Qt-based interface resembling the original’s mouse-driven design. The game’s AI remains one of the toughest puzzles; contributors are calling it critical to the game’s identity.
Earlier efforts include Signus: The Artifact Wars, a Czech turn-based strategy title open-sourced in 2003. Developers continue to refine support for original file formats and work toward packaging improvements for openSUSE.
Participants frequently suggest new candidates. Companies and individuals are encouraged to join the project and hack on it for fun.
Another Hack Week 25 effort underway aims to bring missing YaST features into Cockpit and System Roles; this follows YaST’s deprecation in openSUSE Leap 16.0.
Hack Week, which began in 2007, has become a cornerstone of the project’s open-source culture. Hack Week has produced tools that are now integral to the openSUSE ecosystem, such as openQA, Weblate and Aeon Desktop. Hack Week has also seeded projects that later grew into widely used products; the origins of ownCloud and its fork Nextcloud derive from a Hack Week project started more than a decade ago.
For more information, visit hackweek.opensuse.org.
scx: Unauthenticated scx_loader D-Bus Service can lead to major Denial-of-Service
Table of Contents
- 1) Introduction
- 2) Overview of the Unauthenticated scx_loader D-Bus Service
- 3) Passing Arbitrary Parameters to Schedulers
- 4) On the Verge of a Local Root Exploit
- 5) Affected Linux Distributions
- 6) Suggested Fixes
- 7) Missing Upstream Bugfix
- 8) CVE Assignment
- 9) Timeline
- 10) Links
1) Introduction
The scx project offers a range of dynamically loadable
custom schedulers implemented in Rust and C, which make use of the Linux
kernel’s sched_ext feature. An optional D-Bus service called scx_loader
provides an interface accessible to all users on the system, allowing them to
load and configure the schedulers provided by scx. This D-Bus service is
present in scx up to version v1.0.17. In response to this report,
scx_loader has been moved into a dedicated repository.
A SUSE colleague packaged scx for addition to openSUSE Tumbleweed, and the D-Bus service it contained required a review by our team. The review showed that the D-Bus service runs with full root privileges and is missing an authentication layer, thus allowing any user to nearly arbitrarily change the scheduling properties of the system, leading to Denial-of-Service and other attack vectors.
Upstream declined coordinated disclosure for this report and asked us to handle it in the open right away. In the discussion that followed, upstream rejected parts of our report and presented no clear path forward to fix the issues, which is why there is no bugfix available at the moment.
Section 2 provides an overview of the scx_loader D-Bus
service and its lack of authentication. Section 3 takes
a look into problematic command line parameters which can be influenced by
unprivileged clients. Section 4 looks into attempts to
achieve a local root exploit using the scx_loader API. Section
5 lists affected Linux distributions. Section
6 discusses possible approaches to fix the issues
found in this report. Section 7 takes a look at the upstream
efforts to fix the issues.
This report is based on version 1.0.16 of scx.
2) Overview of the Unauthenticated scx_loader D-Bus Service
The scx_loader D-Bus service is implemented in Rust
and offers a completely unauthenticated D-Bus interface on the system bus. The
upstream repository contains configuration files and documentation
advertising this service as suitable to be automatically
started via D-Bus requests. Thus arbitrary users on the system (including
low-privilege service users or even nobody) are allowed to make unrestricted
use of the service.
The service’s interface offers functions to start, stop or switch between a number of scx schedulers. The start and switch methods also allow specifying an arbitrary list of parameters that will be passed directly to the binary implementing the scheduler.
Every scheduler is implemented in a dedicated binary and found e.g. in
/usr/bin/scx_bpfland for the bpfland scheduler. Not all schedulers that are
part of scx are accessible via this interface. The list of schedulers
supported by scx_loader in the reviewed version is:
scx_bpfland scx_cosmos scx_flash scx_lavd
scx_p2dq scx_tickless scx_rustland scx_rusty
We believe the ability to more or less arbitrarily tune the scheduling behaviour of the system already poses a local Denial-of-Service (DoS) attack vector that might even make it possible to lock up the complete system. We did not look into a concrete set of parameters that might achieve that, but it seems likely, given the range of schedulers and their parameters made available via the D-Bus interface.
3) Passing Arbitrary Parameters to Schedulers
The ability to pass arbitrary command line parameters to any of the supported scheduler binaries increases the attack surface of the D-Bus interface considerably. This makes a couple of concrete attacks possible, especially when the scheduler in question accepts file paths as input. Apart from parameters that influence scheduler behaviour, all schedulers offer the generic “Libbpf Options”, of which the following four options stick out in this context:
--pin-root-path <PIN_ROOT_PATH> Maps that set the 'pinning' attribute in their definition will have
their pin_path attribute set to a file in this directory, and be
auto-pinned to that path on load; defaults to "/sys/fs/bpf"
--kconfig <KCONFIG> Additional kernel config content that augments and overrides system
Kconfig for CONFIG_xxx externs
--btf-custom-path <BTF_CUSTOM_PATH> Path to the custom BTF to be used for BPF CO-RE relocations. This custom
BTF completely replaces the use of vmlinux BTF for the purpose of CO-RE
relocations. NOTE: any other BPF feature (e.g., fentry/fexit programs,
struct_ops, etc) will need actual kernel BTF at /sys/kernel/btf/vmlinux
--bpf-token-path <BPF_TOKEN_PATH> Path to BPF FS mount point to derive BPF token from. Created BPF token
will be used for all bpf() syscall operations that accept BPF token
(e.g., map creation, BTF and program loads, etc) automatically within
instantiated BPF object. If bpf_token_path is not specified, libbpf will
consult LIBBPF_BPF_TOKEN_PATH environment variable. If set, it will be
taken as a value of bpf_token_path option and will force libbpf to
either create BPF token from provided custom BPF FS path, or will
disable implicit BPF token creation, if envvar value is an empty string.
bpf_token_path overrides LIBBPF_BPF_TOKEN_PATH, if both are set at the
same time. Setting bpf_token_path option to empty string disables
libbpf's automatic attempt to create BPF token from default BPF FS mount
point (/sys/fs/bpf), in case this default behavior is undesirable
libbpf is a userspace support library for BPF programs found in the Linux source tree. The following sub-sections take a look at each of the attacker-controlled paths passed to this library in detail.
The --pin-root-path Option
The --pin-root-path option potentially causes libbpf to create the
parent directory of this path in
bpf_object__pin_programs(). We are not entirely
sure under which conditions the logic is triggered, however, and if these
conditions are controlled by an unprivileged caller in the context of the
scx_loader D-Bus API.
The --kconfig Option
The file found in the --kconfig path is completely read into memory in
libbpf_clap_opts.rs line 91. This makes
a number of attack vectors possible:
- pointing to a device file like /dev/zero leads to an out-of-memory situation in the selected scheduler binary.
- pointing to a private file like /etc/shadow causes the scheduler binary to read in the private data. We did not find a way for this data to leak out into the context of an unprivileged D-Bus caller, however. This technique still allows performing file existence tests in locations that are normally not accessible to unprivileged users.
- pointing to a FIFO named pipe will block the scheduler binary indefinitely, breaking the D-Bus service. Also, by feeding data to such a pipe, nearly all memory can be used up, keeping the system in a low-memory situation and possibly leading to the kernel’s OOM killer targeting unrelated processes.
- by pointing to a regular file controlled by the caller, crafted KConfig information can be passed into libbpf. The impact of this appears to be minimal, however.
The following command line is an example reproducer which will cause the
scx_bpfland process to consume all system memory until it is killed by
the kernel:
user$ gdbus call -y -d org.scx.Loader -o /org/scx/Loader \
-m org.scx.Loader.SwitchSchedulerWithArgs scx_bpfland \
'["--kconfig", "/dev/zero"]'
The --btf-custom-path Option
The --btf-custom-path option offers similar attack vectors as the
--kconfig option discussed above. Additionally, crafted binary symbol
information can be fed to the scheduler via this path, which will be processed
either by btf_parse_raw() or
btf_parse_elf() found in libbpf. This can lead to
integrity violation of the scheduler / the kernel, the impact of which we
cannot fully judge as we lack expertise in this low level area and did not
want to invest more time than necessary for the analysis.
The --bpf-token-path Option
The --bpf-token-path, if it refers to a directory, will be opened by
libbpf and the file descriptor will be passed to the
bpf system call like this:
bpf(BPF_TOKEN_CREATE, {token_create={flags=0, bpffs_fd=20}}, 8) = -1 EINVAL (Invalid argument)
This does not seem to achieve anything, however, because the kernel code rejects the caller if it lives in the initial user namespace (which the privileged D-Bus service always does). The path could maybe serve as an information leak to test file existence and type, if the behaviour of the scheduler “start” operation shows observable differences depending on the input.
4) On the Verge of a Local Root Exploit
With this much control over the command line parameters of many different scheduler binaries, which offer a wide range of options, we initially assumed that a full local root exploit would not be difficult to achieve. We tried hard, but have not found a working attack vector so far. It could be, however, that we overlooked something in the area of the low-level BPF handling regarding the attacker-controlled input files discussed in the previous section.
scx_loader is saved from a trivial local root exploit merely by the fact
that only a subset of the available scx scheduler binaries is accessible via
its interface. The scx_chaos scheduler, which is not among the schedulers
offered by the D-Bus service, supports a positional command line
argument referring to a “Program to run under the chaos
scheduler”. Were this scheduler accessible via D-Bus, unprivileged
users could cause user-controlled programs to be executed with full root
privileges, leading to arbitrary code execution.
From discussions with upstream it sounds like the exclusion of schedulers like
scx_chaos from the D-Bus interface does not stem from security concerns, but
rather from functional restrictions, because some schedulers are not supported
in all contexts, or are not stable yet.
5) Affected Linux Distributions
From our investigations and communication with upstream it seems that only
Arch Linux is affected by the problem in its default installation of scx.
Gentoo Linux comes with an ebuild for scx, but for some reason there is no
integration of scx_loader into the init system and also the D-Bus autostart
configuration file is missing. Thus it will only be affected if an admin
manually invokes the service.
Otherwise we did not find a packaging of scx_loader on current Fedora Linux,
Ubuntu LTS or Debian Linux. Due to the outcome of this review we never allowed
the D-Bus service into openSUSE, which is therefore also not affected.
6) Suggested Fixes
Restrict Access to a Group on D-Bus Level
A quick fix for the worst aspects of the issue would be to restrict the
D-Bus configuration in org.scx.Loader.conf to allow
access to the interface only for members of a dedicated group like scx. This
at least prevents random unprivileged users from abusing the API.
We offer a patch for download which does exactly this.
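As an illustration, such a group restriction could look roughly like the following D-Bus system bus policy. This is a sketch in the style of our suggested fix, not the exact contents of the patch we offer:

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <!-- Only root may claim the service name -->
  <policy user="root">
    <allow own="org.scx.Loader"/>
  </policy>
  <!-- Deny access to the interface by default -->
  <policy context="default">
    <deny send_destination="org.scx.Loader"/>
  </policy>
  <!-- Allow only members of the dedicated 'scx' group -->
  <policy group="scx">
    <allow send_destination="org.scx.Loader"/>
  </policy>
</busconfig>
```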
Use Polkit for Authentication
By integrating Polkit authentication, the use of this interface can be
restricted to physically present interactive users. Even in this case we
suggest restricting full API access to users that can authenticate as admin,
via Polkit’s auth_admin_keep setting. Read-only operations can still be
allowed without authentication.
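A Polkit action definition along these lines might look as follows. The action id and messages are hypothetical, since scx_loader currently ships no Polkit integration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1/policyconfig.dtd">
<policyconfig>
  <!-- Hypothetical action id for the privileged scheduler operations -->
  <action id="org.scx.loader.manage">
    <description>Start, stop or switch scx schedulers</description>
    <message>Authentication is required to manage scx schedulers</message>
    <defaults>
      <!-- Require admin authentication even for local, active sessions -->
      <allow_any>auth_admin_keep</allow_any>
      <allow_inactive>auth_admin_keep</allow_inactive>
      <allow_active>auth_admin_keep</allow_active>
    </defaults>
  </action>
</policyconfig>
```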
Making the API more Robust
The individual methods offered by the scx.Loader D-Bus service should not allow actions beyond the intended scope to be performed, even for a caller who has authenticated in some form as outlined in the previous sections.
To this end, dangerous parameters for schedulers should either be rejected (e.g. by enforcing a whitelist of allowed parameters) or verified (e.g. by determining whether a provided path is only under control of root and similar checks).
Regarding input files, the client ideally should not pass path names at all, but send file descriptors instead, to avoid unexpected surprises and the burden of verifying input paths in the privileged D-Bus service.
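A parameter whitelist could be sketched as follows. This is illustrative Rust, not code from scx_loader; the tuning flags in the whitelist are examples only, while the rejected path-taking options are the libbpf options discussed in section 3:

```rust
/// Decide whether a scheduler argument supplied by a D-Bus client may be
/// passed through to the scheduler binary. Anything not on the whitelist
/// (including path-taking libbpf options like --kconfig) is rejected.
fn arg_is_allowed(arg: &str) -> bool {
    // Hypothetical whitelist of harmless tuning flags.
    const WHITELIST: &[&str] = &["--slice-us", "--verbose"];
    // Compare only the flag name, so "--slice-us=20000" also matches.
    let flag = arg.split('=').next().unwrap_or(arg);
    WHITELIST.contains(&flag)
}

/// Validate a full argument list before spawning the scheduler binary.
fn filter_args(args: &[&str]) -> Result<Vec<String>, String> {
    for a in args {
        if !arg_is_allowed(a) {
            return Err(format!("rejected scheduler argument: {a}"));
        }
    }
    Ok(args.iter().map(|s| s.to_string()).collect())
}
```

With such a filter in place, a request like the `--kconfig /dev/zero` reproducer from section 3 would be refused before the scheduler binary is ever started.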
Use systemd Sandboxing
The systemd service for scx_loader could make use of various hardening
options that systemd offers (like ProtectSystem=full), as long as these
do not interfere with the functionality of the service. This would
prevent more dangerous attack vectors from succeeding if the first line of
defense fails.
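For example, a drop-in such as /etc/systemd/system/scx_loader.service.d/hardening.conf could look like the sketch below. Each option would need testing against the service's actual needs, since loading BPF schedulers requires far-reaching privileges:

```ini
[Service]
# Mount /usr, /boot and /etc read-only for the service
ProtectSystem=full
# Hide user home directories entirely
ProtectHome=yes
# Give the service a private /tmp
PrivateTmp=yes
# Forbid loading or unloading kernel modules
ProtectKernelModules=yes
# Restrict socket families; the D-Bus connection only needs AF_UNIX
RestrictAddressFamilies=AF_UNIX
```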
7) Missing Upstream Bugfix
Upstream showed a reluctant reaction to the report
we provided in a GitHub issue, rejecting parts of our
assessment. An attempt to introduce a Polkit authentication
layer based on AI-generated code was abandoned quickly,
and upstream instead split off the scx_loader service into a new
repository to separate it from the scx core project. Our
original GitHub issue has been closed, and we cloned it
in the new repository to keep track of the issue.
Downstream integrators of scx_loader can limit access to the D-Bus
service to members of an scx group by applying the patch we offer in the
Suggested Fixes section. This way, access to the
problematic API becomes opt-in and is restricted to more privileged users
who actually intend to use this service.
8) CVE Assignment
We suggested that upstream assign at least one cumulative CVE to generally cover the unauthenticated D-Bus interface aspect leading to local DoS, potential information disclosure and integrity violation. We offered to assign a CVE from the SUSE pool to simplify the process.
Upstream did not respond to this and did not clearly confirm the issues we raised, but rather rejected certain elements of our report. For this reason there is currently no CVE assignment available.
9) Timeline
| 2025-09-30 | We contacted one of the upstream developers by email and asked for a security contact of the project, since none was documented in the repository. |
| 2025-09-30 | The upstream developer agreed to handle the report together with a fellow developer of the project. |
| 2025-09-30 | We shared a detailed report with the two developers. |
| 2025-10-02 | After analysis of the report, the upstream developer suggested to create a public GitHub issue, which we did. |
| 2025-10-03 | An upstream developer responded to the issue rejecting various parts of our report. |
| 2025-10-28 | With some delay we provided a short reply, pointing out that the rejections seem to miss the central point of the change of privilege which is taking place. |
| 2025-10-28 | Upstream created a pull request based on AI-generated code to add an authentication layer to the D-Bus service. |
| 2025-10-28 | Upstream closed the unmerged pull request shortly after. The discussion sounded like upstream no longer intends to support the scx_loader D-Bus service in this repository. |
| 2025-11-03 | We provided a more detailed reply to the issue discussion. |
| 2025-11-04 | Upstream closed the GitHub issue and split off a dedicated repository for scx_loader. |
| 2025-11-06 | We cloned the original issue in the new repository. |
| 2025-11-06 | Publication of this report. |
10) Links