Two More Steps Toward a Better Requests Page: Grouped Actions and Accurate Labels
Installation of NVIDIA drivers on openSUSE and SLE
This blog post covers only the installation of G06 drivers, i.e. drivers for GPUs >= Maxwell:
- Maxwell, Pascal, Volta (Proprietary Kernel driver)
- Turing and higher (Open Kernel driver)
Run inxi -aG on openSUSE Leap/Tumbleweed to check whether you have such a GPU; on SLE use hwinfo --gfxcard. For older NVIDIA GPUs, use the G04/G05 legacy drivers (both are Proprietary drivers).
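As an additional quick check, a heuristic sketch (not an official tool): NVIDIA chip code names in lspci output start with GM/GP/GV for Maxwell/Pascal/Volta and TU/GA/AD/GB for Turing and later, so a simple pattern match on the VGA line can suggest the right driver. The sample line below is hard-coded for illustration; in practice you would feed in the output of lspci.

```shell
# Hard-coded sample lspci line; in practice use: lspci | grep -Ei 'vga|3d'
gpu_line="01:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660] (rev a1)"

# GM/GP/GV = Maxwell/Pascal/Volta (Proprietary driver);
# TU/GA/AD/GB = Turing/Ampere/Ada/Blackwell (Open driver)
case "$gpu_line" in
  *" TU"*|*" GA"*|*" AD"*|*" GB"*) recommendation="Open Kernel driver" ;;
  *" GM"*|*" GP"*|*" GV"*)         recommendation="Proprietary Kernel driver" ;;
  *)                               recommendation="unknown - check manually" ;;
esac
echo "$recommendation"
```

This is only a convenience shortcut; inxi and hwinfo remain the authoritative checks.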
There are two different ways to install NVIDIA drivers: either use the GFX Repository or use the CUDA Repository.
GFX Repository
First add the repository if it has not been added yet. On openSUSE Leap/Tumbleweed and with the SLE 15 Desktop and SLE 15 Workstation Extension it is added by default, so check first whether it has already been added.
# openSUSE Leap/Tumbleweed
zypper repos -u | grep https://download.nvidia.com/opensuse/
# SLE
zypper repos -u | grep https://download.nvidia.com/suse
Verify that the repository is enabled. If the output was empty, add the repository now:
# Leap 15.6
zypper addrepo https://download.nvidia.com/opensuse/leap/15.6/ nvidia
# Leap 16.0 (Beta)
zypper addrepo https://download.nvidia.com/opensuse/leap/16.0/ nvidia
# Tumbleweed
zypper addrepo https://download.nvidia.com/opensuse/tumbleweed/ nvidia
# SLE15-SP6
zypper addrepo https://download.nvidia.com/suse/sle15sp6/ nvidia
# SLE15-SP7
zypper addrepo https://download.nvidia.com/suse/sle15sp7/ nvidia
# SLE16 (Beta)
zypper addrepo https://download.nvidia.com/suse/sle16/ nvidia
With the following command, the appropriate driver (Proprietary or Open Kernel driver) is installed depending on the GPU in your system. In addition, the CUDA and Desktop drivers are installed according to the software packages currently installed (Desktop driver trigger: the libglvnd package).
zypper inr
Installation of Open driver on SLE15-SP6, Leap 15.6 and Tumbleweed
Unfortunately our SLE15-SP6, Leap 15.6 and Tumbleweed repositories still contain packages for the older Proprietary driver (version 550), which are still registered for Turing+ GPUs. The reason is that, at the time, the Open driver was not yet considered stable for the desktop. Therefore, if you own a Turing+ GPU (check above) and would like to use the Open driver (which is recommended!), please use the following command instead of the one above.
zypper in nvidia-open-driver-G06-signed-kmp-meta
Otherwise you will initially end up with Proprietary driver release 550, which will later be updated to the current version of the Proprietary driver, but never be replaced by the Open driver automatically.
Understanding package dependencies
The following graphic explains the installation and package dependencies. Zoom in for better readability.
CUDA Repository
Add the repository if it hasn’t been added yet. On SLE15 it might have already been added as a Module. So check first:
# openSUSE Leap/Tumbleweed
zypper repos -u | grep https://developer.download.nvidia.com/compute/cuda/repos/opensuse15
# SLE
zypper repos -u | grep https://developer.download.nvidia.com/compute/cuda/repos/sles15
Verify that the repository is enabled. If the output is empty, add the repository now:
# Leap 15.6/16.0(Beta)/Tumbleweed
zypper addrepo https://developer.download.nvidia.com/compute/cuda/repos/opensuse15/x86_64/ cuda
# SLE15-SPx/SLE16(Beta) (x86_64)
zypper addrepo https://developer.download.nvidia.com/compute/cuda/repos/sles15/x86_64/ cuda
# SLE15-SPx/SLE16(Beta) (aarch64)
zypper addrepo https://developer.download.nvidia.com/compute/cuda/repos/sles15/sbsa/ cuda
Use Open prebuilt/secureboot-signed Kernel driver (GPU >= Turing)
In case you have a Turing or later GPU, it is strongly recommended to use our prebuilt and secureboot-signed Kernel driver. Unfortunately this is often not the latest driver available, since this driver needs to go through our official QA and Maintenance process before it can be released through our product update channels, but things are much easier to handle for the user.
# Install open prebuilt/secureboot-signed Kernel driver
zypper in nvidia-open-driver-G06-signed-cuda-kmp-default
# Make sure userspace CUDA/Desktop drivers will be in sync with just installed open prebuilt/secureboot-signed Kernel driver
version=$(rpm -qa --queryformat '%{VERSION}\n' nvidia-open-driver-G06-signed-cuda-kmp-default | cut -d "_" -f1 | sort -u | tail -n 1)
# Install CUDA drivers
zypper in nvidia-compute-utils-G06==${version} nvidia-persistenced==${version}
# Install Desktop drivers
zypper in nvidia-video-G06==${version}
Use Open DKMS Kernel driver on GPUs >= Turing (latest driver available)
If you really need the latest Open driver (also for Turing and later), use NVIDIA’s Open DKMS Kernel driver. This will build this driver on demand for the appropriate Kernel during the boot process.
# Install latest Open DKMS Kernel driver
zypper in nvidia-open-driver-G06
# Install CUDA drivers
zypper in nvidia-compute-utils-G06
# Install Desktop drivers
zypper in nvidia-video-G06
On Secure Boot systems you still need to import the certificate, so you can later enroll it right after reboot in the MOK-Manager by using your root password.
mokutil --import /var/lib/dkms/mok.pub --root-pw
Otherwise your freshly built kernel modules can’t be loaded by your kernel later.
Use Proprietary DKMS Kernel driver on Maxwell <= GPU < Turing
For Maxwell, Pascal and Volta you need to use the Proprietary DKMS Kernel driver.
# Install proprietary DKMS Kernel driver
zypper in nvidia-driver-G06
# Install CUDA drivers
zypper in nvidia-compute-utils-G06
# Install Desktop drivers
zypper in nvidia-video-G06
Installation of CUDA
In case you used the GFX Repository for installing NVIDIA drivers, first add the CUDA Repository as outlined in the CUDA Repository chapter above.
The following commands install the CUDA packages themselves, covering both a regular and a minimal installation, and make it easy to run first tests with CUDA. Depending on which Kernel driver is in use, it may be necessary to install a different CUDA version.
# Kernel driver being installed via GFX Repo
cuda_version=13-0
# Kernel driver being installed via CUDA Repo
cuda_version=13-0
# Regular installation
zypper in cuda-toolkit-${cuda_version}
# Minimal installation
zypper in cuda-libraries-${cuda_version}
# Unfortunately the following package is not available for aarch64,
# but there are CUDA samples available on GitHub, which can be
# compiled from source: https://github.com/nvidia/cuda-samples
zypper in cuda-demo-suite-12-9
Let’s have a first test for using libcuda (only available on x86_64).
/usr/local/cuda-12/extras/demo_suite/deviceQuery
Which one to choose for NVIDIA driver installation: GFX or CUDA Repository?
Good question, and not so easy to answer. If you rely on support from NVIDIA (especially when using SLE), we strongly recommend using the CUDA Repository for NVIDIA driver installation for Compute usage, even if you use the NVIDIA Desktop drivers as well.
For others - usually running openSUSE Leap/Tumbleweed - it’s fine to use GFX Repository for NVIDIA driver installation and adding CUDA Repository for installing CUDA packages.
Known issues
CUDA Repository
Once you have added the CUDA Repository, it may happen that some old or not recommended driver packages get mistakenly auto-selected for installation, or have even already been mistakenly installed. These are:
- nvidia-gfxG05-kmp-default 535.x
- nvidia-open-gfxG05-kmp-default 535.x
- nvidia-open-driver-G06-kmp-default 570.x
- nvidia-driver-G06-kmp-default 570.x
- nvidia-open-driver-G06
In order to avoid mistakenly installing them add package locks for them with zypper.
zypper addlock nvidia-gfxG05-kmp-default
zypper addlock nvidia-open-gfxG05-kmp-default
zypper addlock nvidia-open-driver-G06-kmp-default
# only if you have Turing and higher, i.e. use Open Kernel driver
zypper addlock nvidia-driver-G06-kmp-default
# unless you plan to use the DKMS Open driver package from CUDA repository
zypper addlock nvidia-open-driver-G06
In case you see any of these packages already installed on your system, read the Troubleshooting section below to learn how to get rid of them and all other related nvidia driver packages. Afterwards add locks to them as described right above.
Tumbleweed
On Tumbleweed it may happen that some legacy driver packages get mistakenly auto-selected for installation or even have already been mistakenly installed. These are:
- nvidia-gfxG04-kmp-default
- nvidia-gfxG05-kmp-default
In order to avoid mistakenly installing them add package locks for them with zypper.
zypper addlock nvidia-gfxG04-kmp-default
zypper addlock nvidia-gfxG05-kmp-default
In case you see any of these packages already installed on your system, read the Troubleshooting section below to learn how to get rid of them and all other related nvidia driver packages. Afterwards add locks to them as described right above.
Leap 15.6
On Leap 15.6, doing a zypper dup may result in a proposal to downgrade the driver packages to some older 570 version while switching to the -azure kernel flavor at the same time. The culprit for this issue is currently unknown, but you can prevent it from happening by adding a package lock with zypper.
zypper addlock nvidia-open-driver-G06-signed-kmp-azure
Troubleshooting
In case you got lost in a mess of nvidia driver packages for different driver versions, the best way to figure out the current state of the system is to run:
rpm -qa | grep -e ^nvidia -e ^libnvidia | grep -v container | sort
Often the best approach is then to begin from scratch, i.e. remove all the nvidia driver packages by running:
rpm -e $(rpm -qa | grep -e ^nvidia -e ^libnvidia | grep -v container)
Then follow (again) the instructions above for installing the driver using the GFX or CUDA Repository.
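The grep filter used in the queries above can be sanity-checked against a hard-coded sample package list; the package names below are illustrative stand-ins for rpm -qa output.

```shell
# Stand-in for the output of: rpm -qa
sample_packages="nvidia-video-G06
libnvidia-container-tools
nvidia-compute-utils-G06
kernel-default
libnvidia-egl-wayland1"

# Keep nvidia/libnvidia packages, but drop container toolkit packages
matches=$(printf '%s\n' "$sample_packages" | grep -e ^nvidia -e ^libnvidia | grep -v container | LC_ALL=C sort)
echo "$matches"
```

Note how libnvidia-container-tools is intentionally excluded: the NVIDIA container toolkit is not part of the driver stack and should survive a driver cleanup.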
POWER Is Not Just for Databases
The IBM POWER architecture is not just for database servers. While most people know it only for DB2 and SAP HANA, it is also an ideal platform for HPC and other high-performance server applications, like syslog-ng.
While all the buzz is around POWER 11 now, we have yet to see real-world testing results, as GA is still a few weeks away. You can learn more about POWER 11 at https://newsroom.ibm.com/2025-07-08-ibm-power11-raises-the-bar-for-enterprise-it. I am an environmental engineer by degree, so my favorite part is: “Power11 offers twice the performance per watt versus comparable x86 servers”.
People look surprised when I mention that I am an IBM Champion for POWER, saying “You are not a database guy. What do you have to do with POWER?”. Well, I have 30+ years of history with POWER, but I never had to do anything with databases. My focus was always open source software, even on AIX: https://opensource.com/article/20/10/power-architecture
Of course, we should not forget that POWER is the best platform to run SAP HANA workloads. Not just locally, but also in the cloud: https://www.ibm.com/new/announcements/ibm-and-sap-launch-new-hyperscaler-option-for-sap-cloud-erp. However, there are many other use cases for POWER.
I must admit that I’m not really into chip design. Still, it fascinates me how IBM POWER is more powerful (pun intended!), when it comes to a crucial part: Physical Design Verification using Synopsys IC Validator (ICV). While most people complain that POWER hardware is expensive, it is also faster. Compared to x86, it still can provide a 66% better TCO on workloads like PDV. For details, check: https://www.ibm.com/account/reg/us-en/signup?formid=urx-53646
Do you still think that buying hardware is too expensive? You can try PowerVS, where POWER 11 will also be available soon: https://community.ibm.com/community/user/blogs/anthony-ciccone/2025/07/07/ibm-power11-launches-in-ibm-power-virtual-server-u
Obviously, my favorite part is a simple system utility: syslog-ng. It is an enhanced logging daemon with a focus on portability and high-performance central log collection. When POWER 9 was released, I did a few performance tests. On the fastest x86 servers I had access to, syslog-ng could barely collect 1 million messages a second. The P9 server I had access to could collect slightly more than 3 million, which is a significant difference. Of course, testing results depend not only on the CPU, but also on the OS version, OS tuning, side-channel attack mitigations, etc.
I am not sure when I’ll have access to a POWER 11 box. However, you can easily do syslog-ng performance tests yourself using a shell script: https://github.com/czanik/sngbench/ Let me know if you have tested it out on P11! :-)

FreeBSD audit source is coming to syslog-ng
Last year, I wrote a small configuration snippet for syslog-ng: a FreeBSD audit source. I published it in a previous blog and, based on feedback, it is already used in production. Soon, it will also be available as part of a syslog-ng release.
As an active FreeBSD user and co-maintainer of the sysutils/syslog-ng port for FreeBSD, I am always happy to share FreeBSD-related news. Last year, we improved directory monitoring and file reading on FreeBSD and macOS. Now, the FreeBSD audit source is already available in syslog-ng development snapshots.
Read more at https://www.syslog-ng.com/community/b/blog/posts/freebsd-audit-source-is-coming-to-syslog-ng

openSUSE 15.5 to 15.6 upgrade notes
Running Local LLMs with Ollama on openSUSE Tumbleweed
Running large language models (LLMs) on your local machine has become increasingly popular, offering privacy, offline access, and customization. Ollama is a fantastic tool that simplifies the process of downloading, setting up, and running LLMs locally. It uses the powerful llama.cpp as its backend, allowing for efficient inference on a variety of hardware. This guide will walk you through installing Ollama on openSUSE Tumbleweed, and explain key concepts like Modelfiles, model tags, and quantization.
Installing Ollama on openSUSE Tumbleweed
Ollama provides a simple one-line command for installation. Open your terminal and run the following:
curl -fsSL https://ollama.com/install.sh | sh
This script will download and set up Ollama on your system. It will also detect if you have a supported GPU and configure itself accordingly.
If you prefer to use zypper, you can install Ollama directly from the repository:
sudo zypper install ollama
This command will install Ollama and all its dependencies. If you encounter any issues, make sure your system is up to date:
sudo zypper refresh
sudo zypper update
Once the installation is complete, you can start the Ollama service:
sudo systemctl start ollama
To have it start on boot:
sudo systemctl enable ollama
Running Your First LLM
With Ollama installed, running an LLM is as simple as one command. Let’s try running the llama3 model:
ollama run llama3
The first time you run this command, Ollama will download the model, which might take some time depending on your internet connection. Once downloaded, you’ll be greeted with a prompt where you can start chatting with the model.
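Beyond the interactive prompt, Ollama also serves a local REST API on port 11434, so the same model can be queried programmatically. The sketch below only builds the JSON request body for the /api/generate endpoint; the prompt text is just an example.

```shell
model="llama3"
prompt="Why is the sky blue?"

# Request body for Ollama's REST API (POST http://localhost:11434/api/generate);
# "stream": false returns one JSON object instead of a token stream
body=$(printf '{"model":"%s","prompt":"%s","stream":false}' "$model" "$prompt")
echo "$body"

# With the ollama service running, send it with:
# curl -s http://localhost:11434/api/generate -d "$body"
```

This is handy for scripting: the response is JSON, so it can be post-processed with tools like jq.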
Choosing the Right Model
The Ollama library has a wide variety of models. When you visit a model’s page on the Ollama website, you’ll see different “tags”. Understanding these tags is key to picking the right model for your needs and hardware.
Model Size (e.g., 7b, 8x7b, 70b)
These tags refer to the number of parameters in the model, in billions.
- 7b: A 7-billion parameter model. These are great for general tasks, run relatively fast, and don’t require a huge amount of RAM.
- 4b: A 4-billion parameter model. Even smaller and faster, ideal for devices with limited resources.
- 70b: A 70-billion parameter model. These are much more powerful and capable, but require significant RAM and a powerful GPU to run at a reasonable speed.
- 8x7b: This indicates a “Mixture of Experts” (MoE) model. In this case, it has 8 “expert” models of 7 billion parameters each. Only a fraction of the total parameters are used for any given request, making it more efficient than a dense model of similar total size.
- 70b_MoE: Similar to 8x7b, this is a 70-billion parameter MoE model, which can be more efficient for certain tasks.
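To see why MoE models are efficient, compare total and active parameters. The sketch below assumes 2 active experts per token (the configuration Mixtral uses, taken here only as an illustration); note that real MoE models share some layers between experts, so e.g. Mixtral 8x7B has roughly 47B total parameters rather than the naive 56B.

```shell
experts=8          # number of expert networks in an "8x7b" model
per_expert_b=7     # billions of parameters per expert
active_experts=2   # experts consulted per token (assumed, Mixtral-style)

total_b=$(( experts * per_expert_b ))         # naive upper bound; shared layers reduce this
active_b=$(( active_experts * per_expert_b ))
echo "8x7b MoE: up to ${total_b}B parameters total, ~${active_b}B active per token"
```

The practical consequence: you still need RAM for all the weights, but per-token compute is closer to that of a much smaller dense model.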
Specialization Tags (e.g., tools, thinking, vision)
Some models are fine-tuned for specific tasks:
- tools: These models are designed for “tool use,” where the LLM can use external tools (like a calculator, or an API) to answer questions.
- thinking: This tag often implies the model has been trained to “show its work” or think step-by-step, which can lead to more accurate results for complex reasoning tasks.
- vision: Models with this tag are fine-tuned for tasks involving visual inputs, such as image recognition or analysis.
Distilled Models (distill)
A “distilled” model is a smaller model that has been trained on the output of a larger, more capable model. The goal is to transfer the knowledge and capabilities of the large model into a much smaller and more efficient one.
Understanding Quantization
Most models you see on Ollama are “quantized”. Quantization is the process of reducing the precision of the model’s weights (the numbers that make up the model). This makes the model file smaller and reduces the amount of RAM and VRAM needed to run it, with a small trade-off in accuracy.
Here are some common quantization tags you’ll encounter:
- fp16: Full-precision 16-bit floating point. This is often the original, un-quantized version of the model. It offers the best quality but has the highest resource requirements.
- q8 or q8_0: 8-bit quantization. A good balance between performance and quality.
- q4: 4-bit quantization. Significantly smaller and faster, but with a more noticeable impact on quality.
- q4_K_M: This is a more advanced 4-bit quantization method. The K_M part indicates a specific variant (K-means quantization, Medium size) that often provides better quality than a standard q4 quantization.
- q8_0: This is a newer 8-bit quantization method that offers improved performance and quality over older 8-bit methods.
For most users, starting with a q4_K_M or a q8_0 version of a model is a great choice.
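The size differences follow directly from bits per weight. This back-of-the-envelope sketch ignores per-block quantization metadata and non-weight tensors, so real model files are somewhat larger than these estimates.

```shell
params_b=7   # model size in billions of parameters

# fp16: 16 bits (2 bytes) per weight; q8: 8 bits (1 byte); q4: 4 bits (0.5 bytes)
fp16_gb=$(( params_b * 2 ))
q8_gb=$(( params_b * 1 ))
q4_gb=$(( params_b / 2 ))
echo "7b model: fp16 ~${fp16_gb} GB, q8 ~${q8_gb} GB, q4 ~${q4_gb} GB"
```

This also explains the RAM guidance: a 7b q4 model fits comfortably on machines where the fp16 version would not.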
Customizing Models with a Modelfile
Ollama uses a concept called a Modelfile to allow you to customize models. A Modelfile is a text file that defines a model’s base model, system prompt, parameters, and more.
Here is a simple example of a Modelfile that creates a persona for the llama3 model:
FROM llama3
# Set the temperature for creativity
PARAMETER temperature 1
# Set the system message
SYSTEM """
You are a pirate. You will answer all questions in the voice of a pirate.
"""
To create and run this custom model:
- Save the text above into a file named Modelfile in your current directory.
- Run the following command to create the model:
ollama create pirate -f ./Modelfile
- Now you can run your customized model:
ollama run pirate
Now, your LLM will respond like a pirate! This is a simple example, but Modelfiles can be used for much more complex customizations.
For more information, check out the official Ollama documentation:
- Ollama Documentation: The main documentation for Ollama.
- Importing Models: Learn how to import models from other formats.
- Hugging Face Integration: Information on using Ollama with Hugging Face.
Happy modeling on your openSUSE system!
Sovereign AI Platform Picks openSUSE
Europe’s first federated AI initiative has chosen openSUSE as part of its foundation aimed at sovereign AI.
OpenNebula Systems officially announced the launch of Fact8ra, which is Europe’s first federated AI-as-a-Service platform.
This initiative marks a major milestone under a €3 billion Important Project of Common European Interest (IPCEI) on Next Generation Cloud Infrastructure and Services (IPCEI-CIS).
“Fact8ra is able to combine computing resources from eight EU Member States,” according to the press release on July 9.
Those states are France, Germany, Italy, Latvia, the Netherlands, Poland, Spain and Sweden, where Fact8ra aims to deliver sovereign, open source AI capabilities across Europe’s High Performance Computing (HPC), cloud and telco infrastructure.
The technology of openSUSE, which is a global open-source project sponsored by SUSE, was selected as a core component of Fact8ra’s sovereign AI stack.
The validation by one of Europe’s largest public-private cloud projects is a testament to the trust in openSUSE’s stability, adaptability and openness. It can be used not only for server-grade applications but also for advanced AI/ML workloads.
The stack not only incorporates openSUSE, but other European open source technologies such as OpenNebula and MariaDB, according to the release.
The platform enables deployment of private instances of open source large language models (LLMs), including Mistral and EuroLLM, while offering native integration with external catalogs like Hugging Face.
The inclusion of openSUSE in Fact8ra is more than a technical choice; it’s a strategic endorsement.
Fact8ra’s mission centers on European technological sovereignty and reducing dependence on foreign platforms for AI innovation.
The operating system’s ability to support cloud-native environments, container orchestration with Kubernetes, and hardware acceleration tools for AI inference has earned it a place in one of the EU’s most ambitious digital projects to date.
Project Seeks Input on Future of 32-bit ARM
The openSUSE Project is seeking community input to determine whether it should continue supporting 32-bit ARM architectures.
Maintaining support for legacy platforms is increasingly challenging. The openSUSE team cited limited upstream support and dwindling maintenance resources as key factors behind the potential decision to retire 32-bit ARM (ARMv6 and ARMv7) support.
Devices like the Raspberry Pi 1, Pi Zero, BeagleBone, and other older embedded boards rely on 32-bit ARM. If you’re using openSUSE on any of these platforms, the team wants to hear from you.
Take the survey at survey.opensuse.org to help the team determine a path for 32-bit ARM architectures.
The survey runs until the end of July.
Get Involved
If you’re interested in helping maintain 32-bit ARM support through testing, bug reports, or documentation, join one of the following communication channels:
IRC: #opensuse-arm on Libera.Chat
Matrix: #arm:opensuse.org
Mailing List: lists.opensuse.org
Celebrating 20 Years of openSUSE
Contributors and community members are encouraged to celebrate the openSUSE Project’s 20th anniversary by sharing some of their favorite moments from the past two decades.
Over the years, the project has grown into a global movement, powering desktops, servers and development environments across the open source world.
To celebrate the project’s vibrant history, we are collecting photos from across the globe that capture the spirit of the project from conferences and hackathons to community meetups, swag collections and personal milestones.
Members are encouraged to submit up to 20 images via the drop file. Submissions will be used in presentations showcasing the 20th anniversary and shared amongst members of the community.
People are encouraged to celebrate something in their town or locally and share their photos for use in news articles and presentations.
Community members are encouraged to present the 20-year history of the project at conferences and summits. A presentation using the images will be made available on the openSUSE Wiki.
Members are also encouraged to celebrate the 20th Anniversary on August 9 in the openSUSE bar where members can reminisce with others in the global community.
During the openSUSE.Asia Summit, there will be a quiz to win 20th Anniversary t-shirts.
This is a celebration of the people who have made openSUSE what it is today. It’s an accomplishment that happened through seasoned and new contributors along with passionate users. Every photo tells a piece of our shared story.
High-resolution images are appreciated; a minimum resolution of 300 dpi is ideal.
The openSUSE Project was launched on August 9, 2005.
Anyone with group photos from openSUSE Conferences is asked to email them to ddemaio@opensuse.org
Travel Support Available for openSUSE.Asia Summit
The openSUSE.Asia Summit 2025 will take place from August 29 to 31 at MRIIRS in Faridabad, India, and we’re excited to welcome community members from across Asia and around the globe.
To make the summit more accessible, the Travel Support Program (TSP) is available to assist participants who may need financial support to attend. Funded by the Geeko Foundation, the TSP helps cover some of the travel expenses presenters might incur traveling to the summit.
If you’re attending the summit and need support, submit your TSP application no later than July 31.
Key Dates:
TSP Application Deadline: July 31, 2025
Summit Dates: August 29 – 31, 2025
Location: MRIIRS, Faridabad, India
This is a great opportunity to connect with fellow open-source enthusiasts, share knowledge, and help shape the future of openSUSE and Linux in Asia.
For details on how to apply, visit: https://en.opensuse.org/openSUSE:Travel_Support_Program and review the Geeko Foundation’s travel policy.
The foundation will work with the summit’s organizers to determine the level of support that can be provided to participants.