POWER Is Not Just for Databases

The IBM POWER architecture is not just for database servers. While most people know it only for DB2 and SAP HANA, it is also an ideal platform for HPC and other high-performance server applications, such as syslog-ng.

While all the buzz is around POWER 11 now, we have yet to see real-world testing results, as GA is still a few weeks away. You can learn more about POWER 11 at https://newsroom.ibm.com/2025-07-08-ibm-power11-raises-the-bar-for-enterprise-it. I am an environmental engineer by degree, so my favorite part is: “Power11 offers twice the performance per watt versus comparable x86 servers”.

People look surprised when I mention that I am an IBM Champion for POWER, saying “You are not a database guy. What do you have to do with POWER?”. Well, I have 30+ years of history with POWER, but I never had anything to do with databases. My focus has always been open source software, even on AIX: https://opensource.com/article/20/10/power-architecture

Of course, we should not forget that POWER is the best platform to run SAP HANA workloads. Not just locally, but also in the cloud: https://www.ibm.com/new/announcements/ibm-and-sap-launch-new-hyperscaler-option-for-sap-cloud-erp. However, there are many other use cases for POWER.

I must admit that I’m not really into chip design. Still, it fascinates me how IBM POWER is more powerful (pun intended!) when it comes to a crucial part of it: Physical Design Verification (PDV) using Synopsys IC Validator (ICV). While most people complain that POWER hardware is expensive, it is also faster: compared to x86, it can still provide 66% better TCO on workloads like PDV. For details, check: https://www.ibm.com/account/reg/us-en/signup?formid=urx-53646

Do you still think that buying hardware is too expensive? You can try PowerVS, where POWER 11 will also be available soon: https://community.ibm.com/community/user/blogs/anthony-ciccone/2025/07/07/ibm-power11-launches-in-ibm-power-virtual-server-u

Obviously, my favorite part is a simple system utility: syslog-ng. It is an enhanced logging daemon with a focus on portability and high-performance central log collection. When POWER 9 was released, I did a few performance tests. On the fastest x86 servers I had access to, syslog-ng could barely collect one million messages per second. The P9 server I had access to collected slightly more than three million, which is a significant difference. Of course, test results depend not only on the CPU, but also on the OS version, OS tuning, side-channel attack mitigations, and so on.

I am not sure when I’ll have access to a POWER 11 box. However, you can easily do syslog-ng performance tests yourself using a shell script: https://github.com/czanik/sngbench/ Let me know if you have tested it out on P11! :-)

FreeBSD audit source is coming to syslog-ng

Last year, I wrote a small configuration snippet for syslog-ng: a FreeBSD audit source. I published it in a previous blog post, and based on feedback, it is already used in production. Soon, it will also be available as part of a syslog-ng release.

As an active FreeBSD user and co-maintainer of the sysutils/syslog-ng port for FreeBSD, I am always happy to share FreeBSD-related news. Last year, we improved directory monitoring and file reading on FreeBSD and macOS. Now, the FreeBSD audit source is already available in syslog-ng development snapshots.

Read more at https://www.syslog-ng.com/community/b/blog/posts/freebsd-audit-source-is-coming-to-syslog-ng

openSUSE 15.5 to 15.6 upgrade notes

In a previous article I showed how to upgrade the distribution using zypper and the recently released zypper-upgradedistro plugin. However, version-specific issues can always come up, so I collected all the changes and tweaks I applied while switching from openSUSE Leap 15.5 to 15.6, both during and after the installation process.

Running Local LLMs with Ollama on openSUSE Tumbleweed

Running large language models (LLMs) on your local machine has become increasingly popular, offering privacy, offline access, and customization. Ollama is a fantastic tool that simplifies the process of downloading, setting up, and running LLMs locally. It uses the powerful llama.cpp as its backend, allowing for efficient inference on a variety of hardware. This guide will walk you through installing Ollama on openSUSE Tumbleweed, and explain key concepts like Modelfiles, model tags, and quantization.

Installing Ollama on openSUSE Tumbleweed

Ollama provides a simple one-line command for installation. Open your terminal and run the following:

curl -fsSL https://ollama.com/install.sh | sh

This script will download and set up Ollama on your system. It will also detect if you have a supported GPU and configure itself accordingly.

If you prefer to use zypper, you can install Ollama directly from the repository:

sudo zypper install ollama

This command will install Ollama and all its dependencies. If you encounter any issues, make sure your system is up to date:

sudo zypper refresh
sudo zypper update

Once the installation is complete, you can start the Ollama service:

sudo systemctl start ollama

To have it start on boot:

sudo systemctl enable ollama
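
To quickly verify that the service is up, you can check its status with a standard systemd command (shown here for convenience):

systemctl status ollama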

Running Your First LLM

With Ollama installed, running an LLM is as simple as one command. Let’s try running the llama3 model:

ollama run llama3

The first time you run this command, Ollama will download the model, which might take some time depending on your internet connection. Once downloaded, you’ll be greeted with a prompt where you can start chatting with the model.
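
To keep track of what is on your machine, two other subcommands are handy (shown here as a quick illustration):

ollama list
ollama ps

The first shows the models you have downloaded and their sizes on disk, while the second shows the models currently loaded in memory.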

Choosing the Right Model

The Ollama library has a wide variety of models. When you visit a model’s page on the Ollama website, you’ll see different “tags”. Understanding these tags is key to picking the right model for your needs and hardware.

Model Size (e.g., 7b, 8x7b, 70b)

These tags refer to the number of parameters in the model, in billions.

  • 7b: A 7-billion parameter model. These are great for general tasks, run relatively fast, and don’t require a huge amount of RAM.
  • 4b: A 4-billion parameter model. Even smaller and faster, ideal for devices with limited resources.
  • 70b: A 70-billion parameter model. These are much more powerful and capable, but require significant RAM and a powerful GPU to run at a reasonable speed.
  • 8x7b: This indicates a “Mixture of Experts” (MoE) model. In this case, it has 8 “expert” models of 7 billion parameters each. Only a fraction of the total parameters are used for any given request, making it more efficient than a dense model of similar total size.
  • 70b_MoE: Similar to 8x7b, this is a 70-billion parameter MoE model, which can be more efficient for certain tasks.
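
To run a specific size variant, append the tag after the model name. For example, the following runs the 70-billion parameter variant of llama3 (the exact tags available differ per model, so check the model’s page in the Ollama library):

ollama run llama3:70b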

Specialization Tags (e.g., tools, thinking, vision)

Some models are fine-tuned for specific tasks:

  • tools: These models are designed for “tool use,” where the LLM can use external tools (like a calculator, or an API) to answer questions.
  • thinking: This tag often implies the model has been trained to “show its work” or think step-by-step, which can lead to more accurate results for complex reasoning tasks.
  • vision: Models with this tag are fine-tuned for tasks involving visual inputs, such as image recognition or analysis.

Distilled Models (distill)

A “distilled” model is a smaller model that has been trained on the output of a larger, more capable model. The goal is to transfer the knowledge and capabilities of the large model into a much smaller and more efficient one.

Understanding Quantization

Most models you see on Ollama are “quantized”. Quantization is the process of reducing the precision of the model’s weights (the numbers that make up the model). This makes the model file smaller and reduces the amount of RAM and VRAM needed to run it, with a small trade-off in accuracy.

Here are some common quantization tags you’ll encounter:

  • fp16: Full-precision 16-bit floating point. This is often the original, un-quantized version of the model. It offers the best quality but has the highest resource requirements.
  • q8 or q8_0: 8-bit quantization. A good balance between performance and quality.
  • q4: 4-bit quantization. Significantly smaller and faster, but with a more noticeable impact on quality.
  • q4_K_M: This is a more advanced 4-bit quantization method. The K_M part indicates a specific variant (K-means quantization, Medium size) that often provides better quality than a standard q4 quantization.
  • q8_O: This is a newer 8-bit quantization method that offers improved performance and quality over older 8-bit methods.

For most users, starting with a q4_K_M or a q8_0 version of a model is a great choice.
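
Quantization variants are selected the same way, by using the full tag. As an illustration, the command below pulls an 8-billion parameter instruct model with q4_K_M quantization; the exact tag string varies from model to model, so treat it as an example rather than a fixed name:

ollama pull llama3:8b-instruct-q4_K_M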

Customizing Models with a Modelfile

Ollama uses a concept called a Modelfile to allow you to customize models. A Modelfile is a text file that defines a model’s base model, system prompt, parameters, and more.

Here is a simple example of a Modelfile that creates a persona for the llama3 model:

FROM llama3

# Set the temperature for creativity
PARAMETER temperature 1

# Set the system message
SYSTEM """
You are a pirate. You will answer all questions in the voice of a pirate.
"""

To create and run this custom model:

  1. Save the text above into a file named Modelfile in your current directory.
  2. Run the following command to create the model:

     ollama create pirate -f ./Modelfile
    
  3. Now you can run your customized model:

     ollama run pirate
    

Now, your LLM will respond like a pirate! This is a simple example, but Modelfiles can be used for much more complex customizations.
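
Beyond the interactive prompt, you can also query your custom model through Ollama’s local HTTP API, which listens on port 11434 by default. Here is a minimal sketch using the pirate model created above:

curl http://localhost:11434/api/generate -d '{
  "model": "pirate",
  "prompt": "How do I update openSUSE Tumbleweed?",
  "stream": false
}'

Setting "stream" to false returns the whole answer as a single JSON response instead of a stream of tokens.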

For more information, check out the official Ollama documentation.

Happy modeling on your openSUSE system!

Sovereign AI Platform Picks openSUSE

Europe’s first federated AI initiative has chosen openSUSE as part of its foundation aimed at sovereign AI.

OpenNebula Systems officially announced the launch of Fact8ra, which is Europe’s first federated AI-as-a-Service platform.

This initiative marks a major milestone under the €3 billion Important Project of Common European Interest (IPCEI) on Next Generation Cloud Infrastructure and Services (IPCEI-CIS).

“Fact8ra is able to combine computing resources from eight EU Member States,” according to the press release on July 9.

Those states are France, Germany, Italy, Latvia, the Netherlands, Poland, Spain, and Sweden, where Fact8ra aims to deliver sovereign, open source AI capabilities across Europe’s High Performance Computing (HPC), cloud and telco infrastructure.

The technology of openSUSE, which is a global open-source project sponsored by SUSE, was selected as a core component of Fact8ra’s sovereign AI stack.

The validation by one of Europe’s largest public-private cloud projects is a testament to the trust in openSUSE’s stability, adaptability and openness. It can be used not only for server-grade applications but also for advanced AI/ML workloads.

The stack incorporates not only openSUSE but also other European open source technologies such as OpenNebula and MariaDB, according to the release.

The platform enables deployment of private instances of open source large language models (LLMs), including Mistral and EuroLLM, while offering native integration with external catalogs like Hugging Face.

The inclusion of openSUSE in Fact8ra is more than a technical choice; it’s a strategic endorsement.

Fact8ra’s mission centers on European technological sovereignty and reducing dependence on foreign platforms for AI innovation.

The operating system’s ability to support cloud-native environments, container orchestration with Kubernetes, and hardware acceleration tools for AI inference has earned it a place in one of the EU’s most ambitious digital projects to date.

Project Seeks Input on Future of 32-bit ARM

The openSUSE Project is seeking community input to determine whether it should continue supporting 32-bit ARM architectures.

Maintaining support for legacy platforms is increasingly challenging. The openSUSE team cited limited upstream support and dwindling maintenance resources as key factors behind the potential decision to retire 32-bit ARM (ARMv6 and ARMv7) support.

Devices like the Raspberry Pi 1, Pi Zero, BeagleBone, and other older embedded boards rely on 32-bit ARM. If you’re using openSUSE on any of these platforms, the team wants to hear from you.

Take the survey at survey.opensuse.org to help the team determine a path for 32-bit ARM architectures.

The survey runs until the end of July.

Get Involved

If you’re interested in helping maintain 32-bit ARM support through testing, bug reports, or documentation, join one of the following communication channels:

IRC: #opensuse-arm on Libera.Chat

Matrix: #arm:opensuse.org

Mailing List: lists.opensuse.org

Celebrating 20 Years of openSUSE

Contributors and community members are encouraged to celebrate the openSUSE Project’s 20th anniversary by sharing some of their favorite moments from the past two decades.

Over the years, it has grown into a global movement, powering desktops, servers and development environments across the open source world.

To celebrate the project’s vibrant history, we are collecting photos from across the globe that capture the spirit of the project from conferences and hackathons to community meetups, swag collections and personal milestones.

Members are encouraged to submit up to 20 images to the drop file. Submissions will be used in presentations showcasing the 20th anniversary and shared amongst members of the community.

People are encouraged to celebrate locally in their own towns and share their photos for use in news posts and presentations.

Community members are encouraged to present the 20-year history of the project at conferences and summits. A presentation using the images will be made available on the openSUSE Wiki.

Members are also encouraged to celebrate the 20th Anniversary on August 9 in the openSUSE bar, where members can reminisce with others in the global community.

During the openSUSE.Asia Summit, there will be a quiz to win 20th Anniversary t-shirts.

This is a celebration of the people who have made openSUSE what it is today. It’s an accomplishment that happened through seasoned and new contributors along with passionate users. Every photo tells a piece of our shared story.

High-resolution images are appreciated; a minimum resolution of 300 dpi is ideal.

The openSUSE Project was launched on August 9, 2005.

Anyone who has group photos from openSUSE Conferences is asked to email them to ddemaio@opensuse.org.

Travel Support Available for openSUSE.Asia Summit

The openSUSE.Asia Summit 2025 will take place from August 29 to 31 at MRIIRS in Faridabad, India, and we’re excited to welcome community members from across Asia and around the globe.

To make the summit more accessible, the Travel Support Program (TSP) is available to assist participants who may need financial support to attend. Funded by the Geeko Foundation, the TSP helps cover some of the travel expenses presenters might incur when traveling to the summit.

If you’re attending the summit and need support, submit your TSP application no later than July 31.

Key Dates:

TSP Application Deadline: July 31, 2025

Summit Dates: August 29 – 31, 2025

Location: MRIIRS, Faridabad, India

This is a great opportunity to connect with fellow open-source enthusiasts, share knowledge, and help shape the future of openSUSE and Linux in Asia.

For details on how to apply, visit https://en.opensuse.org/openSUSE:Travel_Support_Program and review the Geeko Foundation’s travel policy.

The foundation will work with the summit’s organizers to determine the level of support that can be provided to participants.

Releasing version 16

As we dive into the European summer, Agama development continues steadily. And what is more refreshing than a new version of a Linux installer? Enjoy Agama 16, loaded with bug fixes and several features covering both unattended and interactive scenarios.

Regarding the former, we keep expanding the configuration options that are accessible through the Agama command-line tools and the JSON profiles. So let's start by taking a look at those features before jumping into the more visible user interface changes.

More options to configure the software to install

In particular, the software section of the configuration schema received a couple of extra features in this version, such as the ability to define additional repositories and the possibility to ignore optional dependencies, installing only those packages that are strictly required.

See the documentation of the corresponding section to learn more about the new extraRepositories and onlyRequired configuration options.
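
As a rough sketch of how this could look in a profile (the exact structure of each repository entry may differ, so refer to the linked documentation for the authoritative schema):

"software": {
  "onlyRequired": true,
  "extraRepositories": [
    {
      "alias": "extra",
      "url": "https://example.com/repo"
    }
  ]
}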

Another aspect of Agama that unleashes its full potential when using JSON profiles is the setup of the storage devices (disks, RAIDs, partitions, LVM, etc.). In that regard, this new version adds the new keyword sort, which can be used when matching existing devices against the definitions in the JSON configuration.

That is used in the following example to indicate that an MD RAID must be created using the two biggest disks in the system.

"storage": {
"drives": [
{
"search": {
"sort": { "size": "desc" },
"max": 2,
},
"alias": "big"
}
],
"mdRaids": [
{
"devices": [ "big" ],
"level": "raid0"
}
]
}

See more examples of the syntax in the description of the corresponding pull request.

Reporting installation status via IPMI

The features discussed previously allow more flexibility for unattended and massive deployments. But in addition to configuring the installation process, it is also important to be able to monitor its progress. For that purpose, Agama 16 introduces status reporting via IPMI (Intelligent Platform Management Interface), a set of standard interfaces that, among many other things, allow system administrators to install operating systems remotely.

Now Agama can report the status of the installation process, such as STARTED, FINISHED or FAILED, to the BMC (Baseboard Management Controller). Of course, Agama's own monitoring mechanisms can additionally be used to get more fine-grained information that goes beyond the intentionally generic IPMI specification.

Reorganization of the Agama commands

And talking about Agama tools for monitoring and configuring the installation, we also decided it was time for a consistency check on our commands, especially agama config and agama profile. Compared to previous versions of Agama:

  • agama profile import is replaced by agama config generate | agama config load.
  • agama profile evaluate and agama profile autoyast are replaced by agama config generate.
  • agama profile validate is renamed to agama config validate.
  • All sub-commands use stdio in a consistent way, using a new --output option when needed.

See more information at the command-line reference.
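
In practice, the new workflow looks roughly like this; the exact arguments may differ, so consult the command-line reference linked above:

agama config generate > profile.json
agama config generate | agama config load

The first command dumps the current configuration to a file for inspection or later reuse, while the second regenerates the configuration and loads it back in one step, replacing the old agama profile import.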

Identify conflicting patterns

Although many of the improvements in Agama 16 are targeted at automated installations and advanced scenarios, we also found time to start polishing some basic aspects of the graphical web-based interface.

For example, we added a first basic mechanism to detect and fix conflicts in the selection of the software patterns to install. You can see it in action in the following screenshot.

Initial resolution of pattern conflicts

Initial support to use existing MD RAIDs from the web interface

As mentioned above, Agama allows creating very advanced storage setups combining LVM, MD RAID and other technologies. But currently only a limited subset of those options is available in the graphical interface. As a first step to expand the usefulness of that interface in advanced scenarios, Agama now offers the possibility to select any existing MD RAID device and use it for the same operations that are available for regular disks.

Using an existing MD RAID

Define the scope of network connections

Another aspect of the installation that can get really complex is the network configuration. We would need a whole series of blog posts to fully explain all the associated challenges. As a simplistic summary, let's say Agama 16 introduces two new related features in the user interface.

On the one hand, the UI now allows associating a given network connection with a fixed network interface, either by interface name or by MAC address.

On the other hand, we made the concept of "persistent" network connections visible, allowing the users to decide which connections should be used only during installation and not configured in the installed system.

Configuring network connections

Moreover, if Agama detects that no network will be explicitly configured by the installer on the target system, it now alerts the user about the implications.

More friendly experience for remote installations

And speaking of networking, one of the main features of Agama is the possibility to install remotely, using a browser to interactively control the installation process from another device. But there was a small usability problem in that scenario. At the end of the process, Agama offers the possibility to reboot into the new system, but users received no visual feedback after clicking that "reboot" button.

That is fixed now; see the corresponding pull request if you are interested in the most intricate technical details.

Page displayed while the system reboots

Check strength of the typed passwords

But that was not the only usability issue addressed in Agama 16. We also took the opportunity to pay off a historical debt: the lack of a mechanism in the user interface to discourage the use of weak passwords.

Now Agama relies on libpwquality to perform some basic checks and alerts the user if any of the provided passwords fails to meet the quality standards of that library.

Warning about a weak password

But the recent improvements go beyond Agama itself.

Installer media news: Wayland and rescue mode

We often make the distinction between Agama and the Agama Live ISO. The former is the installer application itself, while the latter refers to the Live image that can be used to boot a minimal Linux system that runs Agama and a full-screen web browser to interact with it.

Although the Agama team is not in charge of the installer media of the different (open)SUSE distributions, our Live ISO serves as a kind of reference implementation of the expected environment for executing Agama. So we decided to invest a bit into it.

First of all, we introduced the option to boot the Live ISO without executing Agama or any graphical session. We did it with the intention of mitigating the pain of those users missing the classical Rescue System that is traditionally integrated into the openSUSE installation images. But the new option is far from being a full replacement for that special system; check the corresponding pull request for more information.

The new Rescue System option

On the other hand, and thinking in the long term, we decided it is about time to jump on the Wayland boat. So we dropped X.Org and decided to rely on Wayland for running Firefox during installation.

The new installer image is still a bit rough around the edges. First of all, it is considerably bigger than the former X11-based image. And we lost some keyboard shortcuts in the process. We plan to put some work into it in the short term, but any help is greatly appreciated.

Get involved

We want Agama to be the official installer for the upcoming openSUSE Leap 16.0 and SUSE Linux Enterprise 16.0. And for that, we need the installation media to be tested in as many scenarios as possible, especially after the recent switch from X11 to Wayland. So do not hesitate to test the latest development version.

If you have questions or want to get involved further, please contact us through the Agama project at GitHub and our #yast channel at Libera.chat. Of course, we will also keep you updated on this blog. Stay tuned!

openSUSE turned 20

Last week, I was in Nürnberg for the openSUSE conference. The project turned 20 years old this year, and I was there right from the beginning (and even before that, if we also count the S.u.S.E. years). There were many great talks, including a syslog-ng talk from me, and even a birthday party… :-)

This year marks not just 20 years of openSUSE but also a major new SLES and openSUSE Leap release: version 16.0. There were many talks about what is coming and how things are changing. I already have a test version running on my laptop, and you should too, if you want to help make version 16 the best release ever! :-) Slowroll also had a dedicated talk. It is a new openSUSE variant, offering a rolling Linux distribution with a bit more stability. So it is positioned somewhere between Leap and Tumbleweed, but of course it is a bit closer to the latter.

That said, I also had a couple of uncomfortable moments. I ended up working in open source because it’s normally a place without real-world politics. In other words, people from all walks of life can work together on open-source software, regardless of whether they are religious or atheist, LGBTQ+ allies or conservatives, or come from the east or the west. And even though I agree that we are in a geopolitical situation in which European software companies are needed to ensure our digital sovereignty, it’s not a topic I was eager to hear at an open-source event. I enjoy the technology and spirit of open source, but I’m not keen on the politics surrounding it, especially at this time of geopolitical tensions.

As usual, I delivered a talk on log management, specifically about message parsing. While my configuration examples came from syslog-ng, I tried to make sure that anything I said could be applied to other log management applications as well. I also introduced my audience to sequence, which allows you to create parsing rules to parse free-form text messages: https://github.com/ccin2p3/sequence-RTG. In the coming weeks, I plan to package it for openSUSE.

Happy birthday to openSUSE, and here’s to another successful 20 years!