

Central log collection - more than just compliance

I often hear, even at security conferences, that “there is no central log collection here” or that “we have something due to compliance”. Central logging is more than just compliance: it makes logs easier to use, more available, and more secure, making your life easier in operations, security, and development, but also in marketing, sales, and so on.

What are logs and what is central log collection?

Most operating systems and applications keep track of what they are doing. They write log messages. A syslog message might look similar to this:

Mar 16 13:13:49 cent sshd[543817]: Accepted publickey for toor from 192.168.97.14 port 58246 ssh2: RSA SHA256:GeGHdsl1IZrnTniKUxxxX4NpP8Q

Applications might store their logs separately and have their own log format, like this Apache access log:

192.168.0.164 - - [16/Mar/2026:13:17:01 +0100] "HEAD /other/syslog-ng-insider-2026-03-4110-release-opensearch-elasticsearch/ HTTP/1.1" 200 3764 "-" "SkytabBot/1.0 (URL Resolution)"
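Log lines like these are regular enough to parse programmatically. As a minimal Python sketch (using a shortened request path, purely for illustration), a few fields can be pulled out of an Apache combined-format access log line like the one above:

```python
import re

# Regular expression for the Apache "combined" access log format:
# host ident authuser [time] "request" status size "referrer" "user-agent"
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\]'
    r' "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
    r' "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('192.168.0.164 - - [16/Mar/2026:13:17:01 +0100] '
        '"HEAD /index.html HTTP/1.1" 200 3764 "-" "SkytabBot/1.0 (URL Resolution)"')

m = LOG_RE.match(line)
if m:
    # → 192.168.0.164 200 SkytabBot/1.0 (URL Resolution)
    print(m.group("host"), m.group("status"), m.group("agent"))
```

Once fields are extracted like this, filtering by client address, status code, or user agent becomes trivial.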

Central log collection simply means that log messages are collected at a central location instead of, or in addition to, being saved locally.
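With syslog-ng, for example, forwarding logs to a central server takes only a few lines of configuration. Here is a minimal sketch, assuming a hypothetical central server named logserver.example.com (the actual hostname, port, and file layout are up to you):

```
# Client side: forward local logs to the central server.
source s_local { system(); internal(); };
destination d_central { network("logserver.example.com" transport("tcp") port(514)); };
log { source(s_local); destination(d_central); };

# Server side: accept remote logs and write one file per sending host.
source s_net { network(transport("tcp") port(514)); };
destination d_file { file("/var/log/remote/${HOST}.log" create-dirs(yes)); };
log { source(s_net); destination(d_file); };
```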

In this blog, we take a look at what the ease of use, availability, and security of central log collection mean for you.

Ease of use

If you have a single computer in your organization, finding a log message about an event on that computer takes some time. Once you have two computers, you have to check both to find that event. It might take twice as long, but it is still easier than implementing central log collection. Not to mention the question of which one would be the central computer. :-)

Once you have a network of 10 computers, logging in to each of them to find a log message about an event becomes a huge overhead. It is still doable, but even in the short term, implementing central log collection is already a lot easier than looking at the logs on the machines where they were created.

On a network of 100 computers, it is practically impossible for security or operations teams to find relevant logs, unless the logs are collected centrally.

Availability

Collecting logs centrally means that log messages are available even when the sending machine is down. If you want to know what happened, you do not have to get the machine up and running again; you can check the logs at the central location. If you see signs of a hardware failure, you can arrive with a spare part immediately, reducing the time and effort needed to repair the machine.

Security

When a computer is compromised, log messages are often altered or deleted completely. However, this tactic only works against logs stored locally. Collecting logs at a central location allows you to use the unmodified logs to figure out how the compromise happened.

What is next?

It is time to introduce central logging to your organization if you have not done it yet. Of course, I am a bit biased, but syslog-ng is the perfect tool to do so. You can get started by reading or watching the syslog-ng tutorial at https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/.

syslog-ng logo

Originally published at https://www.syslog-ng.com/community/b/blog/posts/central-log-collection---more-than-just-compliance


GNOME 50 Wallpapers

GNOME 50 just got released! To celebrate, I thought it’d be fun to look into the background (ding) of the newest additions to the collection.

While the general aesthetic remains consistent, you might be surprised to see the default shifting from the long-standing triangular theme to hexagons.

Default Wallpaper in GNOME 50

Well, maybe not that surprised if you’ve been following the gnome-backgrounds repo closely during the development cycle. We saw a rounded hexagon design surface back in 2024, but it was pulled after being deemed a bit too "flat" despite various lighting and color iterations. We’ve actually seen other hex designs pop up in 2020 and 2022, but they never quite got the ultimate spot as the default. Until now.

Hexagon iteration 1 Hexagon iteration 2 Hexagon iteration 3 Hexagon iteration 4

Beyond the geometry shift of the default, the Symbolics wallpaper has also received its latest makeover. Truth be told, it hasn’t historically been a fan favorite. I rarely see it in the wild in "show off your desktop" threads. Let's see if this new incarnation does any better.

Similarly, the glass chip wallpaper has undergone a bit of a makeover as well. I'll also mention a… let's say less original design that caters to the dark theme folks out there. While every wallpaper in GNOME features a light and dark variant, Tubes comes with a dark and darker variant. I hope to see more of those "where did you get that wallpaper?" on reddit in 2026.


My new toy: AI first steps with the HP Z2 Mini

In the past few weeks, I installed five different operating systems on my latest toy: an AI workstation from HP. I love playing with OSes, but my main goal with the new machine is to learn various aspects of AI. I took my first steps in this adventure on Windows.

Of course, you might ask: why on Windows? Well, it’s easy: because it’s easy… :-) There is nothing to install or configure there, as Windows has multiple built-in apps that support AI and can utilize the NPU (hardware-accelerated AI) support of the AMD Ryzen 395 chip. The Windows Task Manager already has a graph for NPU utilization, and I found good use for it right away: I kept Task Manager open on the NPU tab while I conducted my experiments, allowing me to verify that the NPU was indeed in use as I ran a couple of generative AI tasks.

I find it somewhat surprising, but Paint now also comes with integrated AI support. Of course, you must pay for image generation, but generating simple graphics (or “stickers”, as they are called in Paint) is free. If the prompt field is empty, its placeholder text suggests “a cat wearing sunglasses”, so that’s what I generated. It has no practical use for me, but it looks fun regardless… :-) You can see an example in the first figure.

Windows Paint with AI

It’s only slightly crazier that the simplest Windows app, Notepad, now also has AI features. I asked Notepad to write me a three-paragraph-long blog about writing a blog with AI in a boring, formal style. It completed the task successfully: the result is pretty boring and full of bullshit. Here it is:

The integration of artificial intelligence into the realm of content creation, particularly in the sphere of blog writing, presents a multitude of advantages that merit serious consideration. Firstly, AI-driven tools offer an unprecedented level of efficiency, allowing content creators to produce substantial volumes of text without the need for exhaustive manual input. This technological boon is particularly beneficial for those with time constraints or heavy workloads, as it liberates them from the mundane task of typing and editing.

Moreover, the precision and consistency of AI in language generation cannot be understated. By leveraging advanced algorithms, AI systems can maintain a uniform tone and style throughout a piece, which is a common challenge for human writers. This consistency ensures that the content is cohesive and that the voice of the piece remains unaltered, thereby enhancing the overall reading experience.

Lastly, the ability of AI to learn from vast datasets and adapt to various writing styles is a significant asset. It empowers writers to explore new topics and styles with confidence, knowing that the AI can provide a solid foundation upon which to build. This adaptability not only streamlines the writing process but also encourages creativity and innovation in content creation.

Windows Notepad with AI

I also wanted to try a controversial Windows feature: Recall. Well, it does not work. When I started it, I got a nice error message stating that it needs Secure Boot. Linux requires it to be turned off, so I cannot test it now. But I must admit that I do not mind that… :-)

Windows Recall needs Secure Boot

If everything goes well, I’ll take my first steps next week to enable hardware-accelerated AI under Linux.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog through one of the contact channels listed in the upper right corner. You can read the rest of the blogs under the toy tag.

the avatar of Nathan Wolf

Kontainer | Distrobox Container Manager Built for KDE Plasma

Kontainer is a KDE-native GUI for managing Distrobox containers, enhancing the user experience by simplifying container management. It integrates well with the desktop environment and facilitates the installation and operation of applications from different Linux distributions. While similar to BoxBuddy, Kontainer offers a more integrated feel, though it lacks a feature for directly running applications.

the avatar of openSUSE News

openSUSE Releases Updated Legal Classification Model

the avatar of Zoltán Balogh

LogAI

The Idea

After building RAG systems for Bugzilla and Redmine data, the next obvious candidate was sitting right there on every Linux machine: the system journal. Instead of grepping through journalctl output or staring at walls of log text, why not just ask plain English questions like “What went wrong last night?” and get a useful answer?

The constraints were the same as always: fully local, no cloud services, no API costs, runs on my openSUSE machines.
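The journal is easy to feed into such a pipeline because journalctl can emit structured output (`journalctl -o json`, one JSON object per line). As an illustration only (not the actual LogAI implementation), a minimal Python sketch of flattening one entry, using a made-up sample and the standard journal field names, into a text chunk suitable for a local RAG index:

```python
import json
from datetime import datetime, timezone

# One entry as emitted by `journalctl -o json` (sample data, made up for illustration).
sample = json.dumps({
    "__REALTIME_TIMESTAMP": "1773741229000000",  # microseconds since the epoch
    "_SYSTEMD_UNIT": "sshd.service",
    "PRIORITY": "6",
    "MESSAGE": "Accepted publickey for toor from 192.168.97.14",
})

def to_document(line: str) -> str:
    """Flatten one journal entry into a plain-text chunk for a local index."""
    entry = json.loads(line)
    ts_us = int(entry.get("__REALTIME_TIMESTAMP", "0"))
    when = datetime.fromtimestamp(ts_us / 1e6, tz=timezone.utc).isoformat()
    unit = entry.get("_SYSTEMD_UNIT", "unknown")
    return f"[{when}] {unit}: {entry.get('MESSAGE', '')}"

print(to_document(sample))
```

Chunks like this, carrying the timestamp and unit alongside the message, give a local embedding model enough context to answer questions such as “what went wrong last night?”.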

the avatar of Nathan Wolf

Linux Saloon 192 | Open Mic Night

Desktop security dominated much of the conversation on this Open Mic Night. I think it’s good to keep these things in mind when we navigate the Internet and secure our information. What have you been doing in tech or Linux? HipDad early days of streaming using RealPlayer, IRC ✅StrawPoll: What are the common activities you […]


Personal Digital Sovereignty

We feel how dependencies can hurt

There is a lot of talk about digital sovereignty. Being able to act as a state or as a company is obviously important. But there are real dependencies, and given the current geopolitical dynamics, there are real risks. Unfortunately, there are no easy answers. Digital sovereignty matters, but so do stability, efficiency, and innovation. Fortunately, there are options and some good examples of how to deal with it. I collected some material in an awesome list on digital sovereignty.

While it is complex at the state level, it is merely complicated at the personal level. Reaching something like personal digital sovereignty is possible. If you are informed about the technical landscape, you probably already have a good intuition about it. You feel the pain of having to stop using a service because the provider decided to discontinue it without you having a say. You can decide whether it feels right to upload your personal diary to a server in a jurisdiction you do not control.

Free Software provides a path

There is a clear path to personal digital sovereignty. The goal is nicely expressed in KDE's mission: "A world in which everyone has control over their digital life and enjoys freedom and privacy." The path is provided by Free Software. The freedoms to use, study, share, and improve give you exactly what you need to be in control.

For software you run yourself, this works well. Running Free Software on your personal computer gives you control. It feels good. It becomes more complicated when you use services you do not and cannot run yourself. The software freedoms do not transfer easily. There are a lot of services that are largely based on Free Software, but only the service providers enjoy the freedoms, not their users. I have written about this before when working on my Fair Web Services project.

A good testament to personal digital sovereignty is the Blue Angel for software. Its focus on resource and energy efficiency is one side of responsible software use. Maybe even more important is its emphasis on user autonomy: being able to use software without ads being forced on you, being able to choose what to install, and having transparency about what you run. These are the ingredients of personal digital sovereignty.

Finding the balance

Freedom is one side, but convenience is another. Sometimes it is easier to just use something a vendor has invested heavily in providing, even if you pay with your data and some independence. It is also a question of where you spend your time: do you build something for yourself, or do you use something that already exists? And sometimes it is about the limits of what you can do yourself. Powerful tools can give you leverage so you can focus on your actual mission.

So it is also about compromise. One very important aspect for me is that I am still able to choose. That is the core of personal digital sovereignty. Sovereignty does not mean doing everything yourself. It means preserving the ability to leave, even if you choose not to.

Federated services make it easy to migrate. For git, for example, it does not matter so much where the server is or who runs it, because switching is as simple as changing the remote. For a proprietary note-taking service, this looks different. You may need special exports, format conversions, and you might lose functionality because it is not based on open standards. Choose your dependencies wisely.

It is important to remember that dependencies are not bad per se. We know this from Free Software. We know what it feels like to stand on the shoulders of giants. We rely on the collective strength of a global community. It is not about rejecting all dependencies or doing everything on your own. It is about creating alternatives and shaping an ecosystem based on openness, so that we can choose and act on our own terms.

My personal stack

I am quite happy with my personal stack, which gives me the control I need. My 12-year-old desktop runs Linux and KDE. I pay to host my own email, Nextcloud, and git services. One project I particularly like is GitJournal, which gives me control over my note-taking across all my devices. This covers the core of my computing needs, with my family, my friends, and what I decide to keep private.

To stay connected to the wider world, there is no way around being present on large networks. GitHub and LinkedIn are the compromises that give me reach without requiring me to abandon all my principles. I would not publish my writing only on LinkedIn, though, because I want to own what I produce.

AI is a difficult question right now. It is easy to switch between services, and with rapid development, the best choice changes quickly. And it can provide tremendous leverage. So it remains an evolving compromise. An ideal future would offer open models that are powerful enough to serve your needs and that you can run locally.

Building digital sovereignty

On a personal level, you can decide for yourself. There are limitations, and you will have to build on the environment available to you. But there are alternatives, and you can choose to build your personal digital sovereignty.

At the corporate and state level, it is more difficult. The systems are more intertwined, but the pain of dependencies you cannot control and the risks of others making decisions for you are just as real. Alternatives exist there as well, often the same ones available on a personal level. It can be worth taking bold decisions.

Digital sovereignty at the state level is about national security. At the personal level, it is about personal freedom. Free Software provides a powerful path to maintaining control over our digital lives.

I am not arguing for tools. I am arguing for agency.


Tumbleweed – Review of the week 2026/11

Dear Tumbleweed users and hackers,

It’s been a productive and busy week for Tumbleweed—and for openQA in particular. We threw 7 snapshots at the engines, and 6 were confirmed and published (0305, 0306, 0307, 0308, 0310, and 0311).

Snapshot 0309 was the first to include systemd 259.3, and openQA was not happy at all. The culprit turned out to be a missing sync with the SELinux policies. Once the policies were updated in snapshot 0310, openQA was (mostly) satisfied. A few additional policy tweaks were pushed via the update channel to ensure we didn’t block the snapshot pipeline any longer than necessary.

Those 6 snapshots brought you these changes:

  • bind 9.20.20
  • gstreamer 1.28.1
  • iptables 1.8.13
  • shadow 4.19.4
  • PackageKit 1.3.4
  • KDE Gear 25.12.3
  • Linux kernel 6.19.6 & kernel longterm 6.18.16
  • libvirt 12.1.0
  • GCC 16 is providing the base libraries, such as libgcc_s1. The system compiler is still version 15 for the time being
  • Pipewire 1.6.1
  • systemd 259.3
  • Mozilla Firefox 148.0.2
  • postfix 3.11.1

The future holds these changes, once they pass QA:

  • Mesa 26.0.2
  • cURL 8.19.0
  • systemd 259.4
  • Switch the default bootloader on UEFI systems to systemd-boot (aligning Tumbleweed with MicroOS)
  • GCC 16 as the default compiler
  • GNOME 50: RC is staged for QA; release planned by upstream for March 18
  • glibc 2.43: metabug: https://bugzilla.opensuse.org/show_bug.cgi?id=1257250