The syslog-ng Insider 2025-06: arm64; PAM; testing
The June syslog-ng newsletter is now online:
- Installing nightly syslog-ng arm64 packages on a Raspberry Pi
- Working with One Identity Cloud PAM Linux agent logs in syslog-ng
- Testing the new syslog-ng wildcard-file() source options on Linux
It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2025-06-arm64-pam-testing

Tumbleweed – Review of the weeks 2025/23 & 24
Dear Tumbleweed users and hackers,
I’m again spanning two weeks, as the Swiss summer weather prevents me from working too long on Friday afternoons (plus a couple of holidays on Thursdays), and thus I miss writing the reviews. However, the changes seem manageable, and it should be possible to provide you with an overview of what has happened and what is to come. After the infrastructure issues mentioned in my last review, we could increase the cadence a bit again and have managed to publish 8 snapshots in the past two weeks (0531, 0601, 0602, 0604, 0605, 0606, 0610, and 0611).
The most relevant changes delivered as part of those snapshots were:
- Mozilla Firefox 138.0.4 & 139.0.1
- cURL 8.14.0 & 8.14.1
- GCC 14.3.0
- GNOME 48.2
- Qt 5.15.17 & Qt 6.9.1
- Samba 4.22.1 & 4.22.2
- libzypp: enable curl2 and parallel download by default
- Linux kernel 6.15.0 & 6.15.1
- LLVM 20.1.6
- Systemd 257.6
- KDE Gear 25.04.2
- GStreamer 1.26.2
- util-linux 2.41
- VirtualBox 7.1.10
- Mesa 25.1.3
- PostgreSQL 17.5
- MariaDB 11.8.2
- SQLite 3.50.1
I admit, the list became longer than I had expected at first. As usual, Tumbleweed does not stop here, and the next snapshot is already being tested with more requests submitted and piled up. The changes we can predict for the foreseeable future are:
- QEMU 10.0.2
- Linux kernel 6.15.2
- Go 1.25
- KDE Plasma 6.4 (6.3.91 is currently being tested)
- Using grub2-bls as default bootloader on UEFI systems
- GCC 15 as distro compiler, see https://build.opensuse.org/staging_workflows/openSUSE:Factory/staging_projects/openSUSE:Factory:Staging:Gcc7
- CMake 4.0 is not yet submitted, but please help fix issues; see https://lists.opensuse.org/archives/list/factory@lists.opensuse.org/thread/FHM4V3PGI3GX65LG6ZIAGJ6QQD5O57WN/
Quiz Set for Conferences
The openSUSE Project has rolled out a new web-based quiz application aimed at engaging conference attendees and open-source enthusiasts around the world.
The quiz platform, available at quiz.opensuse.org, offers a colorful, friendly interface with multiple curated challenges including “Kernel Ninja,” “Chameleon Fun for Kids!,” “The Ultimate YaST Challenge,” and an evolving “openSUSE Expert” mode. The app is designed for use at openSUSE booths during tech conferences, but it’s also accessible for daily use by the broader community.
“Quizzes used to mean reprinting a thousand sheets when there was a typo or error,” wrote Luboš Kocman in an email to the project mailing list. Putting the quiz online makes it sustainable, and far more fun.
Organizers can easily launch dedicated quiz instances for their events by submitting a pull request to the openSUSE/quiz GitHub repository. They can customize content, remove irrelevant quizzes, and avoid PR merges to keep deployments simple. Daily stats and winner selection are available via a built-in /stats endpoint, with optional /bingo functionality to ensure fairness in prize distributions, which will be offered at the openSUSE Conference.
The app supports offline use via npm start, which enables local quiz hosting over a private hotspot. Data is stored in local JSON files, allowing event organizers to restart quizzes without losing participant scores. All content is open source, with translations managed through openSUSE’s Weblate platform.
People are encouraged to contribute quiz questions and translations.
“The goal is to make the ‘Expert’ quiz never-ending and truly global,” Kocman said.
The openSUSE community plans to showcase the quiz at DevConf.cz today and at the openSUSE Conference 2025 in a couple of weeks.
sslh: Remote Denial-of-Service Vulnerabilities
Table of Contents
- 1) Introduction
- 2) Overview of sslh
- 3) Security Issues
- 4) Other Findings and Remarks
- 5) Resilience of sslh Against High Network Load Attacks
- 6) Summary
- 7) Timeline
- 8) References
1) Introduction
sslh is a protocol demultiplexer that allows different types of services to be provided on the same network port. To achieve this, sslh
performs heuristic analysis of the initial network data arriving on a
connection, and forwards all further traffic to a matching service on the
local system. A typical use case is to serve both SSL and SSH connections
(hence the name) on port 443, to accommodate corporate firewall restrictions.
In April 2025 we conducted a review of sslh, mostly due to the fact that
it processes all kinds of network protocols and is implemented in the C
programming language, which is known to be prone to memory handling errors.
For this review we looked into release v2.2.1 of
sslh. Bugfixes for the issues described in this report can be found in
release v2.2.4.
The next section provides an overview of
the sslh implementation. Section 3)
describes two security relevant Denial-of-Service issues we discovered during
our review. Section 4) discusses some
non-security relevant findings and remarks we gathered during our review.
Section 5) looks into the general resilience
of sslh against high network load attacks. Section 6) provides a summary of our assessment of
sslh.
2) Overview of sslh
sslh implements so-called probes to determine the type of
service when a new TCP or UDP session is initiated. These probes inspect the
first few bytes of incoming data until a positive or a negative decision can
be made. Once a specific service type has been determined, all following
traffic will be forwarded to a dedicated service running on localhost, without
interpreting further data. sslh only probes for those protocols that are actively configured; no other probes are invoked without need.
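To illustrate the concept, here is a minimal sketch of what such a probe can look like. This is simplified, hypothetical code rather than sslh's actual implementation, with the PROBE_* return conventions matching the snippets quoted later in this report:

#include <string.h>

#define PROBE_NEXT  0   /* data does not match this protocol, try the next probe */
#define PROBE_MATCH 1   /* data matches, forward the session to this service */
#define PROBE_AGAIN 2   /* not enough data yet, decide on the next read */

/* Sketch of an SSH probe: an SSH connection starts with the
 * identification string "SSH-" (RFC 4253). */
static int sketch_is_ssh(const char *p, size_t len)
{
    if (len < 4)
        return PROBE_AGAIN;
    return memcmp(p, "SSH-", 4) == 0 ? PROBE_MATCH : PROBE_NEXT;
}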
sslh supports three different I/O models for handling network input. The
choice of what model to use is made at compile time, which is why there
can exist multiple sslh binaries, one for each I/O flavor. The following
models exist:
- A fork model implemented in sslh-fork.c. In this model, a separate process is forked for each newly incoming TCP connection. The forked process obtains ownership of the TCP connection, handles related I/O, and exits when the connection ends. UDP protocols are not supported in this model.
- A select model implemented in sslh-select.c. In this model, file descriptors are monitored in a single process using the select() system call. This model also supports UDP protocols: for this purpose, all data originating from the same source address are considered to be part of the same session. A dedicated socket is created for each new session sslh detects.
- An implementation based on libev, implemented in sslh-ev.c. This variant outsources the I/O management details to the third-party library. It also supports UDP protocols in a similar way to the select model described earlier.
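For readers unfamiliar with the first pattern, the following is a minimal sketch of a generic fork-per-connection accept loop. This is textbook POSIX code for illustration, not sslh's actual source:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Generic fork-per-connection loop: the parent only accepts, each
 * child owns exactly one TCP connection and exits when it is done. */
static void fork_serve(int listen_fd)
{
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;                 /* e.g. EMFILE under fd pressure */
        pid_t pid = fork();
        if (pid == 0) {               /* child: handle the connection */
            close(listen_fd);
            /* ... probe the first bytes, then relay the traffic ... */
            close(conn_fd);
            _exit(0);
        }
        close(conn_fd);               /* parent keeps only the listener */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;                         /* reap any finished children */
    }
}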
The different probes implemented in sslh were one of the focus areas during
our review. sslh runs with lowered privileges and systemd hardenings
enabled, thus privilege escalation attack vectors will only have limited
impact. An area that is still important in spite of these protections is
Denial-of-Service, which we looked into as well.
3) Security Issues
3.1) File Descriptor Exhaustion Triggers Segmentation Fault (CVE-2025-46807)
As part of our investigation of Denial-of-Service attack vectors, we looked
into what happens when a lot of connections are created towards sslh and, as
a result, file descriptors are exhausted. While the sslh-fork variant
manages file descriptor exhaustion quite well, the other two variants have
issues in this area. This especially affects UDP connections, which need to be tracked at the application level, since there is no concept of a connection at the protocol level.
For each connection, sslh maintains a timeout after which the connection is
terminated if the type of service could not be determined. The sslh-select
implementation only checks UDP timeouts when there is network activity; otherwise, the file descriptors that are created for each UDP session stay
open. Due to this, an attacker can create enough sessions to exhaust the
1024 file descriptors supported by default by sslh, thereby making it
impossible for genuine clients to connect anymore.
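A more robust approach sweeps session timeouts on every iteration of the event loop, independently of network activity. A minimal, hypothetical sketch of such a sweep (illustrative types and names, not the sslh sources):

#include <time.h>
#include <unistd.h>

/* Hypothetical per-session state; sslh's real bookkeeping differs. */
struct session {
    int fd;              /* -1 marks a free slot */
    time_t last_active;
};

/* Close expired UDP sessions regardless of whether new traffic
 * arrived, so idle sessions cannot pin file descriptors forever. */
static void sweep_timeouts(struct session *sessions, size_t n, time_t timeout)
{
    time_t now = time(NULL);
    for (size_t i = 0; i < n; i++) {
        if (sessions[i].fd >= 0 && now - sessions[i].last_active > timeout) {
            close(sessions[i].fd);
            sessions[i].fd = -1;
        }
    }
}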
Even worse, when the file descriptor limit is encountered, sslh crashes with
a segmentation fault, as it attempts to dereference
new_cnx, which is a NULL pointer in this case. Therefore,
this issue represents a simple remote Denial-of-Service attack vector. The
segmentation fault also happens when the admin configures the
udp_max_connections setting (or command line switch), as the NULL pointer
dereference is reached in this context as well.
To reproduce this, we tested the openvpn probe configured for UDP. On the
client side we created many connections where each connection only sends a
single 0x08 byte.
We did not check the sslh-ev implementation very thoroughly, because it depends on the third-party libev library. The behaviour is similar to the sslh-select variant, though. UDP sockets are seemingly never closed again.
Bugfix
Upstream fixed this issue in commit ff8206f7c, which is part
of the v2.2.4 release. While the segmentation fault
is fixed with this change, UDP sockets potentially still stay open for a
longer time until further traffic is processed by sslh, which triggers the
socket timeout logic.
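Beyond the timeout handling, the general lesson from this issue is that per-session resource creation must be checked before use. A minimal, hypothetical sketch of the defensive pattern (illustrative names, not the actual sslh code):

#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical sketch: creating a per-session UDP socket fails with
 * EMFILE once the file descriptor limit (1024 by default) is reached,
 * so the result must be checked instead of being used blindly. */
static int new_udp_session_fd(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");   /* drop the session instead of crashing */
        return -1;
    }
    return fd;
}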
3.2) Misaligned Memory Accesses in OpenVPN Protocol Probe (CVE-2025-46806)
In the UDP code path of is_openvpn_protocol(), if-clauses like the following can be found:
if (ntohl(*(uint32_t*)(p + OVPN_HARD_RESET_PACKET_ID_OFFSET(OVPN_HMAC_128))) <= 5u)
This dereferences a uint32_t* that points to memory located 25 bytes after the start of the heap-allocated network buffer. On CPU architectures like ARM
this will cause a SIGBUS error, and thus represents a remote DoS attack vector.
We reproduced this issue on an x86_64 machine by compiling sslh with
-fsanitize=alignment. By sending a sequence of at least 29 0x08 bytes, the
following diagnostic is triggered:
probe.c:179:13: runtime error: load of misaligned address 0x7ffef1a5a499 for type 'uint32_t', which requires 4 byte alignment
0x7ffef1a5a499: note: pointer points here
08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08
^
probe.c:185:13: runtime error: load of misaligned address 0x7ffef1a5a49d for type 'uint32_t', which requires 4 byte alignment
0x7ffef1a5a49d: note: pointer points here
08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08 08
Bugfix
The usual fix for this problem in protocol parsing is to memcpy() the
integer data into a local stack variable instead of dereferencing the pointer
into the raw network data. This is what upstream did in commit
204305a88fb3 which is part of the
v2.2.4 release.
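As a minimal sketch of this technique (a generic helper, not upstream's exact patch), a 32-bit big-endian value can be read from an arbitrary, possibly unaligned offset like this:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* memcpy() has no alignment requirement, so this is well-defined on
 * all CPU architectures, unlike the cast-and-dereference above. */
static uint32_t read_be32(const char *buf, size_t offset)
{
    uint32_t v;
    memcpy(&v, buf + offset, sizeof(v));
    return ntohl(v);
}

With such a helper, the problematic condition from above could be written as read_be32(p, OVPN_HARD_RESET_PACKET_ID_OFFSET(OVPN_HMAC_128)) <= 5u.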
4) Other Findings and Remarks
4.1) Missing Consideration of Short Reads on TCP Streams
A couple of probes don’t consider short reads when dealing with the TCP
protocol. For example in is_openvpn_protocol() the
following code is found in the TCP code path:
if (len < 2)
return PROBE_AGAIN;
packet_len = ntohs(*(uint16_t*)p);
return packet_len == len - 2;
If less than two bytes have been received, then the function indicates
PROBE_AGAIN, which is fine. After the supposed message length has been
parsed into packet_len, the probe only succeeds if the complete message has
been received by now; otherwise the function returns 0, which equals
PROBE_NEXT.
Similar situations are found in
is_teamspeak_protocol() and
is_msrdp_protocol(). While it may be unlikely that such
short reads occur often with TCP, it is still formally incorrect and could
lead to false negative protocol detection in a number of cases.
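For illustration, a probe that accounts for short reads would treat an incompletely received message as "decide later" rather than "wrong protocol". A minimal sketch of that idea, adapting the code quoted above (our illustration, not upstream's code; the unaligned uint16_t read is kept as in the original, but see section 3.2):

if (len < 2)
    return PROBE_AGAIN;
packet_len = ntohs(*(uint16_t*)p);
if (len - 2 < packet_len)
    return PROBE_AGAIN;      /* short read: wait for the rest of the message */
return packet_len == len - 2; /* complete: match, or PROBE_NEXT if trailing data */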
Bugfix
Based on experience, upstream believes that this is not an issue in practice, since no bug reports in this area have appeared. For this reason it is not a priority for upstream at the moment.
4.2) Likelihood of False Positive Probe Results
A couple of probe functions rely on very little protocol data to come to a
positive decision. For example is_tinc_protocol()
indicates a match if the packet starts with the string " 0". In
is_openvpn_protocol() any packet that stores the
packet length in the first two bytes in network byte order is considered a
match, which is probably the case for quite a few network protocols.
Security-wise this is not relevant, because the services these packets are
being forwarded to have to be able to deal with whatever data is sent to them,
even if it is destined for a different type of service. From a perspective of
correct probe implementation it could lead to unexpected behaviour in some
situations, however (especially when a lot of protocols are multiplexed over
the same sslh port). We suggested that upstream try to base probe decisions on more reliable heuristics to avoid false positives.
Bugfix
As in section 4.1), upstream does not believe that this is a major issue for users at the moment, hence there are no immediate changes to the code base to address it.
4.3) Parsing of Potentially Undefined Data in is_syslog_protocol()
The following code is found in is_syslog_protocol():
res = sscanf(p, "<%d>", &i);
if (res == 1) return 1;
res = sscanf(p, "%d <%d>", &i, &j);
if (res == 2) return 1;
The sscanf() function does not know about the boundaries of the incoming network data here. Very short reads, like a 1-byte input, will cause sscanf() to operate on undefined data found in the buffer allocated on the heap in defer_write().
Bugfix
For a quick bugfix we suggested explicitly zero-terminating the buffer by allocating an extra byte after the end of the payload. Running sscanf() to parse integers found in untrusted data could be considered a bit dangerous, however, so we suggested generally changing this into more careful code.
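A minimal sketch of that quick fix (illustrative code; payload and payload_len are hypothetical names, and in sslh the allocation happens in defer_write()):

/* Allocate one extra byte and zero-terminate the copied payload, so
 * that sscanf() can never scan past the end of the network data. */
char *buf = malloc(payload_len + 1);
if (buf == NULL)
    return -1;
memcpy(buf, payload, payload_len);
buf[payload_len] = '\0';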
Upstream fixed this in commit ad1f5d68e96, which is part of the v2.2.4 release. The bugfix is along the lines of our suggestion and also adds an additional sanity check for the integer which is parsed from the network data.
5) Resilience of sslh Against High Network Load Attacks
A general-purpose network service like sslh can be sensitive to resource
depletion attacks, such as the aforementioned file
descriptor exhaustion issue. The sslh-fork implementation spawns a
new process for each incoming TCP connection, which brings to mind the possibility of consuming excessive resources not only on a per-process scope, but also on a system-wide scope. By creating a large number of connections towards sslh, a “fork bomb” effect could be achieved. Even today, a “fork bomb” executed locally on Linux often renders the system inaccessible when there are no strict resource limits in place. Achieving something like this remotely would be a major DoS attack vector.
sslh-fork implements a timeout for each connection, which is based on the
select() system call. If the probing phase does not come to a decision
before the timeout occurs, then the connection is closed again. By default
this timeout is set to five seconds. Since sslh-fork creates a new process
for each newly incoming connection, sslh as a whole is not bound by the limit of 1024 open file descriptors. In theory an attacker could attempt to exceed the system-wide file descriptor and/or process limit by creating an excessive number of connections.
The default timeout enforcement of five seconds means that the attack is quite
limited, however. During our tests we were not able to create more than about
5,000 concurrent sslh-fork processes. This creates quite a bit of system
load, but does not expose any critical system behaviour on an average machine.
Even though the current situation is acceptable, offering an application-level limit for the number of parallel connections could be considered. For UDP a udp_max_connections setting exists already, but not for TCP.
Bugfix
In discussions with upstream it was agreed that proper protection from such Denial-of-Service attacks is best achieved on the administrator's side, who can for example configure Linux cgroup constraints. Upstream is still considering adding a tcp_max_connections setting to limit the maximum number of parallel TCP connections in the future.
6) Summary
Overall we believe sslh is in good shape. There is little attack surface, and
hardenings are in place by default. With the two remote DoS vectors 3.1) and
3.2) fixed, it should be safe to use sslh in
production. Users who are worried about more complex DoS attacks should additionally consider customizing their setup to enforce resource consumption limits at the operating system level.
There is some danger of false positive or false negative probe outcomes as outlined in sections 4.1) and 4.2). These do not seem to have occurred much in practice yet, and they represent a trade-off in favour of simplicity and efficiency in the current implementation of sslh.
7) Timeline
| 2025-04-25 | We privately reported the findings to the author of sslh by email, offering coordinated disclosure. |
| 2025-05-06 | We discussed details about the reported issues and possible CVE assignments. The issues were kept private for the time being. |
| 2025-05-08 | We assigned two CVEs from our pool for the issues and shared them with upstream. |
| 2025-05-25 | The upstream author informed us about bugfixes that had already been published in the sslh GitHub repository and about an upcoming release containing the fixes. |
| 2025-05-28 | Release v2.2.4 containing the fixes was published. |
| 2025-06-13 | We published this report. |
8) References
- sslh GitHub project
- SUSE Bugzilla review bug for sslh
- sslh release v2.2.1 (reviewed version)
- sslh release v2.2.4 (fixed version)
Asia Summit Call For Host is Here
openSUSE.Asia Summit 2026: Call For Host
The openSUSE.Asia Summit is an annual conference that brings together openSUSE contributors and enthusiasts from across Asia. It serves as a valuable platform for face-to-face collaboration, knowledge sharing, and community building. The summit primarily highlights the openSUSE distribution, exploring its applications in both personal and enterprise environments, while also promoting the broader open source culture.
As part of its mission to promote openSUSE across Asia, the openSUSE.Asia Summit Organizing Committee invites local communities to take on the exciting challenge of hosting the 2026 summit. The committee is committed to supporting you every step of the way to ensure a successful and impactful event.
Important Dates
- August 1: Deadline for application
- August 30: Presentation at openSUSE.Asia Summit 2025
- October 31: Announcement of the following host
We will invite you to join our regular online meetings, giving you the opportunity to observe and learn the process of organizing the event. Additionally, you will be expected to present your proposal at the upcoming summit in Faridabad, India. The submitted proposals will be reviewed by the organizing committee. During this process, the committee may reach out with additional questions or requests for further information.
How to Submit?
Please email your proposal to both summit@lists.opensuse.org and opensuseasia-summit@googlegroups.com. Because the former address does not allow attachments, you need to upload your proposal somewhere and share the link to it.
The proposal should contain:
- Venue
- How to reach your city and venue
- Budget Estimation
  - Conference Venue
  - T-shirt
  - Tea break, Lunch, Dinner, Conference Tour, etc.
- Introduction to your community that will organize the summit
Please refer to openSUSE.Asia Summit Tips for Organizers before writing your proposal.
We are looking forward to hearing from you soon!
Log Detective: Google Summer of Code 2025
I'm glad to say that I'll participate again in GSoC, as a mentor. This year we will try to improve the RPM packaging workflow using AI, as part of the openSUSE project.
So this summer I'll be mentoring an intern who will research how to integrate Log Detective with openSUSE tooling to improve the packager workflow for maintaining RPM packages.
Log Detective
Log Detective is an initiative created by the Fedora project, with the goal of
"Train an AI model to understand RPM build logs and explain the failure in simple words, with recommendations how to fix it. You won't need to open the logs at all."
As a project promoted by Fedora, it is highly integrated with the build tools around this distribution and RPM packages. But RPM packages are used in many different distributions, so this "expert" LLM will be helpful for everyone doing RPM, and everyone doing RPM should contribute to it.
This is open source, so if, at openSUSE, we want to have something similar to improve the OBS, we don't need to reimplement it; we can collaborate. And that's the idea of this GSoC project.
We want to use Log Detective, but also contribute failures from openSUSE to improve the training and the AI. This should benefit not only openSUSE but also Fedora and all other RPM-based distributions.
The intern
The selected intern is Aazam Thakur. He studies at the University of Mumbai, India. He has experience with SUSE, having worked on RPM packaging for SLES 15.6 during a previous summer mentorship at the Open Mainframe Project.
I'm sure that he will be able to achieve great things during these three months. The project looks very promising, and it's one of the areas where AI and LLMs will shine: digging into logs is always difficult, and an LLM trained with a lot of data can be really useful for categorizing failures and giving a short description of what is happening.
SELinux: finding an elegant solution for emulated Windows gaming on Tumbleweed
Table of Contents
- 1) Overview
- 2) Introduction to SELinux
- 3) The problem with emulating Windows games
- 4) Finding an elegant solution
- 5) Closing Remarks
1) Overview
OpenSUSE Tumbleweed recently switched to using SELinux by default. While generally well received, this change caused problems in particular when playing Windows games through Proton or Wine. This post will provide context and introduce the solution the openSUSE SELinux team came up with.
Section 2 gives an overview of SELinux and introduces the primitives necessary to understand the issue and solution. Section 3 takes a closer look at the root cause of the problem and the manual steps needed to work around the issue in the past. Section 4 discusses the requirements for a better solution and how it was implemented in the end. Section 5 closes with information on how to report SELinux bugs and how to reach the openSUSE SELinux team.
2) Introduction to SELinux
OpenSUSE Tumbleweed switched to SELinux as the default Mandatory Access Control mechanism for new installations in February 2025.
The central reason for the change was that we consider SELinux the more encompassing solution: security problems with a program do not pose a threat to the whole system; rather, a compromise can be confined to the affected program or daemon.
SELinux provides a powerful and detailed language to describe expected application behaviour. It allows confining a process, referred to as an SELinux domain, by limiting access to required system resources and describing the interaction with other domains. A large catalog of domains is already available via the upstream SELinux policy.
SELinux booleans
Common behaviour of a piece of software might be allowed by default for the domain, but very specific scenarios might be prohibited, especially when negatively impacting security. SELinux booleans provide a way for the user to enable such optional functionality in the SELinux policy.
To give an example: the Apache HTTP daemon is used to serve web pages. In certain situations these web pages might need to be stored in the user’s home directory, but by default it is not advisable for a network-facing daemon to have access to home directories. To address these different usage scenarios, a boolean called httpd_enable_homedirs exists. The user can turn on the boolean if the HTTP daemon needs to access the home directories of users to serve web pages.
3) The problem with emulating Windows games
Playing Windows games on Linux with SELinux enabled did not work without manual intervention by the user.
This is related to the way Windows libraries have been developed and are used by emulation software.
To allow software that emulates Windows games to work, for example Steam with Proton or Lutris with Wine, a boolean called selinuxuser_execmod needs to be enabled:
sudo setsebool -P selinuxuser_execmod 1
But enabling this boolean has consequences for the general security of the system.
The user_selinux manpage states for selinuxuser_execmod:
If you want to allow all unconfined executables to use libraries requiring text relocation that are not labeled textrel_shlib_t, you must turn on the selinuxuser_execmod boolean.
But why exactly is the boolean problematic, and why did it require a manual change before? An executable stack is used by attackers as a building block in their exploitation techniques, and a lot of research went into finding mitigation strategies that make it harder for malicious actors to run successful exploits. One central measure was executable-space protection, and text relocation touches a part of that mitigation. If the boolean is enabled, it allows modification of the executable code portions of the affected libraries, and could result in successful exploitation of the processes using these libraries.
4) Finding an elegant solution
OpenSUSE Tumbleweed is a general-purpose Linux distribution, targeting a multitude of use cases, be it as a server, running on embedded devices, as a container host, or as a desktop system. Some Tumbleweed users require their desktop system to run emulation software for Windows games.
In general we try to take a Secure by Default approach when making decisions affecting security. For openSUSE Tumbleweed we decided to disable selinuxuser_execmod by default, because we think it poses a risk to the security of the system if all unconfined executables can use libraries with text relocation.
In software security we usually want to make it as hard as possible for malicious actors to exploit a target. Accomplishing this feat is not easy, because some attack scenarios rely on normal system behavior that can be used or exploited by attackers. An approach to mitigate this in defensive software security is a concept known as Defense in Depth, where different protective mechanisms are used to provide a layered defense, making a successful exploit as hard as possible.
A central requirement for a solution was not to cause a negative impact on the security of other use cases that do not require emulation of Windows games. Enabling selinuxuser_execmod by default for all Tumbleweed installations was not an option: it would take away a protection mechanism and therefore weaken the Defense in Depth approach.
Manually setting the boolean was needed to get the emulation layer for Windows to function properly, and arriving at that solution required a certain level of familiarity with SELinux administration. A transparent but selective solution that needs no intervention from the user would be ideal.
Implementation
We decided to introduce a new dependency to packaged gaming software in openSUSE Tumbleweed.
If a user installs the RPM version of Lutris or Steam, the RPM selinux-policy-targeted-gaming will now be installed as well, enabling the boolean on the user’s system automatically.
This solution improves usability for the users who install gaming software
and does not compromise the security of other use cases of the distribution.
A user preferring the Flatpak versions of Steam or Lutris can manually install the new package:
sudo zypper in selinux-policy-targeted-gaming
As we do not control the Flatpak applications, we cannot add any dependencies to them. As an alternative, the user can also still set the boolean manually.
5) Closing Remarks
The openSUSE SELinux team is committed to keeping openSUSE users safe with SELinux, and to fixing problems that SELinux may cause to the community. To facilitate changes with SELinux we rely on users to work with us and provide feedback, so that we understand what the current problematic areas are. If you encounter problems with SELinux feel free to open a bug or reach out over the mailing list.
Tackling performance issues caused by load from bots
In recent months, I observed an increase in performance issues with partial short outages, particularly of web applications performing expensive operations such as database or shell queries. The origin was always easy to map to a number of requests larger than what some backend applications are able to handle. Whilst part of the requests do originate from legitimate users, a large share is found to originate from obscure sources - particularly AI related crawlers seem to dominate. Whereas traditional search engine crawlers, which we do encourage to scan our websites to allow for more users to find them, scan with few requests spread over a long time frame, these new crawlers tend to issue thousands of requests, sometimes in less than a day. With multiple companies pursuing the same practices, this quickly adds up to requests, and subsequently load, which is not sensible to scale for, particularly given the lack of obvious benefit for the general public.
Over time I implemented various measures to reduce the number of undesired requests based on the observed patterns, whilst aiming to maintain a stable experience for legitimate requests. These measures include rate limiting (with more fine-grained limits for particularly "expensive" sites and paths), wide blocking of source networks from cloud providers and AI related companies, blocking of user agent patterns and blocking of "dumb" requests (for example, we stopped routing requests targeting various script file types to backends which do not speak the matching language). Monitoring did show these measures to help with reducing the immediate request load, however new patterns quickly emerged. A new phenomenon is large numbers of requests spread over a large number of different source networks. Especially with source networks identifying as serving residential traffic, blocking is not possible without risking the lockout of legitimate users. A new method needed to be found.
Of course, we are not the only organization affected by this. The recent influx of AI related crawlers impacting web services caused various operators to implement additional protections, and the ones most visible to users are challenge websites, making the user land on an intermediate page before being redirected to the desired location. Whilst these come in various forms, I mostly observe ones asking for a captcha and ones computing a proof-of-work task in the client. The latter became particularly popular with the release of Anubis [0], an open source software making it easy for operators to equip their website with a proof-of-work challenge protection. Anubis reached a certain level of fame through big websites deploying it and tech related news outlets talking about it. Most naturally, I looked into Anubis as a solution for our situation as well. The proof-of-work concept was particularly interesting, as automated challenges are less annoying to users and have fewer accessibility concerns than manual captcha based ones.
As for Anubis, it acts as a reverse proxy and serves a pre-defined challenge website. It also ships with excludes for known-good search engine crawlers.
In our setup, which consists of internet-facing HAProxy servers routing traffic to backend application/web servers, this would introduce another proxy that traffic would flow through. Upon discussion with @darix, we figured it would be beneficial to instead utilize SPOE, the HAProxy Stream Processing Offload Engine, to "ask" Anubis to challenge problematic clients, but then to pass the result back to HAProxy to directly route the traffic as before. Following the upstream discussion we initiated, I prepared a patch for this [1] - as I was idling for a while before opening a PR, someone else picked up the work and improved upon it, bringing the implementation to a usable shape [2] - however, it has not yet been completed and merged at the time of writing. More importantly, also as part of the upstream discussion, a user suggested swapping out the Go library I used for the SPOP implementation in Anubis with a more performant one [3] - haproxy-go [4]. The same comment [3] led me to discover that the same user had developed a software similar to Anubis, which already implements the suggested library and specifically targets HAProxy native deployments: Berghain [5]. Whilst the user experience is similar to Anubis - one gets served a challenge website to complete an automated proof-of-work computation before being redirected to the desired location - the background implementation is different. It operates tightly integrated with HAProxy by utilizing the SPOE - first to construct a challenge for clients (that is, if a client is intended to be challenged, which is decided using standard HAProxy ACLs), then to verify the challenge response, which is stored in a cookie on the client. The challenge page (a combination of HTML, CSS and JS) itself is served directly by HAProxy from memory.
This seemed like what we were looking for:
- no additional reverse proxy, preservation of existing HAProxy based routing
- decision which clients to challenge using HAProxy ACLs, which we already use in our setup and can easily extend upon
- can be configured to not impact web service availability if the challenge service is offline
- easy branding using HTML + SCSS (Anubis in its default build does not allow for any branding - however, they seem to have a version for paying customers and open source projects which allows swapping the imagery)
The project seemed to be at an early stage, with not much activity compared to Anubis; however, initial testing seemed promising. After I opened an issue about a minor flaw, the upstream maintainer messaged me - as it turns out, they have similar ideas and are very nice to chat with. Over the last few days, various improvements landed in Berghain - I contributed some patches [6], which were pleasantly reviewed and integrated, and the upstream maintainer helped as well, answering questions in chat and solving bugs [7].
For branding, I made a fork [8] in which the sources of the web page are modified. An upstream discussion to decouple this, allowing theming to reside separately, was started [9], but ideas on achieving this are still pending. As the web sources are not expected to change often, the maintenance effort should not be too bad for the time being - I rebase our branch on the upstream one when there are changes, and add the customizations as a patch in our package [10].
With this, all seems to be set for deployment. However, there were some challenges (pun intended) which had to be considered:
- users should not be unnecessarily "annoyed"
  => cover websites and paths only selectively when there is need for additional protection due to excess application load
- the challenge requires JavaScript; don't unnecessarily harm users who do not have JavaScript enabled, and, most importantly, don't break legitimate command line tooling and scripts
  => cover only websites and paths which would require JavaScript anyways (i.e. no API paths)
- if the configured validity period expires while the user is filling out a form, accidental form resubmission might be triggered
  => cover only GET requests
- legitimate search engine crawlers should not be inhibited
  => adapt lists of user agent + source network combinations from Anubis
These points were easily solved using HAProxy ACLs. Of course, the exemptions also leave more room for malicious actors to work around the protection. Whilst this is a concern, most bots are found to be "dumb", hence the enablement can be expected to significantly help with the current situation even with the constraints at hand. Over time, solutions allowing for tighter limitations might be investigated and developed. Particularly interesting was a discussion with the maintainer of Berghain which brought up some ideas to challenge clients without JavaScript, however there is no concrete plan for this yet.
With all that being said, the protection has now been deployed and enabled for two services [11] - including progress.opensuse.org, where you are reading this article right now. Enablement for more services will follow over time as needed.
With all the considerations which went into this implementation, I hope for the impact on legitimate users to be minimal. If you notice any undesired breakage as a result of this nonetheless, please do open an issue in our tracker [12] explaining the circumstances, and I will try to work out a solution.
[0] https://anubis.techaro.lol
[1] https://github.com/TecharoHQ/anubis/issues/236#issuecomment-2784919382
[2] https://github.com/TecharoHQ/anubis/pull/460
[3] https://github.com/TecharoHQ/anubis/issues/236#issuecomment-2801861198
[4] https://github.com/DropMorePackets/haproxy-go
[5] https://github.com/DropMorePackets/berghain
[6] https://github.com/DropMorePackets/berghain/issues?q=author%3Atacerus
[7] https://github.com/DropMorePackets/berghain/commit/6080b227008a759c267a973202cf2b4edff38e31, https://github.com/DropMorePackets/haproxy-go/commit/c1707895ddabaa9c11d4e0b99e2cba040a0a3330
[8] https://github.com/openSUSE/berghain
[9] https://github.com/DropMorePackets/berghain/issues/26
[10] https://build.opensuse.org/package/show/openSUSE:infrastructure/berghain
[11] https://progress.opensuse.org/projects/opensuse-admin/repository/salt/revisions/3257c222f1c92c96c1d3caaeb7c14604fefad54a
[12] https://progress.opensuse.org/projects/opensuse-admin/issues (in case of issues with using the tracker directly, create a ticket via admin@o.o)
Edit after 1 day:
it was suggested to attach some graphs showing how the load went down after this deployment - here are the graphs behind progress.o.o as an example (times in the graphs are in CEST):


Improvements To RPM Lint Results and Reviewing Submit Requests
Speakers Set Course for openSUSE Conference
The openSUSE Conference 2025 in Nuremberg from June 26 - 28 is shaping up to be a great gathering for the open source software community.
There are three packed days of presentations, workshops and discussion along with three keynotes.
This year’s conference features SUSE CEO Dirk-Peter van Leeuwen, who will recognize the openSUSE community’s 20-year journey. Peer Heinlein, who founded the Heinlein Group, which includes companies like Heinlein Support, mailbox.org, OpenTalk, and OpenCloud, will give another keynote on the same day; his talk will focus on the risks users face when using proprietary software. Another keynote, from Tropic Square’s CEO Jan Pleskač, will spotlight the growing need to extend open source hardware.
The conference offers a broad look at where openSUSE is heading, what challenges are emerging for the project’s development, and how open-source communities can resolve them.
There are several sessions drawing attention, like “Public Money? Public Code!” and a series of presentations addressing Cyber Resilience Act (CRA) and Network and Information Security 2 Directive (NIS2) readiness. These sessions will explore how European cybersecurity regulations are impacting small to medium open-source vendors and what steps are needed to align with the evolving legal landscape.
On the technical side, integration and automation sessions continue. One talk will demonstrate how Uyuni can be tightly woven into existing infrastructure management tools like Ansible and Terraform. Another session will unveil a tool called container-snap, a prototype designed to bring atomic OS updates through OCI images, which helps eliminate the risk of broken upgrades.
The Leap 16.0 Beta will have a dedicated session, and the future of SUSE Linux Enterprise will be discussed in a talk titled “From ALP to SLES16”.
Workshops on LLMs will show how to run large language models locally and turn them into functional agents, and a popular penguin AI project called Kowalski should capture some attention at the conference.
Underlying many talks is a shared urgency around user empowerment. The “End of 10 Install Workshop” sessions aim to encourage users to install openSUSE on aging or repurposed hardware, in light of Microsoft’s end-of-life date for Windows 10.
The full schedule of the openSUSE Conference 2025 is available at events.opensuse.org.