Building Edge AI Infrastructure with KVM, openSUSE, and Ollama
Edge AI infrastructure is transforming how we deploy machine learning workloads, bringing computation closer to data sources while maintaining privacy and reducing latency. This comprehensive guide demonstrates how to build an edge analytics platform using KVM virtualization, openSUSE Leap 15.6, K3s, and Ollama for local AI inference.
Our architecture leverages a KVM homelab infrastructure originally set up by my Google Summer of Code mentor. The setup was built to create specialized AI nodes in a distributed cluster, with Longhorn providing shared storage for models and application data. Each component is chosen for reliability, scalability, and edge-specific requirements.
Prerequisites and Architecture Overview
This setup requires:
- KVM hypervisor with existing infrastructure
- Minimum 8GB RAM per VM (16GB recommended for larger models)
- Network storage for distributed file system
- Basic Kubernetes and networking knowledge
The final architecture includes multiple specialized nodes, distributed storage, monitoring, and load balancing for high-availability AI inference.
VM Foundation Setup
Creating the Edge AI Node
Start with a clean VM deployment using established automation tools:
cd ~geeko/bin/v-i
sudo ./viDeploy -c ./VM-K3s.cfg -n edge-ai-01
System Configuration
Complete the openSUSE installation with consistent settings across all nodes:
Installation Settings:
- Keyboard: US
- Timezone: UTC
- Root password: gsoc (consistent across the cluster)
Network Configuration:
# Configure static networking
cd /etc/sysconfig/network
cp ifcfg-eth1 ifcfg-eth0
vi ifcfg-eth0
Edit network configuration:
STARTMODE=auto
BOOTPROTO=static
IPADDR=172.xx.xxx.xx/24
Set hostname and disable firewall for edge deployment:
hostnamectl set-hostname edge-ai-01
echo "172.xx.xxx.xx edge-ai-01.local edge-ai-01" >> /etc/hosts
systemctl disable --now firewalld
systemctl restart network
Essential Package Installation
Install required components for Kubernetes and distributed storage:
zypper refresh
zypper install -y open-iscsi kernel-default e2fsprogs xfsprogs apparmor-parser
systemctl enable --now iscsid
Storage Configuration for Longhorn
Prepare dedicated storage for distributed AI workloads:
lsblk
fdisk /dev/vdb
# Create new GPT partition table and primary partition
mkfs.ext4 /dev/vdb1
mkdir -p /var/lib/longhorn
echo "/dev/vdb1 /var/lib/longhorn ext4 defaults 0 0" >> /etc/fstab
mount -a
systemctl reboot
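After the node comes back up, it's worth confirming the dedicated volume was remounted. A quick check, assuming the /dev/vdb1 layout created above:
# Verify the Longhorn data disk is mounted after reboot
lsblk /dev/vdb
df -h /var/lib/longhorn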
Kubernetes Cluster Integration
Joining the Edge AI Cluster
Access your Rancher management interface to create a dedicated AI cluster:
- Navigate to the Rancher WebUI: http://172.16.200.15
- Create → Custom cluster
- Name: edge-ai-cluster
- Select the K3s version
- Copy and execute registration command:
curl -fsSL https://get.k3s.io | K3S_URL=https://172.xx.xxx.xx:6443 K3S_TOKEN=your-token sh -
Verify cluster connectivity:
kubectl get nodes
kubectl get pods --all-namespaces
Ollama Installation and Configuration
Installing Ollama
Deploy Ollama for local LLM inference:
curl -fsSL https://ollama.com/install.sh | sh
systemctl enable --now ollama
Cluster Access Configuration
Configure Ollama for distributed access:
mkdir -p /etc/systemd/system/ollama.service.d
vi /etc/systemd/system/ollama.service.d/override.conf
Add cluster binding:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Apply configuration:
systemctl daemon-reload
systemctl restart ollama
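Before moving on, a quick sanity check that Ollama now answers on the cluster-facing address rather than only on localhost (replace the placeholder IP with your node's address):
# Should return a JSON list of locally available models
curl http://172.xx.xxx.xx:11434/api/tags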
Model Deployment Strategy
Deploy models based on hardware capabilities:
# For 8GB RAM nodes - quantized models
ollama pull phi3
# For 16GB+ RAM nodes - a larger or higher-precision variant (e.g. phi3:medium)
ollama pull phi3:medium
# Verify installation
ollama list
Quantized models (e.g. q4_K_M) reduce memory usage by roughly 75% compared with 16-bit weights while maintaining acceptable performance for edge analytics.
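To confirm the node can actually serve completions, a minimal non-streaming request against Ollama's HTTP API works well; the prompt and placeholder IP below are only examples:
# Single completion request to validate local inference
curl http://172.xx.xxx.xx:11434/api/generate \
  -d '{"model": "phi3", "prompt": "Summarize edge AI in one sentence.", "stream": false}'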
Edge Analytics Platform Deployment
Repository Setup
Clone the Edge Analytics ecosystem:
git clone https://github.com/rudrakshkarpe/Edge-analytics-ecosystem-workloads-openSUSE.git
cd Edge-analytics-ecosystem-workloads-openSUSE
Configuration for Cluster Deployment
Update Kubernetes manifests for distributed deployment:
vi k8s-deployment/streamlit-app-deployment.yaml
Modify Ollama endpoint:
- name: OLLAMA_BASE_URL
value: "http://172.xx.xxx.xx:11434"
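For orientation, this variable sits in the container spec of the Streamlit deployment. The fragment below is only an illustrative sketch; the container and image names are assumptions and may differ from the repository's actual manifest:
  template:
    spec:
      containers:
        - name: edge-analytics-app        # assumed name, mirrors the deployment above
          image: edge-analytics:latest    # placeholder image
          env:
            - name: OLLAMA_BASE_URL
              value: "http://172.xx.xxx.xx:11434"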
Application Deployment
Deploy in correct order for dependency resolution:
kubectl apply -f k8s-deployment/namespace.yaml
kubectl apply -f k8s-deployment/storage.yaml
kubectl apply -f k8s-deployment/streamlit-app-deployment.yaml
kubectl apply -f k8s-deployment/ingress.yaml
Verify deployment status:
kubectl get pods -n edge-analytics
kubectl logs -f deployment/edge-analytics-app -n edge-analytics
Distributed Storage with Longhorn
Longhorn Deployment
Deploy distributed storage system:
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
Wait for all pods to be running:
kubectl get pods -n longhorn-system -w
Configure Default Storage Class
Set Longhorn as default for persistent volumes:
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
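A quick check that the patch took effect; the longhorn class should now carry the default marker:
kubectl get storageclass
# Expected: longhorn listed with "(default)" next to its name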
Multi-Node Scaling and Specialization
Additional Node Deployment
Scale the cluster with specialized nodes:
Node IP Assignment:
- edge-ai-02: 172.16.220.11
- edge-ai-03: 172.16.220.12
Node Labeling for Workload Distribution
Label nodes based on capabilities:
# GPU-enabled nodes
kubectl label node edge-ai-02 node-type=gpu-inference
# CPU-optimized nodes
kubectl label node edge-ai-03 node-type=cpu-inference
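Workloads can then be pinned to a node class with a nodeSelector. The sketch below shows the idea for the CPU-optimized pool; the image name is a placeholder and the manifest details are assumptions rather than the repository's actual deployment:
# Sketch: schedule inference pods onto CPU-optimized nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-analytics-app
  namespace: edge-analytics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-analytics-app
  template:
    metadata:
      labels:
        app: edge-analytics-app
    spec:
      nodeSelector:
        node-type: cpu-inference
      containers:
        - name: edge-analytics-app
          image: edge-analytics:latest   # placeholder
          ports:
            - containerPort: 8501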
Specialized Model Deployment
Deploy appropriate models per node type:
# GPU nodes - larger models (e.g. phi3:medium)
ssh root@172.16.220.11
ollama pull phi3:medium
# CPU nodes - smaller quantized models
ssh root@172.16.220.12
ollama pull phi3
Production Monitoring and Operations
Monitoring Stack Deployment
Deploy comprehensive observability:
kubectl apply -f k8s-deployment/monitoring.yaml
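Give the stack a moment to start, then confirm the pods are running (same namespace as used for the port-forwards below):
kubectl get pods -n monitoring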
Service Access
For development and testing access:
# Edge Analytics application
kubectl port-forward svc/edge-analytics-service 8501:8501 -n edge-analytics
# Prometheus metrics
kubectl port-forward svc/prometheus 9090:9090 -n monitoring
# Grafana dashboards
kubectl port-forward svc/grafana 3000:3000 -n monitoring
Operational Commands
Model Management:
# Check model status across cluster
kubectl exec -it daemonset/ollama -n edge-analytics -- ollama list
# Update models cluster-wide
kubectl exec -it daemonset/ollama -n edge-analytics -- ollama pull llama3:latest
Scaling Operations:
# Horizontal scaling
kubectl scale deployment edge-analytics-app --replicas=3 -n edge-analytics
# Resource monitoring
kubectl top nodes
kubectl top pods -n edge-analytics
Access Points and Integration
Service URLs:
- Edge Analytics UI: http://172.xx.xxx.xx:8501
- Rancher Management: http://172.16.200.15
- Prometheus Metrics: http://172.xx.xxx.xx:9090
- Grafana Dashboards: http://172.xx.xxx.xx:3000 (admin/admin)
Key Advantages of This Architecture
- Privacy-First: All AI inference happens locally, ensuring data never leaves your infrastructure
- Scalable: Kubernetes orchestration enables easy horizontal scaling as workloads grow
- Resilient: Distributed storage and multi-node deployment provide high availability
- Cost-Effective: Utilizes existing hardware infrastructure without cloud dependencies
- Flexible: Support for various model sizes and quantization levels based on hardware
Troubleshooting Common Issues
VM Connectivity:
virsh list --all
virsh console edge-ai-01
Kubernetes Issues:
kubectl describe node edge-ai-01
kubectl get events --sort-by=.metadata.creationTimestamp
Ollama Service Problems:
systemctl status ollama
journalctl -u ollama -f
curl http://172.xx.xxx.xx:11434/api/tags
This edge AI infrastructure provides a robust foundation for deploying local LLMs with enterprise-grade reliability, enabling organizations to leverage AI capabilities while maintaining complete control over their data and compute resources.
For advanced configurations and additional features, explore the complete repository documentation and consider integrating with external tools like vector databases for enhanced RAG capabilities.
Huge shoutout to my mentors Bryan Gartner, Terry Smith, and Ann Davis for making this setup possible.
Quick Install: Kubernetes Cluster (RKE2) with Warewulf
In a previous blog post we learned how to leverage Warewulf - a node deployment tool used in High Performance Computing (HPC) - to deploy a K8s cluster using RKE2. That deployment was described step by step, explaining the rationale behind each step.
This post supplements the previous one with a quick install guide that takes advantage of pre-built containers. With the help of these we are able to deploy an RKE2 cluster with a few simple commands.
Warewulf uses PXE boot to bring up the machines, so your cluster nodes need to be configured for this, and the network needs to be configured so that the nodes will PXE-boot from the Warewulf deployment server. How to do this and how to make nodes known to Warewulf is covered in a post about the basic Warewulf setup.
Prerequisite: Prepare Configuration Overlay Template for RKE2
K8s agents and servers use a shared secret for authentication - the
connection token. Moreover, agents need to know the host name of the
server to contact.
This information is provided in a config file: /etc/rancher/rke2/config.yaml
We let Warewulf provide this config file as a configuration overlay.
A suitable overlay template can be installed from the package warewulf4-overlay-rke2:
zypper -n install -y warewulf4-overlay-rke2
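For reference, the overlay ultimately renders a config.yaml on each agent roughly like the sketch below; the values are placeholders and the exact template shipped in warewulf4-overlay-rke2 may differ:
# /etc/rancher/rke2/config.yaml (agent) - placeholder values
server: https://k8s-server01:9345
token: <connection token>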
Set Up the Environment
First we set a few environment variables which we will use later.
cat > /tmp/rke2_environment.sh <<"EOF"
server=<add the server node here>
agents="<list of agents>"
container_url=docker://registry.opensuse.org/network/cluster/containers/containers-15.6
server_container=$container_url/warewulf-rke2-server:v1.32.7_rke2r1
agent_container=$container_url/warewulf-rke2-agent:v1.32.7_rke2r1
token="$(for n in {1..41}; do printf %2.2x $(($RANDOM & 0xff));done)"
EOF
Here we assume we have one server node and multiple agent nodes; replace the placeholders in the script above with their host names.
Lists of nodes can be specified as a comma-separated list, as a range of nodes using square brackets (for example k8s-agent[00-15] refers to k8s-agent00 through k8s-agent15), or as a combination of lists and ranges.
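As a concrete example (the host names here are just placeholders), the first two variables might be filled in like this:
server=k8s-server01
agents="k8s-agent[00-03],k8s-agent10"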
Also, the script generates a connection token for the entire cluster. This token is used to allow agents (and secondary servers) to connect to the primary server¹. If we set up the server persistently outside of Warewulf, we either need to add this token to the config file /etc/rancher/rke2/config.yaml on the server before we start the rke2-server service for the first time, or grab it from the server once it has booted.
In this example, we are using a 'short token' which does not contain
the fingerprint of the root CA of the cluster. For production environments
you are encouraged to create certificates for your K8s cluster beforehand
and calculate the fingerprint from the root CA for use in the agent token.
Refer to the appendix below for how to do so.
In case your server is already running you will have to grab the token from
it instead and set the variable token in the script above accordingly.
It can be found in the file /var/lib/rancher/rke2/server/node-token.
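In that case, the token line in /tmp/rke2_environment.sh can be replaced with something along these lines (assuming root ssh access to the server):
token="$(ssh root@<server host name> cat /var/lib/rancher/rke2/server/node-token)"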
Finally, we set the environment:
source /tmp/rke2_environment.sh
Obtain the Node Images
The openSUSE registry has pre-built images for K8s agents. Taking advantage of these removes the tedious task of preparing the base images. We pull the agent image and build the container:
wwctl container import ${agent_container} leap15.6-RKE2-agent
wwctl container build leap15.6-RKE2-agent
In the previous blog post we talked about the implications of deploying K8s servers using Warewulf. If we plan to deploy the server using Warewulf as well, we also pull the server container from the registry and build it:
wwctl container import ${server_container} leap15.6-RKE2-server
wwctl container build leap15.6-RKE2-server
Set up Profiles
We utilize Warewulf profiles to configure different aspects of the setup and assign these profiles to the different types of nodes as needed.
Set up rootfs
The settings in this profile will ensure that a rootfs type is set so
that a call to pivot_root() performed by the container runtime will
succeed.
wwctl profile add container-host
wwctl profile set --root tmpfs -A "crashkernel=no net.ifnames=1 rootfstype=ramfs" container-host
Set up Container Storage
Since the deployed nodes are ephemeral - that is, they run out of a RAM disk - we may want to store container images as well as administrative data on disk. Luckily, Warewulf is capable of configuring disks on the deployed nodes.
For this example, we assume that the container storage resides on the disk /dev/sdb, that we use the entire disk, and that we want to scrap the data at every boot.
Of course other setups are possible, check the upstream
documentation for
details.
wwctl profile add container-storage
wwctl profile set --diskname /dev/sdb --diskwipe "true" \
--partnumber 1 --partcreate --partname container_storage \
--fsname container_storage --fswipe --fsformat ext4 \
--fspath /var/lib/rancher \
container-storage
Note: When booting the node from Warewulf, the entire disk (/dev/sdb in
this example) will be wiped. Make sure it does not contain any valuable
data!
Set up the Connection Key
When we set up our environment, we generated a unique connection token which we will use throughout this K8s cluster to allow agents (and secondary servers) to connect to the primary server.
wwctl profile add rke2-config-key
wwctl profile set --tagadd="connectiontoken=${token}" \
-O rke2-config rke2-config-key
Set up the Server Profile
Here, we set up a profile which is used to point agents - and secondary servers - to the primary server:
wwctl profile add rke2-config-first-server
wwctl profile set --tagadd="server=${server}" -O rke2-config rke2-config-first-server
Set up and Start the Nodes
Now, we are ready to deploy our nodes.
Set up and deploy the Server Node
If applicable, set up the server node:
wwctl node set \
-P default,container-host,container-storage,rke2-config-key \
-C leap15.6-RKE2-server ${server}
wwctl overlay build ${server}
Now, we are ready to PXE-boot the first server.
Set up and deploy the Agent Nodes
We now configure the agent nodes and build the overlays:
wwctl node set \
-P default,container-host,container-storage,rke2-config-key,rke2-config-first-server \
-C leap15.6-RKE2-agent ${agents}
wwctl overlay build ${agents}
Once the first server node is up and running we can now start PXE-booting the agents. They should connect to the server automatically.
Check Node Status
To check the status of the nodes, we connect to the server through ssh and run
kubectl get nodes
to list the available agents and their status.
Appendix: Set Up CA Certificates and Calculate the Node Access Token
Rancher provides a script (generate-custom-ca-certs.sh) to generate the
different certificates required for a K8s cluster.
First, download this script to the current directory:
curl -sOL --output-dir . https://github.com/k3s-io/k3s/raw/master/contrib/util/generate-custom-ca-certs.sh
and run the following commands:
export DATA_DIR=$(mktemp -d $(pwd)/tmp-XXXXXXXX)
chmod go-rwx $DATA_DIR
./generate-custom-ca-certs.sh
wwctl overlay rm -f rke2-ca
wwctl overlay create rke2-ca
files="server-ca.crt server-ca.key client-ca.crt client-ca.key \
request-header-ca.crt request-header-ca.key \
etcd/peer-ca.crt etcd/peer-ca.key etcd/server-ca.crt etcd/server-ca.key"
for d in $files; do
d=${DATA_DIR}/server/tls/$d
wwctl overlay import --parents rke2-ca $d /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww
wwctl overlay chown rke2-ca /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww 0
wwctl overlay chmod rke2-ca /var/lib/rancher/rke2/${d#${DATA_DIR}/}.ww $(stat -c %a $d)
done
ca_hash=$(openssl x509 -in $DATA_DIR/server/tls/root-ca.crt -outform DER \
|sha256sum)
token="$( printf "K10${ca_hash% *}::server:"; \
for n in {1..41}; do printf %2.2x $(($RANDOM & 0xff));done )"
The certificates will be passed to the K8s nodes via the system overlay.
If you are setting up multiple K8s clusters you may want to create separate certificates for each cluster and place the overlay name into the cluster's namespace instead of using rke2-ca.
Before you delete $DATA_DIR, you may want to save the root and intermediate certificates and keys ($DATA_DIR/server/tls/root-ca.*, $DATA_DIR/server/tls/intermediate-ca.*) to a safe location.
If you have a root and intermediate CA you want to reuse, you may copy these into the directory $DATA_DIR/server/tls before running generate-custom-ca-certs.sh. Use $token as the agent token instead.
1. Secondary servers need to find the primary server endpoint just like nodes do. Therefore, they receive the access token just like nodes. ↩
Planet News Roundup
This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.
The highlights below were featured on the community’s blog feed aggregator from August 16 to 22. Some of the most recent blogs cover updates on Tumbleweed development, KDE and an extension of the Call for Hosts for the 2026 openSUSE.Asia Summit.
Here is a summary and links for each post:
openSUSE Welcome Receiving Makeover
The familiar opensuse-welcome-launcher is being phased out in favor of a new openSUSE-welcome greeter. Instead of relying on its own autostart, the greeter will coordinate desktop-specific greeters like gnome-tour and plasma-welcome. This approach allows welcome screens to appear not only on first boot but also after major updates and will help the project retire its legacy Qt5-based greeter.
DeepSeek V3.1: Why the Update Matters for the Market
A look at the newly announced DeepSeek V3.1 model from the Chinese AI startup. While technical details remain limited, the update promises a longer context window and incremental improvements at a fraction of the training cost of Western rivals. The release highlights the growing pressure on OpenAI and Anthropic as DeepSeek continues its fast-paced, cost-efficient development strategy.
Podcast Linux #27 – Slimbook One Special
The KDE Blog continues its archive project for Podcast Linux by revisiting the 27th episode, where Juan Febles reviews the Slimbook One desktop. Although the podcast is no longer active, its episodes remain a valuable resource for Linux enthusiasts.
KDE Gear 25.08 – Productivity Updates
This post highlights productivity-focused applications in the KDE Gear 25.08 release. Akonadi has seen major memory optimizations, KOrganizer introduces an improved date selector and search tooltips, and Kleopatra now offers independent encrypted notepad windows for better multitasking.
Accessibility with Free Technologies – Episode 6
The sixth episode of the podcast series explores accessibility topics with Eva Marco, Vicent Sanchis, and others. Discussions cover OpenStreetMap data visualization for accessibility, inclusive talent initiatives, testing adaptive hardware, and how AI is contributing to more inclusive design.
Cloud Containers: What They Are and How to Optimize Them
An overview of cloud container management, explaining how containers streamline deployment, reduce resource usage, and improve security. The article also details strategies for efficient management, from resource allocation to monitoring, and highlights OVHcloud’s solutions.
KDE Gear 25.08 – Itinerary Updates
KDE’s travel planner app, Itinerary, now allows manual input of bus and train journeys, provides live station maps, improved delay alerts, and expanded alternative connection searches to include ferries and flights. Enhanced live maps now display train infrastructure details via OpenRailwayMap.
Tumbleweed Review of the Week 33
This week brought eight new snapshots to openSUSE Tumbleweed, delivering updates such as Firefox 141.0.2, Linux kernel 6.16.0, KDE Plasma 6.4.4, KDE Frameworks 6.17.0, Postfix 3.10.3, OpenSSL 3.5.2, and more. Preparations are underway for upcoming updates including KDE Gear 25.08.0, kernel 6.16.1, and Python 3.13.7.
Lots of Polish – This Week in Plasma
Nate Graham’s weekly Plasma update, translated into Spanish, emphasizes bug fixes, performance improvements, and UI polish over new features. Highlights include accessibility improvements in notifications, refined Flatpak permissions, removal of outdated Breeze app icons, and numerous fixes across Plasma 6.4.5 and Frameworks 6.18.
Deadline Extended – Call for Host openSUSE.Asia Summit 2026
The deadline for proposals to host the 2026 openSUSE.Asia Summit has been extended to August 27, 2025. The selected host will be announced during this year’s Summit in Faridabad. Communities are encouraged to take this opportunity to showcase their region and bring openSUSE contributors together.
KDE Gear 25.08 – Dolphin Updates
The Dolphin file manager adds new search options with both indexed and simple search, tighter integration with Filelight for disk usage visualization, and improved view mode controls. These enhancements make Dolphin even more powerful for file management on KDE.
View more blogs or learn to publish your own on planet.opensuse.org.
Tumbleweed – Review of the week 2025/34
Dear Tumbleweed users and hackers,
Welcome back to our weekly review, where we dissect the latest snapshots to see what’s new in the openSUSE rolling release. This week has been incredibly busy, with a massive influx of updates touching nearly every part of the system, from the desktop environment and core libraries to the installer itself. Let’s get right into the details of what rolled out this week.
We published 5 snapshots (0815, 0816, 0817, 0818, and 0820) containing these changes:
- KDE Gear 25.08.0
- Linux kernel 6.16.1
- Postgresql 17.6
- SQLite 3.50.4
- Virtualbox 7.2.0
- glibc 2.42
- git 2.51.0
- hplip 3.25.6
- Introduction of opensuse-welcome-launcher: a shell script that will allow us to show different ‘welcome apps’ per desktop (for example, GNOME Tour on a GNOME desktop).
The following things are currently being tested and will reach you when things are worked out:
- Python 3.13.7
- nftables 1.1.4: earlier detected issues could be solved
- Rust 1.89
- Mesa 25.2: Xvfb crashes on 32bit systems (https://bugzilla.opensuse.org/show_bug.cgi?id=1247995)
- GNU Gettext 0.26
- Linux kernel 6.16.2
- VLC moving to ffmpeg-7 by backporting a patchset from upstream
openSUSE Welcome Receiving Makeover
The familiar openSUSE-welcome window that greets millions of desktops is nearing retirement and a new approach will soon take its place.
Rather than re-inventing the wheel, members of openSUSE’s release team have decided to tweak and refine existing solutions like gnome-tour for GNOME and plasma-welcome for KDE’s Plasma, and to add a new controller, opensuse-welcome-launcher, to coordinate them and provide desktop-specific content.
This new welcome-launcher manages which greeter to run depending on the desktop environment. This gives openSUSE’s release team more control over when and how welcome screens are shown, instead of relying on each greeter’s own autostart mechanism.
The launcher isn’t limited to the first boot. It can display greeters after major system updates so users learn about new features, enhancements, and changes in a timely way.
The rollout of this new greeter will happen in multiple phases.
1) The launcher will initially call the well-known legacy openSUSE-welcome. The only difference is that it loses the "show on next boot" checkbox, as it's no longer in charge of autostart.

2) The launcher triggers the openSUSE-branded gnome-tour and plasma-welcome while keeping openSUSE-welcome as a fallback (in case it's installed).
3) The legacy Qt5-based greeter will eventually be decommissioned once there is an agreed-upon fallback for desktop sessions without a dedicated greeter.
The phased approach allows integration with openQA testing and provides flexibility for future improvements.

Since the legacy openSUSE-welcome is one of the last Qt5-dependent applications, the move will help phase out some remaining Qt5 components across the distribution.
Tumbleweed – Review of the week 2025/33
Dear Tumbleweed users and hackers,
Week 33 went without major hiccups for Tumbleweed snapshot building and testing, and so it happens that we released once again 8 Snapshots in a week (0807…0814).
The changes are numerous, most interestingly:
- Mozilla Firefox 141.0.2
- KDE Plasma 6.4.4 & KDE Frameworks 6.17.0
- python cryptography 45.0.5
- Postfix 3.10.3
- openSSL 3.5.2
- gnu gettext 0.25.1
- NetworkManager 1.52.1
- GTK 3.24.50
- Linux kernel 6.16.0
- GStreamer 1.26.5
- PHP 8.4.11
- qemu 10.0.3
- Bash 5.3.3
- Readline 8.3.1
- python PIP 25.2
- python pytest 8.4.1
A few things are still in the staging areas – either submitted recently or having issues passing QA; most interestingly, we have these changes lined up:
- KDE Gear 25.08.0
- Linux kernel 6.16.1
- Rust 1.89: build fix for 389-ds pending
- glibc 2.42: yast2/text mode installation looks ‘broken’ (missing blue background)
- Mesa 25.2: Xvfb crashes on 32bit systems (https://bugzilla.opensuse.org/show_bug.cgi?id=1247995)
- Python 3.13.7: build fix for qemu pending
- nftables 1.1.4: issues detected by openQA in combination with netavark
- openSUSE-welcome: prepare infra to have different ‘welcome apps’ per desktop (e.g gnome-tour on gnome)
Deadline Extended – Call for Host openSUSE.Asia Summit 2026
The openSUSE.Asia Summit Organizing Committee has extended the deadline for the Call for Host to submit proposals for the 2026 Summit. Communities now have until August 27, 2025 to apply.
The extension comes in response to requests from local communities seeking more time to prepare their proposals. This is a great opportunity to showcase your region and bring the openSUSE community together in your city.
The selected proposals will be presented during openSUSE.Asia Summit 2025 in Faridabad from August 29–30. Proposals can be presented online if on-site participation is not possible.
For more information, visit: https://news.opensuse.org/2025/06/10/osas-cfh/
The syslog-ng Insider 2025-08: HDFS; configuration; Prometheus
The August syslog-ng newsletter is now online:
- Deprecating Java-based drivers from syslog-ng: Is HDFS next?
- Your first steps configuring syslog-ng
- Prometheus exporter in syslog-ng
It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2025-08-hdfs-configuration-prometheus
