
Syslog-ng 101, part 13: Updating syslog-ng, syslog-ng 4

Version 4 of syslog-ng is now available. The good news is that it is fully backwards compatible. If the version string in your configuration is set to a 3.X version, it will work as expected even after updating to version 4. Of course you might run into corner cases, but I had no problems even with complex configurations. Today, we learn about updating syslog-ng, and some of the new features of syslog-ng 4.

You can watch the video on YouTube:

and the complete playlist at https://www.youtube.com/playlist?list=PLoBNbOHNb0i5Pags2JY6-6wH2noLaSiTb

Or you can read the rest of the tutorial as a blog at: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-101-part-13-updating-syslog-ng-syslog-ng-4


Hands-on, Ad-free browsing at your home with Leap Micro 5.4 Beta

The Beta version of our immutable HostOS Leap Micro 5.4 is now available. The update brings SELinux in enforcing mode by default, as well as tuned. Leap Micro is not a traditional distribution, but rather a lightweight HostOS for running virtual machines and containerized workloads.

Leap Micro is an openSUSE equivalent of SUSE’s SLE Micro.

In this article, I would like to show you how it can be practically used to enhance your daily ad-free experience at home. I was able to replicate the entire setup in the VM, including downloading the image, in under 15 minutes.

My personal use case for Leap Micro is to have as much ad-free browsing as possible, DNS entries for local services, and a Nextcloud instance as a bridge to share pictures and videos between my wife’s iPhone, the kids’ tablet and my Android phone.

My private home setup is a Raspberry Pi 4 8GB with a 1 TB SSD connected via a USB 3.0 to SATA III adapter. I have a mesh network via TP-Link Deco X20. I use port mapping from the Deco to expose services to the public via a static public IP.

The RPI has a reserved address based on its MAC address to keep stuff simple. If you have a dynamic public address, you can consider some dynamic DNS (DDNS) solutions.

I am personally happily using the described setup on my 8GB Raspberry Pi 4 with Leap Micro 5.3 along with Pi-hole for ad-free browsing and mapping of my NextCloud instance to a local address.

If you just want to test it out, virtual machines will work as well; just make sure that the VM’s virtual network interface is in bridge mode or uses forwarding of incoming connections. This can be easily set up with NetworkManager in just two clicks. Otherwise, you won’t be able to access the web management interfaces of the VM’s services, and the article becomes pointless.

The benefit I see in using Leap Micro is that the machine does not require any of my attention. I have automatic updates and self-healing on. The machine automatically reboots into an updated snapshot in the defined maintenance window (set by default) and if there is an issue that requires my attention, then I simply resolve the issue with the Cockpit interface in the web browser.
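If you want the reboot window to suit your household, the schedule can be moved with a systemd drop-in. This is only a sketch assuming the default transactional-update.timer unit; the drop-in path and time are illustrative:

```
# /etc/systemd/system/transactional-update.timer.d/override.conf (illustrative path)
[Timer]
OnCalendar=
OnCalendar=*-*-* 03:30:00
```

The empty OnCalendar= line clears the packaged schedule before the new one is set, which is the usual drop-in pattern for replacing (rather than adding to) a timer.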

Cockpit Update

Leap Micro is an immutable operating system with a read-only root. openSUSE solves this via btrfs snapshots and tools that enable automatic rollback, booting into a previous snapshot in case the system identifies that a boot into a new snapshot has failed.

Getting the Leap Micro 5.4 Beta

What is a self-install image?

The self-install image is essentially a bootable image that writes the pre-configured image of Leap Micro to the disk and enlarges the system partitions to the disk size. This way, installation takes less than two minutes in my VM (the VM storage is a file on a PCIe Gen 4 M.2 SSD).

Download the self-install image from https://get.opensuse.org/leapmicro/5.4/ and make sure to choose the correct architecture: x86-64, or AArch64 for Arm devices. We’ll use the self-install x86-64 image, as this article uses a VM for the demonstration.


If you are using a physical device, please use zypper to install our image writer. Users on other distributions can install e.g. Fedora Media Writer from Flathub. Use the tool of your choice to write the downloaded image to a USB flash drive.

$ sudo zypper in imagewriter

or

$ flatpak --user install org.fedoraproject.MediaWriter

Follow these instructions if you are reading this article on Windows: https://en.opensuse.org/SDB:Create_a_Live_USB_stick_using_Windows

Note for users of a Raspberry Pi without USB boot enabled: please download the pre-configured image directly and write it to the SD card with the following command. The rest of the steps are the same.

xzcat [image].raw.xz | dd bs=4M of=/dev/sdX iflag=fullblock oflag=direct status=progress; sync

Need to avoid user interaction during the installation?

Users who want to avoid user interaction during installation (e.g. due to not having peripherals connected to the machine) can use Ignition or Combustion https://en.opensuse.org/Portal:MicroOS/Combustion to do the initial setup for them. For that use case, please use the pre-configured image that you write to the disk drive yourself, since the self-install media requires user confirmation before overwriting the disk. Check this video tutorial for more information https://www.youtube.com/watch?v=ft8UVx9elKc
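As a rough idea of what such an unattended setup can look like (based on the Combustion portal linked above; the password hash placeholder and the enabled service are illustrative), a minimal Combustion script placed as combustion/script on a drive labeled "combustion" might be:

```shell
#!/bin/bash
# combustion: network
# Illustrative Combustion script: set a root password and enable Cockpit.
# Replace the placeholder with a real hash, e.g. from: openssl passwd -6
set -euo pipefail
echo 'root:$6$examplesalt$examplehash' | chpasswd -e
systemctl enable cockpit.socket
```

The `# combustion: network` comment is the flag Combustion looks for to bring networking up before running the script.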

Writing the self-install image to the USB drive

This can be skipped in case you’re using a virtual machine. Make sure to have a tool for writing an image to the drive, such as our image writer, or e.g. Fedora Media Writer from Flathub if you are on an immutable system and want to avoid a reboot.

Booting the image

For demonstration purposes, I will be using Leap Micro 5.4 Beta x86-64 self-install image running in GNOME Boxes.

Beta Installer Boot

The self-install image is pretty straightforward. As mentioned before, it’s essentially a bootable image that writes a pre-configured image of Leap Micro to the disk and enlarges system partitions to the disk size.

Beta Installer Disks

About a minute later, we are already booting into the deployed Leap Micro 5.4 Beta.

Beta Boot

The boot itself takes a few seconds, and we enter a simple first boot wizard known from our minimal images (formerly called JeOS).

Beta Firstboot

The first boot wizard lets you choose a time zone and language and set a root password, and your Leap Micro 5.4 is ready to be used (this can be automated with Ignition/Combustion). We are ready to serve after two minutes, including the initial configuration.

Beta Root Console

Getting into Cockpit

The message of the day (MOTD) suggests enabling the Cockpit web interface. It will be accessible at ip.address.of.this.server:9090. Log in to Cockpit as root.

Note: For home purposes, I highly recommend not exposing this port to the public and keeping it for management only from your local network; at least that is how my setup looks. You can completely skip SSH, since Cockpit lets you access a terminal via the web browser.

$ systemctl enable --now cockpit.socket

Beta Cockpit

Podman vs Docker

In this tutorial, we will run Pi-hole as a containerized workload. Leap Micro uses Podman by default. Cockpit has a nice Podman plugin, so you can pull and run containers directly from Cockpit.

Unless the suggested deployment is very Docker-centric, you should be able to simply substitute docker with podman (or install podman-docker) and be good to go.

Pi-hole advertises Docker in its example; we can use this as an opportunity to show you how to install additional software on a transactional-update system.

You can use transactional-update pkg install docker, or preferably use the transactional-update shell, which gets you a shell in a chroot of the newly created snapshot. There you can continue working just as if it were a traditional system.

# sudo transactional-update shell
# zypper install docker
# systemctl enable docker
# exit
# reboot

Do not forget to exit the transactional-update shell (type exit) and reboot afterward. All of the changes were made in a btrfs snapshot of the current environment, so we have to reboot into it to see them. Fortunately, the reboot of a vanilla Leap Micro takes less than 10 seconds.

Note: A recommended way to install additional tools without a reboot is to use Distrobox.

Install Docker

Deploying Pi-hole

We will follow instructions from https://github.com/pi-hole/docker-pi-hole#readme. This part took me literally two minutes.

This is essentially a copy-paste from the readme that runs the Pi-hole container in the background. Please change the password and set your time zone. Pay special attention to the host-to-container port mapping (-p HOST_PORT:CONTAINER_PORT), especially if you are running multiple workloads.

The -p 8888:80 says that we are mapping port 8888 of the host to port 80 (the web management interface) in the container. Port 53 (DNS) is mapped to the same port in the container.
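Once the container is up, one way to sanity-check both mappings from another machine on the network is a quick DNS query and an HTTP request against the mapped ports (the IP here is from my setup; dig ships in the bind-utils package):

```shell
# Query DNS served by Pi-hole on port 53
dig +short @192.168.68.69 opensuse.org
# Fetch the admin page headers via the mapped port 8888
curl -sI http://192.168.68.69:8888/admin/ | head -n 1
```

If the first command returns an address and the second returns an HTTP status line, both port mappings work.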

You can store this in a wrapper e.g. /root/pihole_deploy.sh

Docker volume

In this example, we’re passing the local /root/etc-pihole and /root/etc-dnsmasq.d directories to the container as Docker volumes, where they’ll be present as /etc/pihole and /etc/dnsmasq.d respectively.

# mkdir -p /root/etc-pihole /root/etc-dnsmasq.d
# docker run -d \
    --name pihole \
    -p 53:53/tcp -p 53:53/udp \
    -p 8888:80 \
    -e TZ="Europe/Prague" \
    -e WEBPASSWORD="CHANGEME" \
    -v "/root/etc-pihole:/etc/pihole" \
    -v "/root/etc-dnsmasq.d:/etc/dnsmasq.d" \
    --dns=127.0.0.1 --dns=1.1.1.1 \
    --restart=unless-stopped \
    --hostname pi.hole \
    -e VIRTUAL_HOST="pi.hole" \
    -e PROXY_LOCATION="pi.hole" \
    -e FTLCONF_LOCAL_IPV4="127.0.0.1" \
    pihole/pihole:latest

Please wait until the state is healthy. You can proactively check the state with the following command.

# docker inspect -f "{{.State.Health.Status}}" pihole
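If you would rather wait than poll by hand, a small loop can block until the container's healthcheck passes. This is just a sketch; it assumes the healthcheck defined in the Pi-hole image:

```shell
# Wait until the Pi-hole container reports a healthy state
until [ "$(docker inspect -f '{{.State.Health.Status}}' pihole)" = "healthy" ]; do
    echo "waiting for pihole to become healthy..."
    sleep 2
done
```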

Cleanup in case you messed up

# docker rm -f pihole
# ^ re-do the above

Accessing Pi-hole web management

At this point, the containerized Pi-hole is already daemonized, and you can access the interface through ip.address.of.this.server:8888/admin

There is a default list; however, I did not find it sufficient for my ad-free YouTube experience. You can use a built-in tool to look up further adlists.

Adlist Lookup

Accessing services with an external domain via a local IP

This is especially useful in our Nextcloud example later. Here I create local DNS records with a local IP for my public domain, so I can access my Nextcloud instance via its external domain name while interacting with local IPs.
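Equivalent records can also be kept as plain dnsmasq configuration in the mounted /root/etc-dnsmasq.d directory, so they survive container re-creation. The file name and domain below are illustrative:

```
# /root/etc-dnsmasq.d/02-local-records.conf
# (appears as /etc/dnsmasq.d/02-local-records.conf inside the container)
# Resolve the public Nextcloud domain to the local server IP
address=/cloud.example.com/192.168.68.69
```

After adding or changing a file here, restart the container (docker restart pihole) so dnsmasq picks it up.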

DNS Entry

Using your new home DNS server with adlists

The last step is to configure your router’s DHCP server to use your new Pi-hole instance as the DNS server. Please double-check that your end devices are using it as a DNS server; otherwise, it will have no effect. In the example case, I manually set the DNS entry in the DHCP settings of my TP-Link Deco to ip.address.of.this.server (in my case 192.168.68.69).

Tip: I find the function to temporarily disable blocking useful when I am trying to debug issues with accessing certain sites.

And we’re done. Have a lot of fun!


openSUSE Tumbleweed – Review of the week 2023/12

Dear Tumbleweed users and hackers,

This week we released only 5 snapshots, but one was hefty in size and we needed the extra time for the mirrors to settle again and get the bandwidth back under control. The large snapshot was due to the change in the default compiler: Tumbleweed has been rebuilt entirely using GCC 13. The released snapshots were numbered 0316, 0317, 0318, 0319, and 0321.

The most relevant changes in these snapshots were:

  • Linux kernel 6.2.6
  • transmission 4.0.2
  • KDE Plasma 5.27.3
  • systemd 253.1
  • GCC 13 is now used as the default compiler

As we needed to slow down check-ins, naturally the staging projects are well filled up now. You can expect these changes to reach you soon:

  • GNOME 44
  • Linux kernel 6.2.8
  • cmake 3.26.0
  • cURL 8.0.1
  • LibreOffice 7.5.2
  • Samba 4.18.0
  • LLVM 16
  • openSSL 3.1.0

One extra reminder for i586 users now: next week marks the end of i586 packages in the regular openSUSE Tumbleweed repositories. Should you still run Tumbleweed on such an old machine, you can keep on doing so using the separate port (repositories at http://download.opensuse.org/ports/i586/tumbleweed/repo/).


Teaching an odd dog new tricks

We – that is to say the storage team at SUSE – have a tool we’ve been using for the past few years to help with development and testing of Ceph on SUSE Linux. It’s called sesdev because it was created largely for SES (SUSE Enterprise Storage) development. It’s essentially a wrapper around vagrant and libvirt that will spin up clusters of VMs running openSUSE or SLES, then deploy Ceph on them. You would never use such clusters in production, but it’s really nice to be able to easily spin up a cluster for testing purposes that behaves something like a real cluster would, then throw it away when you’re done.

I’ve recently been trying to spend more time playing with Kubernetes, which means I wanted to be able to spin up clusters of VMs running openSUSE or SLES, then deploy Kubernetes on them, then throw the clusters away when I was done, or when I broke something horribly and wanted to start over. Yes, I know there’s a bunch of other tools for doing toy Kubernetes deployments (minikube comes to mind), but given I already had sesdev and was pretty familiar with it, I thought it’d be worthwhile seeing if I could teach it to deploy k3s, a particularly lightweight version of Kubernetes. Turns out that wasn’t too difficult, so now I can do this:

> sesdev create k3s
=== Creating deployment "k3s" with the following configuration === 
Deployment-wide parameters (applicable to all VMs in deployment):
deployment ID:    k3s
number of VMs:    5
version:          k3s
OS:               tumbleweed
public network:   10.20.190.0/24 
Proceed with deployment (y=yes, n=no, d=show details) ? [y]: y
=== Running shell command ===
vagrant up --no-destroy-on-error --provision
Bringing machine 'master' up with 'libvirt' provider...
Bringing machine 'node1' up with 'libvirt' provider...
Bringing machine 'node2' up with 'libvirt' provider...
Bringing machine 'node3' up with 'libvirt' provider...
Bringing machine 'node4' up with 'libvirt' provider...

[...
  wait a few minutes
  (there's lots more log information output here in real life)
...]

=== Deployment Finished ===
 You can login into the cluster with:
 $ sesdev ssh k3s

…and then I can do this:

> sesdev ssh k3s
Last login: Fri Mar 24 11:50:15 CET 2023 from 10.20.190.204 on ssh
Have a lot of fun…

master:~ # kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   5m16s   v1.25.7+k3s1
node2    Ready    <none>                 2m17s   v1.25.7+k3s1
node1    Ready    <none>                 2m15s   v1.25.7+k3s1
node3    Ready    <none>                 2m16s   v1.25.7+k3s1
node4    Ready    <none>                 2m16s   v1.25.7+k3s1

master:~ # kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-79f67d76f8-rpj4d   1/1     Running     0          5m9s
kube-system   metrics-server-5f9f776df5-rsqhb           1/1     Running     0          5m9s
kube-system   coredns-597584b69b-xh4p7                  1/1     Running     0          5m9s
kube-system   helm-install-traefik-crd-zz2ld            0/1     Completed   0          5m10s
kube-system   helm-install-traefik-ckdsr                0/1     Completed   1          5m10s
kube-system   svclb-traefik-952808e4-5txd7              2/2     Running     0          3m55s
kube-system   traefik-66c46d954f-pgnv8                  1/1     Running     0          3m55s
kube-system   svclb-traefik-952808e4-dkkp6              2/2     Running     0          2m25s
kube-system   svclb-traefik-952808e4-7wk6l              2/2     Running     0          2m13s
kube-system   svclb-traefik-952808e4-chmbx              2/2     Running     0          2m14s
kube-system   svclb-traefik-952808e4-k7hrw              2/2     Running     0          2m14s

…and then I can make a mess with kubectl apply, helm, etc.

One thing that sesdev knows how to do is deploy VMs with extra virtual disks. This functionality is there for Ceph deployments, but there’s no reason we can’t turn it on when deploying k3s:

> sesdev create k3s --num-disks=2
> sesdev ssh k3s
master:~ # for node in \
    $(kubectl get nodes -o 'jsonpath={.items[*].metadata.name}') ;
    do echo $node ; ssh $node cat /proc/partitions ; done
master
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
node3
major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node2
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node4
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc
node1
 major minor  #blocks  name
 253        0   44040192 vda
 253        1       2048 vda1
 253        2      20480 vda2
 253        3   44016623 vda3
 253       16    8388608 vdb
 253       32    8388608 vdc

As you can see, this gives all the worker nodes an extra two 8 GB virtual disks. I suspect this may make sesdev an interesting tool for testing other Kubernetes-based storage systems such as Longhorn, but I haven’t tried that yet.


GCC, EFI Boot Manager Update in Tumbleweed

Rolling-release distribution openSUSE Tumbleweed had a large number of security patches, bug fixes, and new features in snapshots released this week.

Users who did a zypper dup had a full distribution rebuild with GNU Compiler Collection 13, which is the distro’s new default compiler.

This rebuilt 20230319 snapshot provided a GCC 13.0.1+git update that rebased a patch and enables a mutual exclusion (mutex) link. An update of flatpak 1.14.4 updated translations and eliminated two Common Vulnerabilities and Exposures, CVE-2023-28101 and CVE-2023-28100; the latter was specific to virtual consoles, and users were recommended to use a graphical user interface like GNOME Software rather than a graphical terminal emulator such as xterm, gnome-terminal or Konsole. The C++ library for Single Instruction, Multiple Data, highway 1.0.4, provides faster KV128 sorting. The package also updated RISC-V Vector Extension Intrinsics for the 1.0-draft. Other packages to update were libstorage-ng 4.5.86 along with several libqt5 packages.

The 20230318 snapshot updated just two packages. The fcitx5-gtk package updated to version 5.0.22. This gtk-im-module and glib-based dbus client library implements the notify-focus-out signal and changes GtkIMContext.reset to always commit the preedit state. The other package to update was libdiscid, a library for creating MusicBrainz DiscIDs; MusicBrainz is a fantastic open music encyclopedia that collects music metadata and makes it available to the public. The libdiscid 0.6.4 package fixes compiler errors and requires CMake 2.8.12 as a minimum version.

Snapshot 20230317 updates the DNS server bind to 9.18.13. The update provides several new features, like increasing the responsiveness of named Response Policy Zone (RPZ) updates that are applied after an RPZ zone is successfully transferred. KDE enthusiasts can be happy with the bug fixes released in the Plasma 5.27.3 update. A few of the highlighted fixes were the addition of an emoji picker to mappings, the removal of duplicate items when loading from history, and changes in PowerDevil in order not to waste precious energy. An update of gtk4 4.10.1 brought a plethora of changes. Besides dropping a patch that was fixed upstream, the new version fixed a memory leak and some scrolling problems, and improved search performance for the cross-platform widget toolkit. An update of systemd to version 253.1 added a few patches, one of which is a temporary workaround until an LVM boot failure is fixed in dracut. Several other packages were updated in the snapshot, including pipewire 0.3.67, icewm 3.3.2, and many qt6 packages.

The Extensible Firmware Interface (EFI) Boot Manager had a major version update in snapshot 20230316, but it wasn’t the only one; there were two other major version updates in the snapshot. The efibootmgr 18 update restored an activation error message and fixed help messages. The package also added an option for the insertion location of new entries and fixed the simple run example. Another major version update was for EFI variables in the efivar 38 update. This package fixed parsing for nvme-subsystem devices, added some new tooling and properly checks mmap return errors. And there was one more major version update, with the free BitTorrent client transmission updating to version 4.0.2. The new version takes care of some potential crashes and fixes the display of IPv6 tracker URLs. The Web client was rewritten and now supports mobile use. The Linux kernel was the only update in the snapshot that wasn’t a major version update. The 6.2.6 kernel-source update partially reverted some Wi-Fi configurations and removed some Realtek wireless drivers.


Syslog-ng 101, part 12: Elasticsearch (and Opensearch, Zinc, Humio, etc.)

One of the most popular destinations in syslog-ng is Elasticsearch (and OpenSearch, Zinc, Humio, etc.). The 12th part of my syslog-ng #tutorial shows you how to send log messages to Elasticsearch.

You can watch the video on YouTube:

and the complete playlist at https://www.youtube.com/playlist?list=PLoBNbOHNb0i5Pags2JY6-6wH2noLaSiTb

Or you can read the rest of the tutorial as a blog at: https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-101-part-12-elasticsearch-and-opensearch-zinc-humio-etc


Continuing on API Endpoint Documentation for Package and File Sources

You might have already been using our new OpenAPI documentation. We now want to let you know that we have added some more documentation about package endpoints, and we added a new section about file endpoints. You can check them out below: Sources – Packages and Sources – Files. After kicking off the API documentation remake in January 2021, we continued with the Build and Workers endpoints in April 2021, and we followed with the Sources Projects and Search endpoints...