Mastodon Setup with Docker and Caddy
In my previous post, we set up a Mastodon server using Docker and nginx-proxy. In this post, we use the web server Caddy instead. I only discovered Caddy a week ago. The configuration of Mastodon is now even simpler and shorter. Caddy automatically redirects traffic from HTTP to HTTPS and configures HTTPS for all domains via Let’s Encrypt. :rocket:
Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.
Setup
A few remarks to start with:
- my testing system: Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-97-generic x86_64)
- install some software first with apt install docker docker.io caddy jq git
- create an unprivileged user account, e.g. mastodon:
  adduser --disabled-login mastodon
  adduser mastodon docker
  adduser mastodon sudo # optional, remove later
  su mastodon # switch to that user
- my docker compose: Docker Compose version v2.2.3 (based on Go); install docker compose in 3 lines:
  mkdir -p ~/.docker/cli-plugins
  curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
  chmod +x ~/.docker/cli-plugins/docker-compose
- my testing domain (for this example): social.host
- my dot-env file .env for docker compose:
  LETS_ENCRYPT_EMAIL=admin-mail@social.host
  MASTODON_DOMAIN=social.host
  FRONTEND_SUBNET="172.22.0.0/16"
  # check the latest version here: https://hub.docker.com/r/tootsuite/mastodon/tags
  MASTODON_VERSION=v3.4.6
- I have commented out build: ., because I prefer to rely on the official images from Docker Hub.
- With little effort, we also enable full-text search with Elasticsearch.
- The setup places all databases and uploaded files in the folder mastodon.
- We use a named volume mastodon-public to expose the static files from the mastodon-web container to the Caddy web server. Caddy serves static files directly for improved speed. Awesome! :star2:
- The setup comes with the Mastodon Twitter Crossposter. You need to set up an extra subdomain for it. Remove it from the docker-compose.yml in case you have no use for it.
- Using extra_hosts, we expose the Docker host to the Mastodon Sidekiq container via "host.docker.internal:host-gateway", in case you configure Mastodon to use a mail transfer agent (e.g. postfix) running on the host. In that case, use SMTP_SERVER=host.docker.internal (see the sketch right after this list).
- Consider replacing the git repository used to build the crossposter with a local copy for more control over updates.
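A minimal sketch of the SMTP-related entries in mastodon.env.production for that host-MTA variant; mastodon:setup normally generates this file, and the values below are only illustrative:
# file: 'mastodon.env.production' (excerpt, example values)
SMTP_SERVER=host.docker.internal
SMTP_PORT=25
SMTP_AUTH_METHOD=none
SMTP_OPENSSL_VERIFY_MODE=none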
# file: 'docker-compose.yml'
version: "3.7"
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
container_name: caddy
ports:
- "80:80"
- "443:443"
volumes:
- ./caddy/etc-caddy:/etc/caddy
- ./caddy/data:/data # Optional
- ./caddy/config:/config # Optional
- ./caddy/logs:/logs
- mastodon-public:/srv/mastodon/public:ro
env_file: .env
# helps crossposter resolve the mastodon server internally
hostname: "${MASTODON_DOMAIN}"
    networks:
      frontend:
        aliases:
          - "${MASTODON_DOMAIN}"
      backend:
mastodon-db:
restart: always
image: postgres:14-alpine
container_name: "mastodon-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- "./mastodon/postgres:/var/lib/postgresql/data"
networks:
- backend
mastodon-redis:
restart: always
image: redis:6.0-alpine
container_name: "mastodon-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./mastodon/redis:/data
networks:
- backend
mastodon-elastic:
restart: always
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
container_name: "mastodon-elastic"
healthcheck:
test: curl --silent --fail localhost:9200/_cluster/health || exit 1
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
cluster.name: es-mastodon
discovery.type: single-node
bootstrap.memory_lock: "true"
volumes:
- ./mastodon/elasticsearch:/usr/share/elasticsearch/data
networks:
- backend
ulimits:
memlock:
soft: -1
hard: -1
mastodon-web:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-web"
healthcheck:
test: wget -q --spider --proxy=off localhost:3000/health || exit 1
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
expose:
- "3000"
depends_on:
- mastodon-db
- mastodon-redis
- mastodon-elastic
volumes:
# https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers
- mastodon-public:/opt/mastodon/public # map static files in volume for caddy
- ./mastodon/public/system:/opt/mastodon/public/system
networks:
- frontend
- backend
extra_hosts:
- "host.docker.internal:host-gateway"
mastodon-streaming:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-streaming"
healthcheck:
test: wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: node ./streaming
expose:
- "4000"
depends_on:
- mastodon-db
- mastodon-redis
networks:
- frontend
- backend
mastodon-sidekiq:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-sidekiq"
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: bundle exec sidekiq
depends_on:
- mastodon-db
- mastodon-redis
volumes:
- ./mastodon/public/system:/mastodon/public/system
networks:
- frontend
- backend
extra_hosts:
- "host.docker.internal:host-gateway"
crossposter-db:
restart: always
image: postgres:14-alpine
container_name: "crossposter-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- ./crossposter/postgres:/var/lib/postgresql/data
networks:
- backend
crossposter-redis:
restart: always
image: redis:6.0-alpine
container_name: "crossposter-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./crossposter/redis:/data
networks:
- backend
crossposter-web:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-web"
env_file: crossposter.env.production
environment:
CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
expose:
- "3000"
depends_on:
- crossposter-db
networks:
- frontend
- backend
crossposter-sidekiq:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-sidekiq"
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
command: bundle exec sidekiq -c 5 -q default
depends_on:
- crossposter-db
- crossposter-redis
networks:
- frontend
- backend
volumes:
mastodon-public:
networks:
frontend:
name: "${COMPOSE_PROJECT_NAME}_frontend"
ipam:
config:
- subnet: "${FRONTEND_SUBNET}"
backend:
name: "${COMPOSE_PROJECT_NAME}_backend"
internal: true
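Before moving on to the Caddyfile, it can be worth letting Compose render the resolved configuration once. This optional check is not part of the original recipe, but it surfaces YAML typos and missing .env variables early:
# print the fully resolved compose file with .env substitutions applied
docker compose config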
The web server Caddy is configured using a Caddyfile stored in ./caddy/etc-caddy. I started with a config I found on GitHub.
# file: 'Caddyfile'
# kate: indent-width 8; space-indent on;
{
# Global options block. Entirely optional, https is on by default
# Optional email key for lets encrypt
email {$LETS_ENCRYPT_EMAIL}
# Optional staging lets encrypt for testing. Comment out for production.
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
# admin off
}
{$MASTODON_DOMAIN} {
log {
# format single_field common_log
output file /logs/access.log
}
root * /srv/mastodon/public
encode gzip
@static file
handle @static {
file_server
}
handle /api/v1/streaming* {
reverse_proxy mastodon-streaming:4000
}
handle {
reverse_proxy mastodon-web:3000
}
header {
Strict-Transport-Security "max-age=31536000;"
}
        header /sw.js Cache-Control "public, max-age=0"
header /emoji* Cache-Control "public, max-age=31536000, immutable"
header /packs* Cache-Control "public, max-age=31536000, immutable"
header /system/accounts/avatars* Cache-Control "public, max-age=31536000, immutable"
header /system/media_attachments/files* Cache-Control "public, max-age=31536000, immutable"
handle_errors {
@5xx expression `{http.error.status_code} >= 500 && {http.error.status_code} < 600`
rewrite @5xx /500.html
file_server
}
}
crossposter.{$MASTODON_DOMAIN} {
log {
# format single_field common_log
output file /logs/access-crossposter.log
}
encode gzip
handle {
reverse_proxy crossposter-web:3000
}
}
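The Caddyfile can be checked before exposing it to traffic. A small sketch, assuming the official caddy image (which has no custom entrypoint) and the volumes and env_file defined in the compose file above:
# validate the Caddyfile in a throw-away container; the .env file is loaded
# via env_file, so the {$...} placeholders resolve
docker compose run --rm caddy caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
# after editing the Caddyfile on a running setup, reload without downtime
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile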
With these files in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.
# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch
# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 mastodon-web bundle exec rake mastodon:setup
# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production mastodon-web bundle exec rake db:setup
# crossposter
mkdir crossposter
docker compose run --rm crossposter-web bundle exec rake db:setup
# launch mastodon and crossposter
docker compose up -d
# look into the logs, -f for live logs
docker compose logs -f
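Once the containers report healthy, two typical follow-up steps look roughly like this; a sketch using the service names from the compose file above and the tootctl syntax of the 3.4.x images (replace <username> with your account):
# build the Elasticsearch index for full-text search (ES_ENABLED is set above)
docker compose exec mastodon-web bin/tootctl search deploy
# optionally promote an account to admin
docker compose exec mastodon-web bin/tootctl accounts modify <username> --role admin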
Troubleshooting
- I had a lot of trouble getting the Mastodon containers to connect to the mail transport agent (MTA) on my host. Eventually, I solved it with an extra firewall rule: ufw allow proto tcp from any to 172.17.0.1 port 25 (a quick check is sketched after this list).
- The mail issue can be avoided by a) using a SaaS such as Mailgun, Mailjet or Sendinblue, or b) running another Docker container with postfix that is in the frontend network as well. Look at Peertube’s docker-compose file for some inspiration.
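A quick sanity check on the host, assuming postfix as the MTA (the commands are illustrative):
# does the MTA listen on the Docker bridge address at all?
sudo ss -ltn 'sport = :25'
# postfix only accepts connections on the addresses listed in inet_interfaces
postconf inet_interfaces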
References
- https://caddyserver.com
- https://gist.github.com/yukimochi/bb7c90cbe628f216f821e835df1aeac1?permalink_comment_id=3607303#gistcomment-3607303 (Caddyfile for Mastodon on GitHub)
openSUSE 15.2 to 15.3 upgrade notes
dracut: fix hibernate after fresh install
dracut -a resume -f
add_dracutmodules+=" resume "
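For a persistent variant, the add_dracutmodules line can go into a dracut configuration drop-in; the file name below is only an example:
# make the resume module permanent via a drop-in, then rebuild the initrd
echo 'add_dracutmodules+=" resume "' | sudo tee /etc/dracut.conf.d/90-resume.conf
sudo dracut -f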
- openSUSE: https://bugzilla.opensuse.org/1192506
- upstream dracut: https://github.com/dracutdevs/dracut/issues/924
- Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=1795422
- RedHat actually fixed it for RHEL9 by just removing the crazy "only include it if it has been included before"-logic: https://github.com/redhat-plumbers/dracut-rhel9/pull/12
syslog-ng-future.blog? Is this a fork or what?
Open source licensing may seem a boring topic, but Balázs Scheidler finds it fascinating. It allows him to work on syslog-ng even though Balabit was acquired. He writes:
“I mentioned in the previous post that I would like to focus on syslog-ng and put it more into the spotlight. I also mentioned that Balabit, the company I was a founder of and the commercial sponsor behind syslog-ng, was acquired by One Identity ~4 years ago. How does this add up? Who owns the Intellectual Property (IP) for syslog-ng? Who am I or this blog affiliated with?”
Read the rest of his blog at https://syslog-ng-future.blog/syslog-ng-future-blog-is-this-a-fork-or-what/
syslog-ng logo
cvtsudoers: merging multiple sudoers files into one
We learned in my previous sudo blog that cvtsudoers is not just for LDAP. Version 1.9.9 of sudo extends the querying possibilities of cvtsudoers further and adds a brand new feature: merging multiple sudoers files into one. Both are especially useful when you have complex configurations. Querying lets you better understand what the various rules allow in your sudoers file. Merging helps you combine multiple configurations into one, so you do not have to maintain a separate sudoers file on each of your hosts.
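As a sketch only (the merge invocation below is my assumption based on the release notes; -f and -o are existing cvtsudoers options, but check cvtsudoers(1) for the authoritative syntax), merging per-host files could look like:
# merge several host-specific sudoers files into one sudoers-format output
cvtsudoers -f sudoers -o sudoers.merged sudoers.host1 sudoers.host2 sudoers.host3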
Read the rest of my blog on the sudo website at https://www.sudo.ws/posts/2022/02/cvtsudoers-merging-multiple-sudoers-files-into-one/
Sudo logo
Mastodon Setup with Docker and nginx-proxy
I have been working on a setup with Mastodon that is easy to repeat and share. A setup with very few steps. Please consider that this setup is not enough for a production environment. It requires additional security measures. Please put your recommendations in the comments! :grinning:
Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.
Consider also the compact setup with the Caddy webserver.
Setup
A few remarks to start with:
- my testing system: Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-97-generic x86_64)
- install some software first with apt install docker docker.io jq git
- create an unprivileged user account, e.g. mastodon:
  adduser --disabled-login mastodon
  adduser mastodon docker
  adduser mastodon sudo # optional, remove later
  su mastodon # switch to that user
- my docker compose: Docker Compose version v2.2.3 (based on Go); install docker compose in 3 lines:
  mkdir -p ~/.docker/cli-plugins
  curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
  chmod +x ~/.docker/cli-plugins/docker-compose
- my testing domain (for this example): social.host
- my dot-env file .env for docker compose:
  LETS_ENCRYPT_EMAIL=admin-mail@social.host
  MASTODON_DOMAIN=social.host
- I have commented out build: ., because I prefer to rely on the official images from Docker Hub.
- With little effort, I also enable full-text search with Elasticsearch.
- The support of VIRTUAL_PATH is brand-new in nginx-proxy. It is not yet in the main branch, so we rely on nginxproxy/nginx-proxy:dev-alpine.
- The Mastodon code also ships an nginx configuration. However, nginx-proxy generates much of it as well, so I currently believe no further configuration is required here. Still, nginx-proxy allows adding custom elements to the generated configuration (see the sketch right after this list).
- The setup places all databases and uploaded files in the folder mastodon.
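nginx-proxy picks up per-vhost snippets from the mounted vhost.d directory. As a hedged example (the 100m limit is arbitrary), raising the upload size for the Mastodon vhost could look like this:
# nginx-proxy includes ./nginx/vhost/<VIRTUAL_HOST> (mounted at /etc/nginx/vhost.d)
# inside the generated server block for that host
echo 'client_max_body_size 100m;' > ./nginx/vhost/social.host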
# file: 'docker-compose.yml'
version: "3.7"
services:
nginx-proxy:
image: nginxproxy/nginx-proxy:dev-alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/conf:/etc/nginx/conf.d
- ./nginx/vhost:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- ./nginx/certs:/etc/nginx/certs:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx/logs:/var/log/nginx
networks:
- external_network
- internal_network
acme-companion:
image: nginxproxy/acme-companion
container_name: nginx-proxy-acme
volumes_from:
- nginx-proxy
volumes:
- ./nginx/certs:/etc/nginx/certs:rw
- ./nginx/acme:/etc/acme.sh
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
DEFAULT_EMAIL: "${LETS_ENCRYPT_EMAIL}"
networks:
- external_network
db:
restart: always
image: postgres:14-alpine
shm_size: 256mb
networks:
- internal_network
healthcheck:
test: ["CMD", "pg_isready", "-U", "postgres"]
volumes:
      - ./mastodon/postgres14:/var/lib/postgresql/data
environment:
POSTGRES_HOST_AUTH_METHOD: trust
redis:
restart: always
image: redis:6-alpine
networks:
- internal_network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
volumes:
- ./mastodon/redis:/data
# elasticsearch
es:
restart: always
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
environment:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "cluster.name=es-mastodon"
- "discovery.type=single-node"
- "bootstrap.memory_lock=true"
networks:
- internal_network
healthcheck:
test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
volumes:
- ./mastodon/elasticsearch:/usr/share/elasticsearch/data
ulimits:
memlock:
soft: -1
hard: -1
web:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
networks:
- external_network
- internal_network
healthcheck:
test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
ports:
- "127.0.0.1:3000:3000"
depends_on:
- db
- redis
- es
volumes:
- ./mastodon/public/system:/mastodon/public/system
environment:
VIRTUAL_HOST: "${MASTODON_DOMAIN}"
VIRTUAL_PATH: "/"
VIRTUAL_PORT: 3000
LETSENCRYPT_HOST: "${MASTODON_DOMAIN}"
      ES_HOST: es
      ES_ENABLED: "true"
streaming:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: node ./streaming
networks:
- external_network
- internal_network
healthcheck:
test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1"]
ports:
- "127.0.0.1:4000:4000"
depends_on:
- db
- redis
environment:
VIRTUAL_HOST: "${MASTODON_DOMAIN}"
VIRTUAL_PATH: "/api/v1/streaming"
VIRTUAL_PORT: 4000
sidekiq:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: bundle exec sidekiq
depends_on:
- db
- redis
networks:
# - external_network
- internal_network
volumes:
- ./mastodon/public/system:/mastodon/public/system
volumes:
html:
networks:
external_network:
internal_network:
internal: true
With this file in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.
# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch
# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 web bundle exec rake mastodon:setup
# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup
# launch mastodon
docker compose up -d
# look into the logs, -f for live logs
docker compose logs -f
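If something does not behave as expected, it can help to inspect the nginx configuration that nginx-proxy actually generated; this is just a debugging aid, not part of the original steps:
# dump the complete configuration the running nginx instance is using
docker compose exec nginx-proxy nginx -T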
Mastodon Twitter Crossposter
To set up the Mastodon Twitter Poster for crossposting, add the following services to the docker-compose.yml:
- create a crossposter.env.production with content adapted from https://github.com/renatolond/mastodon-twitter-poster/blob/main/.env.example (one way to bootstrap the file is sketched after this list)
- create a directory crossposter
- if you prefer to have logs handled by Docker, also add RAILS_LOG_TO_STDOUT=enabled to crossposter.env.production (GitHub issue)
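One way to bootstrap the environment file, assuming you are fine with pulling the upstream example directly (the raw URL is derived from the repository linked above):
mkdir -p crossposter
# start from the upstream example and adjust the values afterwards
curl -sSL https://raw.githubusercontent.com/renatolond/mastodon-twitter-poster/main/.env.example -o crossposter.env.production
# optional: let Docker handle the logs
echo 'RAILS_LOG_TO_STDOUT=enabled' >> crossposter.env.production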
crossposter-db:
restart: always
image: postgres:14-alpine
container_name: "crossposter-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- ./crossposter/postgres:/var/lib/postgresql/data
networks:
- internal_network
crossposter-redis:
restart: always
image: redis:6.0-alpine
container_name: "crossposter-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./crossposter/redis:/data
networks:
- internal_network
crossposter-web:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-web"
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
DB_HOST: crossposter-db
REDIS_URL: "redis://crossposter-redis"
networks:
- internal_network
- external_network
expose:
- "3000"
depends_on:
- crossposter-db
crossposter-sidekiq:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-sidekiq"
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
REDIS_URL: "redis://crossposter-redis"
DB_HOST: crossposter-db
command: bundle exec sidekiq -c 5 -q default
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
networks:
# - external_network
- internal_network
depends_on:
- crossposter-db
- crossposter-redis
The crossposter requires a database setup before the containers can be launched:
docker compose run --rm crossposter-web bundle exec rake db:setup
References
GeckoLinux Pantheon | Review from an openSUSE User
openSUSE Tumbleweed – Review of the week 2022/05
Dear Tumbleweed users and hackers,
Week 5 – 5 snapshots. I hope nobody expects that we keep up the ‘week number == number of snapshots’, or I’ll be in deep trouble very soon. Looking at the staging dashboard, it seems like the vacation period is definitively over: almost all stagings are full, but as not too many submissions have proven to be broken, the stagings are still manageable. It gets really difficult if we end up with a lot of breakage in the stagings and then need to chase fixes.
The five delivered snapshots (0128, 0130, 0131, 0201, and 0202) brought you these changes:
- Polkit with fix for pwnkit (CVE-2021-4034)
- Switched default Ruby version to Ruby 3.1
- Dropped Ruby 2.7 and Ruby 3.0 (including all rubyx.y-rubygem-* packages)
- Mozilla Firefox 96.0.3
- git 2.35.1
- Linux kernel 5.16.4
- 389-ds 2.0.14
- Wireplumber 0.4.7
- pipewire 0.3.44
As already mentioned, the staging projects are mostly filled. The largest changes being integrated are:
- Linux kernel 5.16.5: fix for cifs crash, boo#1195360; full drm switch will follow later
- KDE Gear 21.12.2
- systemd: drop SUSE-specific sysv support. Generic, upstream-based sysv support remains in place. See the original announcement on the Factory mailing list
- KDE Plasma 5.24 (currently beta is staged, release scheduled for Feb 8th)
- Lua 5.4.4
- rpm will no longer pull glibc-locale, but only glibc-locale-base. See this discussion
- glibc 2.35
- Python 3.6 interpreter will be removed (We have roughly 100 python36-FOO packages left)
- Python 3.10 as the distro default interpreter (a bit down the line)
- GCC 12 introduction has started, so that we are as ready as possible when the upstream release happens.
Version Control Tool, IRC Client Update in Tumbleweed
This week openSUSE Tumbleweed had a steady pace of snapshots with four releases users could #zypper dup their system into, which brought updates for an Internet Relay Chat client and a new default version of Ruby.
The version control package git was updated in snapshot 20220201. The 2.35.1 version of git now shows the number of stash entries with --show-stash like the normal output does. The color palette used by git grep has been updated to match that of GNU grep. The Mozilla Firefox 96.0.3 update fixed an issue that allowed unexpected data to be submitted in some of the search telemetry. Google’s data interchange format protobuf 3.19.4 fixed data loss bugs occurring when the number of optional fields in a message is an exact multiple of 32; this affected both Ruby and php in the package. Other packages to update in the snapshot were yast 4.4.43, python-fsspec 2022.1.0, suse-module-tools 16.0.19, and yast2-network 4.4.35, which transitioned to inclusive naming for asymmetric communication.
The IRC client hexchat was the single package updated in snapshot 20220131. The 2.16.0 version of hexchat updated the network list and included Libera.Chat as the default. The chat client also fixed miscellaneous parsing issues and added support for strikethrough formatting.
The 20220130 snapshot updated pipewire 0.3.44; this audio and video package changed some properties that make it possible to configure buffer sizes larger than 8192, which is what JACK applications have as a hardcoded limit. The package also made it possible to run a minimal PipeWire server without a session manager, enough to run JACK clients. The default Ruby was switched to version 3.1. The version brings improved debugging performance, supports remote debugging and merges YJIT, which is a new in-process JIT compiler developed by Shopify. Salt 3004 arrived in the snapshot and offers new features. New modules for transactional systems, like MicroOS, present challenges, yet salt 3004 supports atomicity-type systems via two new modules, transactional_update and rebootmgr, and a new executor transactional_update; the modules will help to treat the transactional system transparently. The glib2 2.70.3 update fixed a potential data loss due to missing fsync when saving files on btrfs. Other packages to update in the snapshot were snapper 0.9.1, libstorage-ng 4.4.78, bolt 0.9.2, freeipmi 1.6.9 and more.
Snapshot 20220128 also had an updated of libstorage-ng; the 4.4.77 version provided translations for the Brazilian Portuguese language. There was also another protobuf update that arrived at the beginning of the week; the 3.19.3 version improved parsing performance and aligned dependency handling best practices for building the package. Network and hardware utility package ethtool 5.16 added a couple new features like the use of memory maps for module EEPROM parsing and fixed a dumping of a FEC mode shown with --show-fec. The sendmail 8.17.1 package addressed several potential memory leaks and other similar problems that relate to error handling. PipeWire’s policy manager package wireplumber 0.4.7 fixed a regression that caused the selection of the default audio sources to be delayed and fixed a regression affecting the echo-cancellation pipewire module. Several YaST and RubyGem packages were also updated in the snapshot.