
FreeAptitude

openSUSE 15.2 to 15.3 upgrade notes

In a previous article I showed how to upgrade a distro using zypper, but after the first reboot some issues can still crop up. That's why I collected all the changes and tweaks I applied when switching from openSUSE 15.2 to 15.3.

dracut: fix hibernate after fresh install

Just for fun, I finally installed a fresh Tumbleweed on one of my machines to actually find out if I'm missing out on new features that are masked by just always updating the old installation.

One feature that I had missed was that hibernation was no longer working. Or, to be more exact, hibernation was working, but resume was not. Investigating the issue, I found that dracut's "resume" module was not included in the initramfs, which in turn led to the initramfs not even trying to resume.

The dracut mechanism has some logic to find out whether suspend and resume are configured, and when it decides they are not, it simply skips adding the "useless" resume module.

Unfortunately, the check is faulty IMHO. It checks whether the resume device is configured in the kernel and, if it is, adds the module to the initramfs. The problem is that this kernel config is written by the resume code in the initrd... so during installation it is not yet set, and as a result the module is not added to the initrd, which leads to the config not being set on the next reboot...
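The chicken-and-egg situation can be sketched in a few lines of shell (a simplified illustration, not dracut's actual code; the RESUME_DEV variable stands in for the kernel's resume device setting):

```shell
# Simplified sketch of the circular check -- not dracut's actual code.
# RESUME_DEV stands in for the kernel's resume device; on a real system
# it would be $(cat /sys/power/resume), where "0:0" means "not configured".
resume_dev="${RESUME_DEV:-0:0}"

if [ "$resume_dev" != "0:0" ]; then
  decision="add resume module"    # only reached once resume has run before
else
  decision="skip resume module"   # a fresh install always lands here
fi
echo "$decision"
```

Since a fresh install has no resume device configured yet, the check always takes the skip branch, and the very module that would configure the device is never added.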

The trivial fix is to call dracut once with the option to include the resume module:

dracut -a resume -f

After a reboot, the kernel config will be set, and from then on the resume module will always be included in the initrd.
For an even safer setup, adding

add_dracutmodules+=" resume "

to /etc/dracut.conf.d/99-local.conf might be even better.

Of course I wanted to report a bug about the issue, but then found out that there are plenty already.

syslog-ng-future.blog? Is this a fork or what?

Open source licensing may seem a boring topic, but Balázs Scheidler finds it fascinating: it allows him to keep working on syslog-ng even though Balabit was acquired. He writes:

“I mentioned in the previous post that I would like to focus on syslog-ng and put it more into the spotlight. I also mentioned that Balabit, the company I was a founder of and the commercial sponsor behind syslog-ng, was acquired by One Identity ~4 years ago. How does this add up? Who owns the Intellectual Property (IP) for syslog-ng? Who am I or this blog affiliated with?”

Read the rest of his blog at https://syslog-ng-future.blog/syslog-ng-future-blog-is-this-a-fork-or-what/



cvtsudoers: merging multiple sudoers files into one

We learned in my previous sudo blog that cvtsudoers is not just for LDAP. Version 1.9.9 of sudo extends the querying possibilities of cvtsudoers further and adds a brand new feature: merging multiple sudoers files into one. Both are especially useful when you have complex configurations. Querying lets you better understand what the various rules in your sudoers file allow. Merging helps you combine multiple configurations into one, so you do not have to maintain a separate sudoers file on each of your hosts.

Read the rest of my blog on the sudo website at https://www.sudo.ws/posts/2022/02/cvtsudoers-merging-multiple-sudoers-files-into-one/


Robert Riemann

Mastodon Setup with Docker and nginx-proxy

I have been working on a setup with Mastodon that is easy to repeat and share. A setup with very few steps. Note that this setup is not enough for a production environment; it requires additional security measures. Please put your recommendations in the comments! :grinning:

Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let's integrate this into Docker as well.

Consider also the compact setup with the Caddy webserver.

Setup

A few remarks to start with:

  • my testing system: Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-97-generic x86_64)
  • first install some software with apt install docker docker.io jq git
  • create an unprivileged user account, e.g. mastodon

    adduser --disabled-login mastodon
    adduser mastodon docker
    adduser mastodon sudo # optional, remove later
    su mastodon # switch to that user
    
  • my docker compose: Docker Compose version v2.2.3 (based on go)

    install docker compose in 3 lines:

    mkdir -p ~/.docker/cli-plugins
    curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    
  • my testing domain (for this example): social.host

  • my dot-env file .env for docker compose:

    LETS_ENCRYPT_EMAIL=admin-mail@social.host
    MASTODON_DOMAIN=social.host
    
  • I have commented out build: ., because I prefer to rely on the official images from Docker Hub.

  • With little effort, I enable full-text search with Elasticsearch as well.

  • The support of VIRTUAL_PATH is brand-new in nginx-proxy. It is not yet in the main branch, so we rely on nginxproxy/nginx-proxy:dev-alpine.

  • The Mastodon code also ships an nginx configuration. However, nginx-proxy generates much of it as well, so I currently believe no further configuration is required here. Note that nginx-proxy allows adding custom elements to the generated configuration.

  • The setup places all databases and uploaded files in the folder mastodon.
# file: 'docker-compose.yml'
version: "3.7"

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:dev-alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - ./nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/logs:/var/log/nginx
    networks:
      - external_network
      - internal_network

  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    volumes_from:
      - nginx-proxy
    volumes:
      - ./nginx/certs:/etc/nginx/certs:rw
      - ./nginx/acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      DEFAULT_EMAIL: "${LETS_ENCRYPT_EMAIL}"
    networks:
      - external_network

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    volumes:
      - ./mastodon/postgres14:/var/lib/postgresql/data
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust

  redis:
    restart: always
    image: redis:6-alpine
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - ./mastodon/redis:/data

  # elasticsearch
  es:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.name=es-mastodon"
      - "discovery.type=single-node"
      - "bootstrap.memory_lock=true"
    networks:
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
    volumes:
      - ./mastodon/elasticsearch:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1

  web:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - db
      - redis
      - es
    volumes:
      - ./mastodon/public/system:/mastodon/public/system
    environment:
      VIRTUAL_HOST: "${MASTODON_DOMAIN}"
      VIRTUAL_PATH: "/"
      VIRTUAL_PORT: 3000
      LETSENCRYPT_HOST: "${MASTODON_DOMAIN}"
      ES_HOST: es
      ES_ENABLED: "true"

  streaming:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1"]
    ports:
      - "127.0.0.1:4000:4000"
    depends_on:
      - db
      - redis
    environment:
      VIRTUAL_HOST: "${MASTODON_DOMAIN}"
      VIRTUAL_PATH: "/api/v1/streaming"
      VIRTUAL_PORT: 4000

  sidekiq:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      # - external_network
      - internal_network
    volumes:
      - ./mastodon/public/system:/mastodon/public/system

volumes:
  html:

networks:
  external_network:
  internal_network:
    internal: true

With this file in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.

# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch

# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 web bundle exec rake mastodon:setup

# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup

# launch mastodon
docker compose up -d

# look into the logs, -f for live logs
docker compose logs -f
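A note on the custom nginx elements mentioned in the remarks above: nginx-proxy picks up per-host snippets from the directory mounted at /etc/nginx/vhost.d (./nginx/vhost in the compose file), named after the VIRTUAL_HOST. A plausible tweak, assuming the social.host example domain, is raising the request body limit for Mastodon media uploads (the 80m value is my assumption, not from the original setup):

```shell
# Sketch: drop a per-host snippet into the directory the compose file
# bind-mounts to /etc/nginx/vhost.d; nginx-proxy includes files named
# after the VIRTUAL_HOST inside that host's server block.
cd "$(mktemp -d)"       # stand-in for the compose project directory
mkdir -p nginx/vhost
printf 'client_max_body_size 80m;\n' > nginx/vhost/social.host
cat nginx/vhost/social.host
```

After reloading nginx-proxy, the snippet is merged into the generated server block for social.host.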

Mastodon Twitter Crossposter

To set up the Mastodon Twitter Poster for crossposting, add the following services to the docker-compose.yml:

crossposter-db:
  restart: always
  image: postgres:14-alpine
  container_name: "crossposter-db"
  healthcheck:
    test: pg_isready -U postgres
  environment:    
    POSTGRES_HOST_AUTH_METHOD: trust
  volumes:
    - ./crossposter/postgres:/var/lib/postgresql/data
  networks:
    - internal_network

crossposter-redis:
  restart: always
  image: redis:6.0-alpine
  container_name: "crossposter-redis"
  healthcheck:
    test: redis-cli ping
  volumes:
    - ./crossposter/redis:/data
  networks:
    - internal_network

crossposter-web:
  restart: always
  build: https://github.com/renatolond/mastodon-twitter-poster.git#main
  image: mastodon-twitter-poster
  container_name: "crossposter-web"
  env_file: crossposter.env.production
  environment:
    ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
    DB_HOST: crossposter-db
    REDIS_URL: "redis://crossposter-redis"
  networks:
    - internal_network
    - external_network
  expose:
    - "3000"
  depends_on:
    - crossposter-db

crossposter-sidekiq:
  restart: always
  build: https://github.com/renatolond/mastodon-twitter-poster.git#main
  image: mastodon-twitter-poster
  container_name: "crossposter-sidekiq"
  env_file: crossposter.env.production
  environment:
    ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
    REDIS_URL: "redis://crossposter-redis"
    DB_HOST: crossposter-db
  command: bundle exec sidekiq -c 5 -q default
  healthcheck:
    test: ps aux | grep '[s]idekiq\ 6' || false
  networks:
    # - external_network
    - internal_network
  depends_on:
    - crossposter-db
    - crossposter-redis

The crossposter requires a database setup before the containers can be launched:

docker compose run --rm crossposter-web bundle exec rake db:setup


Nathan Wolf

GeckoLinux Pantheon | Review from an openSUSE User

GeckoLinux is an openSUSE-based operating system using either Leap or Tumbleweed underpinnings. I should also mention that openSUSE isn't the distribution but the project, so that first sentence may be, in a way, factually inaccurate. That said, with any corrections there, GeckoLinux has many offerings that I find incredibly fascinating. It is […]

openSUSE Tumbleweed – Review of the week 2022/05

Dear Tumbleweed users and hackers,

Week 5 – 5 snapshots. I hope nobody expects that we keep up the 'week number == number of snapshots' pace, or I'll be in deep trouble very soon. Looking at the staging dashboard, it seems the vacation period is definitely over: almost all stagings are full, but as not too many of the submitted changes have proven to be broken, the stagings are still manageable. It gets really difficult if we end up with a lot of breakage in the stagings and then need to chase fixes.

The five delivered snapshots (0128, 0130, 0131, 0201, and 0202) brought you these changes:

  • Polkit with fix for pwnkit (CVE-2021-4034)
  • Switched default Ruby version to Ruby 3.1
  • Dropped Ruby 2.7 and Ruby 3.0 (including all rubyx.y-rubygem-* packages)
  • Mozilla Firefox 96.0.3
  • git 2.35.1
  • Linux kernel 5.16.4
  • 389-ds 2.0.14
  • Wireplumber 0.4.7
  • pipewire 0.3.44

As already mentioned, the staging projects are mostly filled. The largest changes being integrated are:

  • Linux kernel 5.16.5: fix for cifs crash, boo#1195360; full drm switch will follow later
  • KDE Gear 21.12.2
  • systemd: drop SUSE-specific sysv support. Generic, upstream-based sysv support remains in place. See the original announcement on the Factory mailing list
  • KDE Plasma 5.24 (currently beta is staged, release scheduled for Feb 8th)
  • Lua 5.4.4
  • rpm will no longer pull in glibc-locale, but only glibc-locale-base. See this discussion
  • glibc 2.35
  • Python 3.6 interpreter will be removed (We have roughly 100 python36-FOO packages left)
  • Python 3.10 as the distro default interpreter (a bit down the line)
  • Preparation for the GCC 12 introduction has started, so we are as ready as possible when the upstream release happens.
openSUSE News

Version Control Tool, IRC Client Update in Tumbleweed

This week openSUSE Tumbleweed had a steady pace of snapshots with four releases users could zypper dup their systems to, which brought updates for an Internet Relay Chat client and a new default version of Ruby.

The version control package git was updated in snapshot 20220201. The 2.35.1 version of git now shows the number of stash entries with --show-stash like the normal output does. The color palette used by git grep has been updated to match that of GNU grep. The Mozilla Firefox 96.0.3 update fixed an issue that allowed unexpected data to be submitted in some of the search telemetry. Google's data interchange format protobuf 3.19.4 fixed data-loss bugs occurring when the number of optional fields in a message is an exact multiple of 32; this affected both Ruby and PHP in the package. Other packages to update in the snapshot were yast 4.4.43, python-fsspec 2022.1.0, suse-module-tools 16.0.19, and yast2-network 4.4.35, which transitioned to inclusive naming for asymmetric communication.

The IRC client hexchat was the single package updated in snapshot 20220131. The 2.16.0 version of hexchat updated the network list and included Libera.Chat as the default. The chat client also fixed miscellaneous parsing issues and added support for strikethrough formatting.

The 20220130 snapshot updated pipewire 0.3.44; this audio and video package changed some properties to make it possible to configure buffer sizes larger than 8192, which is the hardcoded limit in JACK applications. The package also made it possible to run a minimal PipeWire server without a session manager, enough to run JACK clients. The default Ruby was switched to version 3.1. The version brings improved debugging performance, supports remote debugging and merges YJIT, a new in-process JIT compiler developed by Shopify. Salt 3004 arrived in the snapshot and offers new features. Transactional systems like MicroOS present challenges for modules, yet Salt 3004 supports such atomicity-type systems via two new modules, transactional_update and rebootmgr, and a new executor, transactional_update; the modules will help to treat the transactional system transparently. The glib2 2.70.3 update fixed a potential data loss due to a missing fsync when saving files on btrfs. Other packages to update in the snapshot were snapper 0.9.1, libstorage-ng 4.4.78, bolt 0.9.2, freeipmi 1.6.9 and more.

Snapshot 20220128 also had an update of libstorage-ng; the 4.4.77 version provided translations for the Brazilian Portuguese language. Another protobuf update arrived at the beginning of the week; the 3.19.3 version improved parsing performance and aligned dependency handling with best practices for building the package. Network and hardware utility package ethtool 5.16 added a couple of new features, like the use of memory maps for module EEPROM parsing, and fixed the dumping of the FEC mode shown with --show-fec. The sendmail 8.17.1 package addressed several potential memory leaks and other similar problems related to error handling. PipeWire's policy manager package wireplumber 0.4.7 fixed a regression that caused the selection of the default audio sources to be delayed and fixed a regression affecting the echo-cancellation pipewire module. Several YaST and RubyGem packages were also updated in the snapshot.

Open Build Service

UI and Reporting Improvements for the Continuous Integration with OBS and GitHub/GitLab

Today, we have some improvements around the continuous integration we unveiled in one of our previous blog posts. We expanded the workflow run interface, added a new type of step to configure repository flags, and introduced a breaking change to the configure_repositories step. Keep in mind that this feature is still part of the beta program, so join! We started off the continuous integration between OBS and GitHub/GitLab in May...


Working with JSON logs from sudo in syslog-ng

This weekend I am going to give a talk about sudo in the security track of FOSDEM. I will say a few words about logging at each major point I mention, but I cannot go into much detail there. So, consider this blog both a teaser and an extension of my FOSDEM talk. Along the way you will learn about JSON logging in sudo, chroot support, logging sub-commands, how to work with these logs in syslog-ng, and other new sudo features.
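As a small taste of what such processing looks like, here is a shell sketch that pulls one field out of a JSON-formatted log line. The sample line and its field names are made up for illustration, not sudo's exact schema; in practice syslog-ng's json-parser() or a tool like jq does this properly:

```shell
# A made-up JSON log line standing in for sudo's JSON output
# (field names are illustrative, not sudo's real schema):
line='{"submituser":"alice","runuser":"root","command":"/usr/bin/id"}'

# Extract one field with sed -- a quick-and-dirty sketch; real parsing
# belongs to syslog-ng's json-parser() or jq.
cmd=$(printf '%s' "$line" | sed -n 's/.*"command":"\([^"]*\)".*/\1/p')
echo "alice ran: $cmd"
```

With a proper JSON parser you get every field as a name-value pair instead of fishing them out one by one with regular expressions.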

Read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/working-with-json-logs-from-sudo-in-syslog-ng
