
Santiago Zarate

Remove file from the last commit in git

  • So, you want to remove that pesky file from your last commit?
  • By accident (naturally, as you and I are perfect beings) a file was committed that should not have been?
  • The cat went over the keyboard and now there’s an extra file in your commit?

If the answer to any of the above is yes, here’s how to do it without pain (taking into account that you want to do it on the last commit; if you need to do it in the middle of a rebase, see the previous post or combine this trick with a rebase to edit an earlier commit).

git restore --source=HEAD^ pesky.file

You can always check out the file again, or use some witchcraft extracted from man git-restore to do it all at once.
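If the goal is to make the file disappear from the commit entirely (not just restore its old content), a short sequence of git rm --cached and git commit --amend does the whole job. Here is a sketch in a throwaway repository (file names are made up for the demo):

```shell
#!/bin/sh
# Demo in a throwaway repo: commit two files, then drop one from the last commit.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo keep > wanted.file
echo oops > pesky.file
git add wanted.file pesky.file
git commit -q -m "feature"
# remove pesky.file from the last commit, but keep it on disk
git rm -q --cached pesky.file
git commit -q --amend --no-edit
git show --name-only --format= HEAD
```

After the amend, the last commit lists only wanted.file, while pesky.file still sits untracked in the working directory.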

This is blatantly stolen from https://devconnected.com/how-to-remove-files-from-git-commit/, but man git could get you there too, given enough reading.

openSUSE News

Latest Plasma Lands in Tumbleweed, Set for Leap Beta

This week’s openSUSE Tumbleweed snapshots delivered exciting updates not only to rolling-release users, but also to users of the long-established Leap release.

KDE’s next Long-Term Support (LTS) release, Plasma 5.24, arrived in a recent snapshot, and it brings the “Perfect Harmony” for both Tumbleweed and Leap users. Plasma 5.24 will be one of the Desktop Environments (DE) in Leap 15.4; the beta version of Leap 15.4 is expected to be released for testing with the new Plasma version within the next couple of weeks, according to the roadmap.

Plasma 5.24 arrived in snapshot 20220207. The release has improvements in looks and ease of use, and it is the final Plasma 5 version until the transition to Plasma 6. Breeze, Plasma’s default theme, was changed, giving the DE improved visual consistency. Breeze lets users pick accent colors, and the light/dark color preference gives non-KDE apps that respect the FreeDesktop preferences an automatic light or dark switch based on the chosen color scheme. Discover gives users the option to automatically restart after an update is completed; by simply clicking the checkbox at the bottom of the Updates page, users can take a break and come back to a rebooted, updated system. There are several other feature improvements in the update, and people can watch the creation of the Honeywave wallpaper developed for the release, as it was streamed live on YouTube.

The 5.16.5 Linux Kernel update brought several changes for semiconductor company and openSUSE sponsor Marvell; most of these centered on fixes and enablements for their OCTEON® TX2 processor family. Facebook’s fast compression package zstd updated to version 1.5.2, did some spec file cleanup and enabled a zlib/gzip compatible backend, since compression library zlib is shown to be significantly faster than gzip. There were about 50 RubyGem packages updated in the snapshot, and rubygem-spring 4.0.0 was the only major version update among them.

Some of the daily snapshots this week were large, and the 20220206 snapshot had a variety of packages updated. An update of Mesa 21.3.5 and Mesa-drivers 21.3.5 fixed Zink driver bugs. Finnish translations were made with the update of libstorage-ng 4.4.79, and openSUSE’s opensuse-welcome 0.1.8 package also updated translations. A quarterly update of Google’s regular expression library re2 20220201 addressed a -Wunused-but-set-variable warning from Clang 13.x, and yast2-storage-ng 4.4.35 improved integration with the yast2-nfs-client to offer a consistent user experience. Other packages to update in the snapshot were xwayland 22.0.99.902, llvm13 13.0.1 and python-Pillow 9.0.1.

A total of six packages arrived in snapshot 20220205. Text editor vim 8.2.4286 states in the changelog that entering a character with CTRL-V may include modifiers. The text editor update also fixed two Common Vulnerabilities and Exposures, CVE-2022-0417 and CVE-2022-0393. File system utility update e2fsprogs 1.46.5 fixed a crash and fixed the handling of resizing a file system that would exceed inode limitations. Light-weight programming language lua54 5.4.4 fixed some bugs and removed two patches, and ncurses 6.3.20220129 added a warning in the configuration script if the file specified for --with-caps does not exist. Both yast2-core 4.4.1 and yast2-country 4.4.11 were also updated; the latter fixed arguments for localectl set-locale.

The 20220204 snapshot brought KDE Gear 21.12.2. File manager Dolphin improved zooming of files. Video editor Kdenlive fixed an issue that sometimes prevented moving grouped clips to the right when only one empty frame was present. It also fixed a freeze when trying to drag a clip that was just added to the Bin. Kmail fixed a build issue with GNU Compiler Collection 12, and learning tool kgeography fixed the color of the Howland, Baker and Jarvis islands of the United States Minor Outlying Islands. The Universal Command Line Interface for Amazon Web Services, aws-cli 1.22.46, updated requirements in the spec file from setup.py. The release added new NNA accelerator compilation support for SageMaker Neo, which is used for optimizing machine learning. A Qt 6 development pattern was added with the patterns-kde 20220203 update. Both findutils 4.9.0 and a git + version of mobile shell mosh 1.3.2 were updated in the snapshot.

The two snapshots starting off the week were loaded with several packages. While snapshot 20220202 was mostly filled with RubyGem packages, snapshot 20220203 had more updates of Python Package Index and YaST packages.

Snapshot 20220203 had multiple package updates for a broad range of users. Audio package PipeWire should again be able to play sound in Zoom, Telegram and other apps with the 0.3.45 update; the default sink and source names and properties in PipeWire were also improved. Support for seamless and saliency blending of a foreground and background image was added in the ImageMagick 7.1.0.22 update. The screen reader package for individuals with visual impairments, orca, updated to version 41.2, fixed a couple of bugs, improved handling performance and added more event-flood detection. The kernel dump helpers package kdump 1.0.2 fixed network interface naming and added a dependency on kdumptool. Other packages to update in 20220203 were sudo 1.9.9 and a git + update for samba 4.15.5.

Snapshot 20220202 updated to the 5.16.4 Linux Kernel, which had a decent amount of improvements for Flash Memory through Memory Technology Device changes. XML parser library expat 2.4.4 fixed CVE-2022-23852 and CVE-2022-23990, which involved integer overflows. Out of the 26 RubyGem packages updated in the snapshot, rubygem-hashie 5.0.0 was the only one to have a major version update; it added exporting a normal Hash from an indifferent one through the to_hash method.


FOSDEM 2022: my experiences, sudo talk answers

I spent my last weekend in Brussels at FOSDEM. Well, not really: while I had a couple of Belgian beers, the conference itself was a virtual event and I was at home in Budapest. It’s the second year that FOSDEM has been virtual, and yet again I can state that it’s the best virtual event of the year. I had two talks this year. After my second talk, I got some questions during the Q & A session which I could not answer, so I will try to answer them here. But before that, let me share my experience!

Experience

Why do I say that FOSDEM is the best virtual event? Of course, even FOSDEM cannot re-create everything from a real-life event, but it comes probably closest, and there are even some improvements compared to IRL events.

All talks are pre-recorded and recordings are played back automatically, so there are no scheduling problems. And because they are pre-recorded, even if the presenter has technical problems (I had an unstable Internet connection due to storm damage), everyone can still watch the talks.

Talks are available as a simple video stream, but if you register, there is a live chat where you can ask and upvote questions. There were lively discussions during both my syslog-ng and sudo talks, and questions are also answered live on the video stream after the playback finishes.

When the time is up, attendees can stay in the virtual room and watch the next talk start automagically, or they can have a hallway track with the presenter. Instructions are printed in the chat, and I had some good discussions after my talks this way.

Sudo talk answers

“do you happen to know why sudo -e on Fedora CoreOS has been broken for a while?”

This problem has already been fixed upstream; see https://github.com/sudo-project/sudo/issues/122. It will most likely be fixed in Fedora CoreOS as soon as sudo is updated to the latest version (or the patch is picked up).

“Does sudo support logfmt, which is somewhat more readable? Halfway between fully structured and human-readable.”

No. Regular sudo logs are pretty similar, only slightly more complex, and syslog-ng can parse them (see the sudo parser):

Feb  7 17:51:51 czplaptop sudo[21742]:   czanik : TTY=pts/1 ; PWD=/home/czanik ; USER=root ; COMMAND=/bin/bash
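Lines like the one above can be handled with syslog-ng's bundled sudo parser. A minimal, untested configuration sketch (the destination path and the @version line are made up; adjust both to your installation):

```
@version: 3.35
@include "scl.conf"

source s_local { system(); };

parser p_sudo { sudo-parser(); };

destination d_sudo {
    file("/var/log/sudo-parsed.log" template("$(format-json --scope nv-pairs)\n"));
};

log { source(s_local); parser(p_sudo); destination(d_sudo); };
```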

Starting with version 1.9.4 there is also JSON formatting, which is less human-readable, but can be parsed by just about anything and contains a lot more information:

Defaults log_format=json

“Do you have any feedback regarding using sudo with SELinux?”

It is possible to specify an SELinux role and optional type in sudoers rules. The role/type can also be specified on the command line (see -r and -t options). This makes it possible to do SELinux-style role-based access control using sudo. Basically, you can use sudo to run commands with a specific SELinux role/type just like you would with a traditional Linux user.
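As a hedged sketch (the user, role, type and command are invented for illustration), such a sudoers rule might look like this:

```
# let alice run psql as root, confined to a dedicated SELinux role/type
alice ALL=(ALL) ROLE=dbadm_r TYPE=dbadm_t /usr/bin/psql
```

The same role and type could be requested ad hoc on the command line with sudo -r dbadm_r -t dbadm_t /usr/bin/psql.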

“What kind of sudo extensions are possible using the C/python API?”

I listed a few in my live answer, but here is the documentation listing all possibilities:

“Are session recordings encrypted?”

Session recordings are encrypted while in transit, see Securing the sudo to sudo_logsrvd connection. Session recordings are not stored in an encrypted format by sudo and sudo_logsrvd.
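For reference, shipping recordings to sudo_logsrvd over TLS is configured in sudoers roughly like this (the host name and certificate path are made up; see the sudoers manual for the exact options):

```
# send I/O logs to a central sudo_logsrvd over TLS
Defaults log_servers = logsrv.example.com:30344(tls)
Defaults log_server_cabundle = /etc/sudo/ca.pem
```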

If you want to make sure that your sudo session recordings are tamper-proof, check out Safeguard for Privileged Sessions (a commercial product), which supports collecting sudo session recordings, saves them in encrypted, time-stamped storage, and can play back recordings in a web-based interface.

Sudo logo

Robert Riemann

Mastodon Setup with Docker and Caddy

In my previous post, we set up a Mastodon server using Docker and nginx-proxy. In this post, we use the web server Caddy instead. I only discovered Caddy a week ago. The configuration of Mastodon is now even simpler and shorter: Caddy automatically redirects traffic from HTTP to HTTPS and configures HTTPS for all domains via Let’s Encrypt. :rocket:
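To illustrate how little Caddy needs, a minimal (hypothetical) Caddyfile for a single reverse-proxied site is just a few lines; HTTPS and the HTTP-to-HTTPS redirect come for free:

```
social.host {
        reverse_proxy mastodon-web:3000
}
```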

Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.

Setup

A few remarks to start with:

  • my testing system: Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-97-generic x86_64)
  • install first some software with apt install docker docker.io caddy jq git
  • create an unprivileged user account, e.g. mastodon

    adduser --disabled-login mastodon
    adduser mastodon docker
    adduser mastodon sudo # optional, remove later
    su mastodon # switch to that user
    
  • my docker compose: Docker Compose version v2.2.3 (based on go)

    install docker compose in 3 lines:

    mkdir -p ~/.docker/cli-plugins
    curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    
  • my testing domain (for this example): social.host

  • my dot-env file .env for docker compose:

    LETS_ENCRYPT_EMAIL=admin-mail@social.host
    MASTODON_DOMAIN=social.host
    FRONTEND_SUBNET="172.22.0.0/16"
    # check the latest version here: https://hub.docker.com/r/tootsuite/mastodon/tags
    MASTODON_VERSION=v3.4.6
    
  • I have commented out build: ., because I prefer to rely on the official images from Docker Hub.

  • With little effort, we also enable full-text search with elasticsearch.

  • The setup places all databases and uploaded files in the folder mastodon.

  • We use a named volume mastodon-public to expose the static files from the mastodon-web container to the Caddy webserver. Caddy directly serves static files for improved speed. Awesome! :star2:

  • The setup comes with the Mastodon Twitter Crossposter. You need to set up an extra subdomain for it. Remove it from the docker-compose.yml in case you have no use for it.

  • Using extra_hosts, we expose the Docker host to the Mastodon Sidekiq container with "host.docker.internal:host-gateway", in case you configure Mastodon to use a mail transfer agent (e.g. postfix) running on the host. In that case, use SMTP_SERVER=host.docker.internal.

  • Consider replacing the git repository used to build the crossposter with a local copy for more control over updates.

# file: 'docker-compose.yml'
version: "3.7"

services:
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy/etc-caddy:/etc/caddy
      - ./caddy/data:/data # Optional
      - ./caddy/config:/config # Optional
      - ./caddy/logs:/logs
      - mastodon-public:/srv/mastodon/public:ro
    env_file: .env
    # helps crossposter resolve the mastodon server internally
    hostname: "${MASTODON_DOMAIN}"
    networks:
      frontend:
        aliases:
          - "${MASTODON_DOMAIN}"
      backend:

  mastodon-db:
    restart: always
    image: postgres:14-alpine
    container_name: "mastodon-db"
    healthcheck:
      test: pg_isready -U postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - "./mastodon/postgres:/var/lib/postgresql/data"
    networks:
      - backend

  mastodon-redis:
    restart: always
    image: redis:6.0-alpine
    container_name: "mastodon-redis"
    healthcheck:
      test: redis-cli ping
    volumes:
      - ./mastodon/redis:/data
    networks:
      - backend

  mastodon-elastic:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    container_name: "mastodon-elastic"
    healthcheck:
      test: curl --silent --fail localhost:9200/_cluster/health || exit 1
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      cluster.name: es-mastodon
      discovery.type: single-node
      bootstrap.memory_lock: "true"
    volumes:
      - ./mastodon/elasticsearch:/usr/share/elasticsearch/data
    networks:
      - backend
    ulimits:
      memlock:
        soft: -1
        hard: -1

  mastodon-web:
    restart: always
    image: "tootsuite/mastodon:${MASTODON_VERSION}"
    container_name: "mastodon-web"
    healthcheck:
      test: wget -q --spider --proxy=off localhost:3000/health || exit 1
    env_file: mastodon.env.production
    environment:
      LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
      SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
      ES_HOST: mastodon-elastic
      ES_ENABLED: "true"
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    expose:
      - "3000"
    depends_on:
      - mastodon-db
      - mastodon-redis
      - mastodon-elastic
    volumes:
      # https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers
      - mastodon-public:/opt/mastodon/public # map static files in volume for caddy
      - ./mastodon/public/system:/opt/mastodon/public/system
    networks:
      - frontend
      - backend
    extra_hosts:
      - "host.docker.internal:host-gateway"

  mastodon-streaming:
    restart: always
    image: "tootsuite/mastodon:${MASTODON_VERSION}"
    container_name: "mastodon-streaming"
    healthcheck:
      test: wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1
    env_file: mastodon.env.production
    environment:
      LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
      SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
      ES_HOST: mastodon-elastic
      ES_ENABLED: "true"
    command: node ./streaming
    expose:
      - "4000"
    depends_on:
      - mastodon-db
      - mastodon-redis
    networks:
      - frontend
      - backend

  mastodon-sidekiq:
    restart: always
    image: "tootsuite/mastodon:${MASTODON_VERSION}"
    container_name: "mastodon-sidekiq"
    healthcheck:
      test: ps aux | grep '[s]idekiq\ 6' || false
    env_file: mastodon.env.production
    environment:
      LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
      SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
      ES_HOST: mastodon-elastic
      ES_ENABLED: "true"
    command: bundle exec sidekiq
    depends_on:
      - mastodon-db
      - mastodon-redis
    volumes:
      - ./mastodon/public/system:/mastodon/public/system
    networks:
      - frontend
      - backend
    extra_hosts:
      - "host.docker.internal:host-gateway"

  crossposter-db:
    restart: always
    image: postgres:14-alpine
    container_name: "crossposter-db"
    healthcheck:
      test: pg_isready -U postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - ./crossposter/postgres:/var/lib/postgresql/data
    networks:
      - backend

  crossposter-redis:
    restart: always
    image: redis:6.0-alpine
    container_name: "crossposter-redis"
    healthcheck:
      test: redis-cli ping
    volumes:
      - ./crossposter/redis:/data
    networks:
      - backend

  crossposter-web:
    restart: always
    build: https://github.com/renatolond/mastodon-twitter-poster.git#main
    image: mastodon-twitter-poster
    container_name: "crossposter-web"
    env_file: crossposter.env.production
    environment:
      CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
    expose:
      - "3000"
    depends_on:
      - crossposter-db
    networks:
      - frontend
      - backend

  crossposter-sidekiq:
    restart: always
    build: https://github.com/renatolond/mastodon-twitter-poster.git#main
    image: mastodon-twitter-poster
    container_name: "crossposter-sidekiq"
    healthcheck:
      test: ps aux | grep '[s]idekiq\ 6' || false
    env_file: crossposter.env.production
    environment:
      ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
      CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
    command: bundle exec sidekiq -c 5 -q default
    depends_on:
      - crossposter-db
      - crossposter-redis
    networks:
      - frontend
      - backend

volumes:
  mastodon-public:

networks:
  frontend:
    name: "${COMPOSE_PROJECT_NAME}_frontend"
    ipam:
      config:
        - subnet: "${FRONTEND_SUBNET}"
  backend:
    name: "${COMPOSE_PROJECT_NAME}_backend"
    internal: true

The web server Caddy is configured using a Caddyfile stored in ./caddy/etc-caddy. I started with a config I found on GitHub.

# file: 'Caddyfile'
# kate: indent-width 8; space-indent on;

{
        # Global options block. Entirely optional, https is on by default
        # Optional email key for lets encrypt
        email {$LETS_ENCRYPT_EMAIL}
        # Optional staging lets encrypt for testing. Comment out for production.
        # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory

        # admin off
}

{$MASTODON_DOMAIN} {
        log {
                # format single_field common_log
                output file /logs/access.log
        }

        root * /srv/mastodon/public

        encode gzip

        @static file

        handle @static {
                file_server
        }

        handle /api/v1/streaming* {
                reverse_proxy mastodon-streaming:4000
        }

        handle {
                reverse_proxy mastodon-web:3000
        }

        header {
                Strict-Transport-Security "max-age=31536000;"
        }

        header /sw.js  Cache-Control "public, max-age=0"
        header /emoji* Cache-Control "public, max-age=31536000, immutable"
        header /packs* Cache-Control "public, max-age=31536000, immutable"
        header /system/accounts/avatars* Cache-Control "public, max-age=31536000, immutable"
        header /system/media_attachments/files* Cache-Control "public, max-age=31536000, immutable"

        handle_errors {
                @5xx expression `{http.error.status_code} >= 500 && {http.error.status_code} < 600`
                rewrite @5xx /500.html
                file_server
        }
}

crossposter.{$MASTODON_DOMAIN} {
        log {
                # format single_field common_log
                output file /logs/access-crossposter.log
        }

        encode gzip

        handle {
                reverse_proxy crossposter-web:3000
        }

}

With these files in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.

# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch

# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 mastodon-web bundle exec rake mastodon:setup

# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production mastodon-web bundle exec rake db:setup

# crossposter
mkdir crossposter
docker compose run --rm crossposter-web bundle exec rake db:setup

# launch mastodon and crossposter
docker compose up -d

# look into the logs, -f for live logs
docker compose logs -f

Troubleshooting

  1. I had many problems letting the mastodon container connect to a mail transport agent (MTA) on my host. Eventually, I solved it with an extra firewall rule: ufw allow proto tcp from any to 172.17.0.1 port 25.

  2. The mail issue can be avoided by a) using a SaaS such as Mailgun, Mailjet or Sendinblue, or b) using another Docker container with postfix that is in the frontend network as well. Look at Peertube’s docker-compose file for some inspiration.
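If you go with an MTA running on the host (the extra_hosts remark from the setup section), the relevant entries in mastodon.env.production might look roughly like this (values are illustrative, option names are Mastodon's standard SMTP settings; verify against your Mastodon version):

```
SMTP_SERVER=host.docker.internal
SMTP_PORT=25
SMTP_AUTH_METHOD=none
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_FROM_ADDRESS=notifications@social.host
```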


FreeAptitude

openSUSE 15.2 to 15.3 upgrade notes

In a previous article I have shown how to upgrade a distro using zypper, but after the first reboot some issues may still appear; that’s why I collected all the changes and tweaks I applied when switching from openSUSE 15.2 to 15.3.

dracut: fix hibernate after fresh install

Just for fun, I finally installed a fresh Tumbleweed on one of my machines to find out whether I'm missing out on new features that are masked by always just updating the old installation.

One feature that I found missing was that hibernation no longer worked. Or, to be more exact, hibernation was working, but resume was not. Investigating the issue, I found that dracut's "resume" module was not included in the initramfs, which in turn led to the initramfs not even trying to resume.

The dracut mechanism has some logic to find out whether suspend and resume are configured, and when it decides they are not, it simply skips adding the "useless" resume module.

Unfortunately, the check is faulty IMHO. It checks whether the resume device is configured in the kernel, and if it is, it adds the module to the initramfs. The problem is that this kernel config is written by the resume code in the initrd... so during installation this is not the case, and as a result the module is not added to the initrd, which leads to the config not being set on the next reboot...

The trivial fix is to call dracut once with the option to include the resume module:

dracut -a resume -f

After a reboot, the kernel config will be set and in the future the resume module will always be included in the initrd.
For an even safer setup, adding

add_dracutmodules+=" resume "

to /etc/dracut.conf.d/99-local.conf might be even better.

Of course I wanted to report a bug about the issue, but then found out that there are plenty already:

syslog-ng-future.blog? Is this a fork or what?

Open source licensing may seem a boring topic, but Balázs Scheidler finds it fascinating. It is what allows him to work on syslog-ng even though Balabit was acquired. He writes:

“I mentioned in the previous post that I would like to focus on syslog-ng and put it more into the spotlight. I also mentioned that Balabit, the company I was a founder of and the commercial sponsor behind syslog-ng, was acquired by One Identity ~4 years ago. How does this add up? Who owns the Intellectual Property (IP) for syslog-ng? Who am I or this blog affiliated with?”

Read the rest of his blog at https://syslog-ng-future.blog/syslog-ng-future-blog-is-this-a-fork-or-what/

syslog-ng logo


cvtsudoers: merging multiple sudoers files into one

We learned in my previous sudo blog that cvtsudoers is not just for LDAP. Version 1.9.9 of sudo extends the querying possibilities of cvtsudoers further and adds a brand new feature: merging multiple sudoers files into one. Both are especially useful when you have complex configurations. Querying lets you better understand what the various rules in your sudoers file allow. Merging helps you combine multiple configurations into one, so you do not have to maintain a separate sudoers file on each of your hosts.

Read the rest of my blog on the sudo website at https://www.sudo.ws/posts/2022/02/cvtsudoers-merging-multiple-sudoers-files-into-one/

Sudo logo

Robert Riemann

Mastodon Setup with Docker and nginx-proxy

I have been working on a Mastodon setup that is easy to repeat and share. A setup with very few steps. Please consider that this setup is not enough for a production environment; it requires additional security measures. Please put your recommendations in the comments! :grinning:

Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.

Consider also the compact setup with the Caddy webserver.

Setup

A few remarks to start with:

  • my testing system: Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-97-generic x86_64)
  • install first some software with apt install docker docker.io jq git
  • create an unprivileged user account, e.g. mastodon

    adduser --disabled-login mastodon
    adduser mastodon docker
    adduser mastodon sudo # optional, remove later
    su mastodon # switch to that user
    
  • my docker compose: Docker Compose version v2.2.3 (based on go)

    install docker compose in 3 lines:

    mkdir -p ~/.docker/cli-plugins
    curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
    chmod +x ~/.docker/cli-plugins/docker-compose
    
  • my testing domain (for this example): social.host

  • my dot-env file .env for docker compose:

    LETS_ENCRYPT_EMAIL=admin-mail@social.host
    MASTODON_DOMAIN=social.host
    
  • I have commented out build: ., because I prefer to rely on the official images from Docker Hub.

  • With little effort, I also enable full-text search with elasticsearch.

  • The support of VIRTUAL_PATH is brand-new in nginx-proxy. It is not yet in the main branch, so we rely on nginxproxy/nginx-proxy:dev-alpine.

  • The Mastodon code also ships an nginx configuration. However, nginx-proxy creates much of it as well, so I currently believe no further configuration is required here. Still, nginx-proxy allows adding custom elements to the generated configuration.

  • The setup places all databases and uploaded files in the folder mastodon.

# file: 'docker-compose.yml'
version: "3.7"

services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:dev-alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - ./nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/logs:/var/log/nginx
    networks:
      - external_network
      - internal_network

  acme-companion:
    image: nginxproxy/acme-companion
    container_name: nginx-proxy-acme
    volumes_from:
      - nginx-proxy
    volumes:
      - ./nginx/certs:/etc/nginx/certs:rw
      - ./nginx/acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      DEFAULT_EMAIL: "${LETS_ENCRYPT_EMAIL}"
    networks:
      - external_network

  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
    volumes:
      - ./mastodon/postgres14:/var/lib/postgresql/data
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust

  redis:
    restart: always
    image: redis:6-alpine
    networks:
      - internal_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
    volumes:
      - ./mastodon/redis:/data

  # elasticsearch
  es:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.name=es-mastodon"
      - "discovery.type=single-node"
      - "bootstrap.memory_lock=true"
    networks:
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
    volumes:
      - ./mastodon/elasticsearch:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1

  web:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
    ports:
      - "127.0.0.1:3000:3000"
    depends_on:
      - db
      - redis
      - es
    volumes:
      - ./mastodon/public/system:/mastodon/public/system
    environment:
      VIRTUAL_HOST: "${MASTODON_DOMAIN}"
      VIRTUAL_PATH: "/"
      VIRTUAL_PORT: 3000
      LETSENCRYPT_HOST: "${MASTODON_DOMAIN}"
      ES_HOST: es
      ES_ENABLED: "true"

  streaming:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: node ./streaming
    networks:
      - external_network
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1"]
    ports:
      - "127.0.0.1:4000:4000"
    depends_on:
      - db
      - redis
    environment:
      VIRTUAL_HOST: "${MASTODON_DOMAIN}"
      VIRTUAL_PATH: "/api/v1/streaming"
      VIRTUAL_PORT: 4000

  sidekiq:
    # build: .
    image: tootsuite/mastodon:v3.4.6
    restart: always
    env_file: mastodon.env.production
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      # - external_network
      - internal_network
    volumes:
      - ./mastodon/public/system:/mastodon/public/system

volumes:
  html:

networks:
  external_network:
  internal_network:
    internal: true

With this file in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.

# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch

# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 web bundle exec rake mastodon:setup

# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup

# launch mastodon
docker compose up -d

# look into the logs, -f for live logs
docker compose logs -f

Mastodon Twitter Crossposter

To set up the Mastodon Twitter Poster for crossposting, add the following services to the docker-compose.yml:

crossposter-db:
  restart: always
  image: postgres:14-alpine
  container_name: "crossposter-db"
  healthcheck:
    test: pg_isready -U postgres
  environment:    
    POSTGRES_HOST_AUTH_METHOD: trust
  volumes:
    - ./crossposter/postgres:/var/lib/postgresql/data
  networks:
    - internal_network

crossposter-redis:
  restart: always
  image: redis:6.0-alpine
  container_name: "crossposter-redis"
  healthcheck:
    test: redis-cli ping
  volumes:
    - ./crossposter/redis:/data
  networks:
    - internal_network

crossposter-web:
  restart: always
  build: https://github.com/renatolond/mastodon-twitter-poster.git#main
  image: mastodon-twitter-poster
  container_name: "crossposter-web"
  env_file: crossposter.env.production
  environment:
    ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
    DB_HOST: crossposter-db
    REDIS_URL: "redis://crossposter-redis"
  networks:
    - internal_network
    - external_network
  expose:
    - "3000"
  depends_on:
    - crossposter-db

crossposter-sidekiq:
  restart: always
  build: https://github.com/renatolond/mastodon-twitter-poster.git#main
  image: mastodon-twitter-poster
  container_name: "crossposter-sidekiq"
  env_file: crossposter.env.production
  environment:
    ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
    REDIS_URL: "redis://crossposter-redis"
    DB_HOST: crossposter-db
  command: bundle exec sidekiq -c 5 -q default
  healthcheck:
    test: ps aux | grep '[s]idekiq\ 6' || false
  networks:
    # - external_network
    - internal_network
  depends_on:
    - crossposter-db
    - crossposter-redis

The crossposter requires a database setup before the containers can be launched:

docker compose run --rm crossposter-web bundle exec rake db:setup
