My Google Pixel C: the end of an era

I got my Google Pixel C tablet in early 2016, well over five years ago, and I have used it almost every day since. A big part of that is the Pixel C keyboard accessory: I prefer touch typing, and funnily enough that does not work on a touch screen. It needs a real keyboard. And that keyboard died today. My Pixel C still recognizes the attached keyboard, but it no longer works.

Setting up OwnTracks Recorder and OAuth2 with nginx, oauth2-proxy and podman

One thing I have always wanted to do when going on holiday is to track where I go, the places I've been, and see how much I've travelled around. This is true in particular for places where I walk around a lot (Japan stays at the top of the list, also for other reasons not related to this post). Something like viewing a map showing where you were and where you went, with optional export to KML or GPX to import into other programs like Marble.

To my knowledge, there are a number of proprietary solutions to this problem, but I value this kind of data quite a lot, so they were out of the question from the start. And so began my search for something I could use.

I initially set up Traccar, but the sheer complexity of the program (not to mention that I relied on some features from a popular fork) was off-putting. It also did far more than what I wanted it to do, and its use case was completely different from mine anyway. After a couple of unsuccessful tries, I left it to gather virtual dust on my server until I deleted it completely.

A possible solution: OwnTracks

More recently (and more than once) I got interested in OwnTracks, which at least on paper promised exactly what I wanted. In particular, the web page mentioned:

OwnTracks allows you to keep track of your own location. You can build your private location diary or share it with your family and friends. OwnTracks is open-source and uses open protocols for communication so you can be sure your data stays secure and private.

That was therefore worthy of investigation. Unfortunately, I found the documentation to be really subpar. The program is actually very powerful and can do many things, but the guide is essentially a collection of topics rather than "how to do X" instructions. Thus, even simple tasks like a proper installation procedure were hard to grasp.

To give a simple explanation of how the whole thing works: OwnTracks collects location data from your smartphone and submits it to a Recorder, hosted wherever you prefer, which can then store, process and display the information. The Recorder can either use a protocol called MQTT, widely used for IoT, or work over HTTP.

As the author of OwnTracks himself recommends HTTP when using the OwnTracks app on Android 6.0+, I didn’t bother investigating MQTT and went straight to HTTP.

The rest of this post describes how I set everything up.

Installing the OwnTracks Recorder with Podman

As the OwnTracks application for smartphones can be obtained either from F-Droid or the relevant app stores, that was not a problem (the app is also FOSS, by the way, or I wouldn't have considered it). Then I turned my attention to the Recorder.

The Recorder is a C application which must be compiled manually, with features that can be enabled or disabled at compile time. To be honest, I wasn't too keen on building it from source, so I looked at other ways to obtain it. I first searched for it in the Open Build Service, but as I didn't find it there, I had to look elsewhere.

OwnTracks has its own Docker image for the Recorder, but while I admit that containerization has its uses, I'm not too keen on having containers controlled by a daemon that runs as root. And since I once had a Docker daemon update that broke many services I used (the daemon would just crash on startup), I didn't want to add more to an already fragile system.

Enter podman. Podman is another way of running containers, without any daemon, and more importantly it can run them rootless. That means you can run individual containers (or groups of them) as a non-privileged user, which is better than having a daemon running as root. The various podman commands are also made to mimic Docker's, which eases adaptation.

The rest of this entry deals with how to install and set up the OwnTracks Recorder and authentication.

Preparations

For this guide I used podman version 3.2.3, installed through my distribution’s package manager (zypper on openSUSE). Your mileage may vary with other versions.

First of all, I created a new user. I used useradd, but any method is fine, for example through YaST or other tools.

useradd --system --create-home -c "OwnTracks" owntracks

This is a real login, so that you can do things interactively. I set a strong password and configured SSH to refuse any login from this user, to make sure it stays local only.
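If you use OpenSSH, one way to do this (a minimal sketch; it assumes you manage /etc/ssh/sshd_config directly) is a DenyUsers entry:

# In /etc/ssh/sshd_config: refuse any SSH login for this user
DenyUsers owntracks

Reload sshd afterwards (for example with systemctl reload sshd) for the change to take effect.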

Before we proceed further, we have to set up subuids and subgids for our user, or podman will not function properly. The reason is that podman (by default, at least) when running rootless will use user namespaces to map user and group IDs inside the container to other IDs outside the container itself (as an example, UID 1 inside your container may be seen as UID 100000 on the host, while container root is mapped to your own user). These extra IDs are called subuids and subgids.

To make this work you have to assign ranges of subuids and subgids to your user to use, ensuring they don’t overlap with anything existing in your system. At least in openSUSE Leap 15.2 the generation of these is not automatic, meaning you have to resort to usermod to do the job:

usermod --add-subuids <min-subuid>-<max-subuid> \
    --add-subgids <min-subgid>-<max-subgid> <login>

Once that is done, you’ll see something like this in /etc/subuid and /etc/subgid:

owntracks:100000:65536

That means that owntracks will use subuids (or subgids) starting from 100000, with a range of 65536 IDs, i.e. up to 165535. (man subuid and man subgid are your friends.)
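For example, to assign exactly the range shown above (assuming the range 100000-165535 is not used by anything else on your system):

usermod --add-subuids 100000-165535 --add-subgids 100000-165535 owntracks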

If we want to run containers at boot, we have to enable lingering in systemd for the user:

loginctl enable-linger owntracks

Getting the container images

Once that is a done deal, we can finally switch to our user and retrieve the images. OwnTracks provides two images that can be used:

  • owntracks/recorder, which is the actual Recorder application;
  • owntracks/frontend, which is a fancy HTML/JS frontend to display data gathered from the Recorder.

Getting the images is straightforward, as the podman syntax closely mimics Docker's:

podman pull owntracks/recorder
podman pull owntracks/frontend

Now that we have the images, we have to make sure they run together and can see each other, because the frontend requires a valid connection to the Recorder to be able to display information. In the Docker world, this is often done with docker-compose; although podman can be used with docker-compose, it also has its own way of keeping containers tightly knit together, called "pods".

First, we create some directories to host the configuration and other container data:

mkdir -p owntracks/recorder/config owntracks/recorder/data \
    owntracks/recorder/store owntracks/recorder/output \
    owntracks/frontend/config

Then we create our pod. Note that all container:host port mappings need to be created here, or they won’t work when adding individual containers:

podman pod create \
    --name owntracks \
    -p 127.0.0.1:8083:8083 \
    -p 127.0.0.1:6666:80

This creates the pod owntracks and maps port 8083, the one used by the Recorder, to port 8083 on the host (reachable only from localhost). Likewise, port 80 of the frontend is mapped to host port 6666 (the default is different, but I had something else listening there already).
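A quick sanity check that the pod was created with the expected settings (podman pod inspect owntracks prints the full details, including the port mappings):

podman pod ps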

At this point, we need to create a couple of files before starting the containers. Most importantly, you need to decide on which domain to host the Recorder and the frontend. For this guide we will:

  • Host both the Recorder and the frontend on the same domain (we will use tracks.example.com)
  • The frontend will be accessible on the root of the domain, while the various parts of the Recorder will live in the owntracks subdirectory (tracks.example.com/owntracks).

We then create owntracks/recorder/config/recorder.conf with this content:

#(@)ot-recorder.default
#
# Specify global configuration options for the OwnTracks Recorder
# and its associated utilities to override compiled-in defaults.

OTR_TOPICS = "owntracks/#"
OTR_HTTPHOST = "0.0.0.0"
OTR_PORT = 0
OTR_HTTPPORT = 8083

The important bit here is OTR_PORT. Setting it to 0 disables looking for an MQTT broker, which I was not interested in running. If it is non-zero, the Recorder will try to connect to that port, and startup will fail if there is no service running there. There are plenty of other configuration options: consult this list for more information.

For the frontend, we create instead owntracks/frontend/config/config.js:

window.owntracks = window.owntracks || {};
window.owntracks.config = {
    api: {
        baseUrl: "https://tracks.example.com/owntracks/",
    },
};

baseUrl is where your Recorder lives (see above). It can also be somewhere else entirely if need be. Like with the Recorder, there are many more configuration options to explore.

Starting the containers

It is now time to start the actual containers, and for this we use podman run. First we start the Recorder, mapping the folders we have created earlier:

podman run \
    --pod owntracks \
    -d \
    --rm \
    --name ot_recorder \
    -v /path/to/owntracks/recorder/config:/config \
    -v /path/to/owntracks/recorder/store:/store \
    -v /path/to/owntracks/recorder/output:/output \
    owntracks/recorder

As you can see, the syntax is very close to what Docker does. You can omit the --rm switch if you don’t want the container to be deleted when you stop it (it might be relevant for startup, as I’ll explain later).

Check that the Recorder is actually running (you can just use podman ps, but here I show a slightly more complex approach to make the output slimmer):

# Only show containers whose name starts with "ot_"
podman ps \
    -f 'name=ot_.*' \
    --format '{{ .ID }}\t{{ .Image }}\t{{ .Names }}\t{{ .Status }}\t{{ .Pod }}'

843222dacfd7  docker.io/owntracks/recorder:latest  ot_recorder  Up 2 weeks ago  ff8bb4e9dac2

You might want to also check the logs for possible errors with podman logs ot_recorder.

Then it’s the turn of the frontend:

podman run \
    --pod owntracks \
    -d \
    --rm \
    --name ot_frontend \
    --env SERVER_HOST=127.0.0.1 \
    --env SERVER_PORT=8083 \
    -v /path/to/owntracks/frontend/config/config.js:/usr/share/nginx/html/config/config.js \
    owntracks/frontend

There are a couple of things worth noting here. The --env options set environment variables that tell the frontend to connect to localhost on port 8083 to find the Recorder. In a podman pod, a service in one container can be reached by another simply by connecting to localhost. There are ways to use proper DNS names like docker-compose does, but they're not essential in this specific case. Secondly, although we use --env here, you can use proper environment files if you so desire (see man podman-run).
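For completeness, a sketch of the same container start using an environment file instead of --env (the file path is arbitrary):

# /path/to/owntracks/frontend/frontend.env
SERVER_HOST=127.0.0.1
SERVER_PORT=8083

podman run \
    --pod owntracks \
    -d \
    --rm \
    --name ot_frontend \
    --env-file /path/to/owntracks/frontend/frontend.env \
    -v /path/to/owntracks/frontend/config/config.js:/usr/share/nginx/html/config/config.js \
    owntracks/frontend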

Check with podman logs ot_frontend whether the container has connected correctly to the Recorder, and we’re done.
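Optionally, you can poke both services directly from the host. This is just a sanity check and assumes curl is installed; the Recorder endpoint shown here is part of its HTTP API and should return an empty JSON list at this point:

curl -I http://127.0.0.1:6666/        # frontend; expect an HTTP 200
curl http://127.0.0.1:8083/api/0/list # Recorder API; lists known users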

Automatic startup

We can use systemd user units to start our containers at boot. Before that, we need to do one more thing, which is important especially if you never log in directly with this user (and that's why we created it in the first place).

In order for systemd to work, it needs (as far as I can see) XDG_RUNTIME_DIR set. So I put this in the user's shell startup configuration:

export XDG_RUNTIME_DIR=/run/user/$(id -u)

then I logged out and back in. I’m not sure if this can be done without a logout/login.
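A quick way to check that the user manager is now reachable (errors here usually mean XDG_RUNTIME_DIR is still unset):

systemctl --user status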

Afterwards, it's a matter of creating the relevant directories and generating the systemd units:

mkdir -p ~/.config/systemd/user
cd ~/.config/systemd/user
podman generate systemd --name owntracks -f # optional: add --new to re-create containers at each restart
# Make systemd aware of the units
systemctl --user daemon-reload

This will create a bunch of files there, namely pod-owntracks.service, container-ot_recorder.service and container-ot_frontend.service. You can then enable owntracks at boot with

systemctl --user enable pod-owntracks.service

You can add --new to make podman recreate the pod and containers on each restart. This is a prerequisite if you want podman to actually update images easily but, at least in my limited testing, restarts often break when pods are involved, so I made the containers persistent. YMMV.
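To start the pod right away instead of waiting for the next boot, and to check how the units are doing:

systemctl --user start pod-owntracks.service
systemctl --user status pod-owntracks.service container-ot_recorder.service container-ot_frontend.service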

Point a browser on the server (even links will suffice) to localhost:8083 and localhost:6666 if you want to verify everything was done correctly.

On the OwnTracks user front, everything is done, so the next steps are carried out as root (or with sudo).

Hooking up the web server

Right now both the Recorder and the frontend are accessible only via localhost. This was the plan all along, because we’ll put nginx in front of them. I assume you have already a functional web server with proper SSL set up (with Let’s Encrypt, there is no reason not to).

First of all, create a new configuration file for your site. Then you can add the following bits to it (largely copied from the OwnTracks docs):

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name tracks.example.com;
    server_tokens off;

    client_max_body_size 40m;

    # Put SSL and other configuration bits here

    # OwnTracks frontend
    location / {
        proxy_pass              http://127.0.0.1:6666/;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;
    }

    # OwnTracks backend

    # Proxy and upgrade WebSocket connection
    location /owntracks/ws {
        rewrite ^/owntracks/(.*)    /$1 break;
        proxy_pass      http://127.0.0.1:8083;
        proxy_http_version  1.1;
        proxy_set_header    Upgrade $http_upgrade;
        proxy_set_header    Connection "upgrade";
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /owntracks/ {
        proxy_pass      http://127.0.0.1:8083/;
        proxy_http_version  1.1;
        proxy_set_header    Host $host;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Real-IP $remote_addr;
    }

    # OwnTracks Recorder Views
    location /owntracks/view/ {
         proxy_buffering         off;            # Chrome
         proxy_pass              http://127.0.0.1:8083/view/;
         proxy_http_version      1.1;
         proxy_set_header        Host $host;
         proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header        X-Real-IP $remote_addr;
    }
    location /owntracks/static/ {
         proxy_pass              http://127.0.0.1:8083/static/;
         proxy_http_version      1.1;
         proxy_set_header        Host $host;
         proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header        X-Real-IP $remote_addr;
    }

    # HTTP Mode
    location /owntracks/pub {
        proxy_pass              http://127.0.0.1:8083/pub;
        proxy_http_version      1.1;
        proxy_set_header        Host $host;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Real-IP $remote_addr;

        # Optionally force Recorder to use username from Basic
        # authentication user. Whether or not client sets
        # X-Limit-U and/or uses ?u= parameter, the user will
        # be set to $remote_user.
        proxy_set_header        X-Limit-U $remote_user;
    }

}

Check the configuration with nginx -t and then restart your webserver. If you access https://tracks.example.com you should see a map (OpenStreetMap) and if you access https://tracks.example.com/owntracks you should be presented with a list of locations and users. Of course everything is empty, because we haven’t added any device yet.

Before we actually start recording, we need to secure access, otherwise anyone could see where you’re going (not good). That means adding some form of authentication.

Authentication: is htpasswd the only way?

The simplest solution would be to use HTTP Basic Authentication to secure the root, /owntracks and /owntracks/pub paths. However, that's not what I wanted: as I planned to allow a few trusted users to view the data, I didn't want to make them remember yet another username and password. I already had a central source of authentication (more on that below), so I wanted to use that.

On the other hand, the OwnTracks app only understands Basic Authentication and nothing else. So, what to do?

oauth2-proxy

I could have used LDAP, but I don't have LDAP on my system and I didn't want to retrofit it into the existing services. The solution was to use the OAuth 2.0 standard in conjunction with nginx's auth_request module to allow authentication through another source.

For the actual OAuth2 service, I wanted something simple, so I looked at oauth2-proxy. It is often used in conjunction with Kubernetes and the nginx ingress controller, but it can also be used with "regular" nginx.

A warning before going further: this guide assumes you put the OAuth2 service on a dedicated subdomain called auth.example.com.

As oauth2-proxy is written in Go, you can just clone the git repository and build it yourself, or download a pre-built binary (I built it and installed it in /usr/local/bin). While a Docker image is offered, in my opinion there's no need for a container for a single application. You're free to use whatever option you want, of course.
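For reference, building from source looks roughly like this (a sketch assuming a Go toolchain is installed; the build target comes from the project's Makefile, so check the repository's documentation if it has changed):

git clone https://github.com/oauth2-proxy/oauth2-proxy.git
cd oauth2-proxy
make build
install -m 0755 oauth2-proxy /usr/local/bin/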

With oauth2-proxy installed, it’s time to set things up. Create a path to host the configuration (I used /etc/oauth2-proxy) and write the following in the configuration file (oauth2-proxy.cfg; some comments are from the sample configuration plus some of my own):


## OAuth2 Proxy Config File
## https://github.com/oauth2-proxy/oauth2-proxy

# There are plenty of other options; see the sample configuration for details

## <addr>:<port> to listen on for HTTP/HTTPS clients
http_address = "127.0.0.1:4180"

## Are we running behind a reverse proxy? Will not accept headers like X-Real-Ip unless this is set.
reverse_proxy = true

## the OAuth Redirect URL.
# defaults to the "https://" + requested host header + "/oauth2/callback"
redirect_url = "https://auth.example.com/oauth2/callback"

## oauth2-proxy can also act as a proxy for files, but we'll just use nginx
## So we make sure it doesn't proxy anything
upstreams = [
     "file:///dev/null"
]

# Put ALL domains you want oauth2-proxy to redirect to after authentication
# otherwise redirection will *NOT* work
whitelist_domains = [
    ".example.com",
]

## pass HTTP Basic Auth, X-Forwarded-User and X-Forwarded-Email information to upstream
# These are needed for the OwnTracks application
pass_basic_auth = true
pass_user_headers = true
pass_authorization_header = true
set_basic_auth = true
## pass the request Host Header to upstream
## when disabled the upstream Host is used as the Host Header
pass_host_header = true

## Email Domains to allow authentication for (this authorizes any email on this domain)
## for more granular authorization use `authenticated_emails_file`
## To authorize any email addresses use "*"

# I use my own mail domains here: adjust this configuration to your liking

email_domains = [
     "example.com"
]

## The OAuth Client ID, Secret
provider = "YOUR_PROVIDER"
client_id = "CLIENT_ID"
client_secret = "CLIENT_SECRET"

# Put provider specific options here

# Basic authentication users - for the OwnTracks apps ONLY

## Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption
## enabling exposes a username/login signin form
htpasswd_file = "/etc/nginx/owntracks.htpasswd"
display_htpasswd_form = false

## Cookie Settings
## Name     - the cookie name
## Secret   - the seed string for secure cookies; should be 16, 24, or 32 bytes
##            for use with an AES cipher when cookie_refresh or pass_access_token
##            is set
## Domain   - (optional) cookie domain to force cookies to (ie: .yourcompany.com)
## Expire   - (duration) expire timeframe for cookie
## Refresh  - (duration) refresh the cookie when duration has elapsed after cookie was initially set.
##            Should be less than cookie_expire; set to 0 to disable.
##            On refresh, OAuth token is re-validated.
##            (ie: 1h means tokens are refreshed on request 1hr+ after it was set)
## Secure   - secure cookies are only sent by the browser of a HTTPS connection (recommended)
## HttpOnly - httponly cookies are not readable by javascript (recommended)
cookie_name = "YOUR_COOKIE_NAME"
# See the oauth2-proxy docs on how to generate this
cookie_secret = "YOUR_COOKIE_SECRET"
cookie_domain = "example.com"
cookie_secure = true
cookie_httponly = true

Note there are a few options to fill in, in particular the OAuth2 provider. There are plenty of options to choose from: they range from external services (GitHub, Google) to self-hosted ones (Nextcloud, Gitea, Gitlab, Keycloak…). Refer to the oauth2-proxy documentation on how to set them up.

Personally, I wanted to use mailcow, which can also act as an OAuth2 authentication source, but it does not fully support OpenID Connect, so I settled for my Gitea instance, which in turn authenticates against mailcow. Complicated, but it does the job.
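As an illustration only, a generic OpenID Connect provider section might look like the sketch below. The issuer URL and credentials are placeholders, not values from my setup; refer to your provider's documentation for the real ones:

provider = "oidc"
# Placeholder issuer; your IdP's discovery document must live under this URL
oidc_issuer_url = "https://your-idp.example.com/realms/myrealm"
client_id = "owntracks"
client_secret = "REPLACE_ME"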

Lastly, you also need to create the static user(s) for the OwnTracks app; I have found no way to avoid this. If you know a better way, let me know.

# The -B switch is *mandatory* if you want to use htpasswd with oauth2-proxy
htpasswd -c -B /etc/nginx/owntracks.htpasswd myusername
# Insert the password when prompted
# For additional users, omit -c, or you will overwrite the file:
htpasswd -B /etc/nginx/owntracks.htpasswd anotheruser

Once you have oauth2-proxy set up, try running it:

oauth2-proxy --config /etc/oauth2-proxy/oauth2-proxy.cfg

Quit with Ctrl-C. If it starts up correctly, it’s time to make it start on boot. This assumes you are using a recent enough version of systemd (I used 234).

Create this unit file (/etc/systemd/system/oauth2-proxy.service):

[Unit]
Description=oauth2-proxy daemon service
After=syslog.target network.target

[Service]
# Change it to any non-privileged user you want; "nobody" may work too
# DynamicUser may also work, but I have not tested it
User=nginx
Group=nginx

ExecStart=/usr/local/bin/oauth2-proxy --config=/etc/oauth2-proxy/oauth2-proxy.cfg
ExecReload=/bin/kill -HUP $MAINPID

# As it only needs to listen and forward requests, limit its access
# With later systemd versions you can also sandbox it further
ProtectHome=true
ProtectSystem=full
PrivateTmp=true

KillMode=process
Restart=always

[Install]
WantedBy=multi-user.target

Afterwards, it’s time to enable and start the service:

systemctl daemon-reload
systemctl enable --now oauth2-proxy.service

If you want to be fancier and want it to start only when required, you can always use systemd’s socket activation. However, I had no need for this so I left it always running.

Adjusting nginx configuration

Now, we need to add the relevant information to nginx.

First we create a server stanza for our authentication service:

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  server_name auth.example.com;
  server_tokens off;

  # Add SSL data, etc. here

  location / {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header Host       $host;
        proxy_set_header X-Scheme   $scheme;
        proxy_pass       http://127.0.0.1:4180;

    }

}

Then we create a file called /etc/nginx/oauth2.conf which will contain an internal (not accessible from outside) location to handle authentication requests:

  location /internal-auth/ {
    proxy_pass       http://127.0.0.1:4180/;
    internal;
    proxy_set_header Host                    $host;
    proxy_set_header X-Real-IP               $remote_addr;
    proxy_set_header X-Scheme                $scheme;
    proxy_set_header X-Auth-Request-Redirect $scheme://$host$request_uri;
  }

Lastly, we create a file called /etc/nginx/oauth2_params to specify the parameters used in the location stanzas we actually want to protect with authentication. The contents:

auth_request /internal-auth/oauth2/auth;
# the ?rd parameter ensures you are properly redirected after authentication
error_page 401 = https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri;
# pass information via X-User and X-Email headers to backend,
# requires running with --set-xauthrequest flag
auth_request_set $user   $upstream_http_x_auth_request_user;
auth_request_set $email  $upstream_http_x_auth_request_email;
proxy_set_header X-User  $user;
proxy_set_header X-Email $email;
# If you have set cookie refresh in oauth2-proxy, uncomment the two lines below
# auth_request_set $auth_cookie $upstream_http_set_cookie;
# add_header Set-Cookie $auth_cookie;

You can use either start or sign_in for the error_page line. The former forwards you immediately to your authentication source, while the latter offers a "Sign in with XXX" button and a form for htpasswd authentication, if enabled in the oauth2-proxy configuration. As the OwnTracks app will send Basic Authentication requests directly, I decided not to enable the form and have users go straight to authentication.

Finishing touches

Once that's all said and done, you just need to tell nginx to use what you just wrote. Modify the OwnTracks stanzas as follows:

include oauth2.conf;

location / {
    include oauth2_params;
    proxy_pass              http://127.0.0.1:6666/;
    proxy_http_version      1.1;
    proxy_set_header        Host $host;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Real-IP $remote_addr;
}

# Only changed sections are shown

location /owntracks/ {
    include oauth2_params;
    proxy_pass      http://127.0.0.1:8083/;
    proxy_http_version  1.1;
    proxy_set_header    Host $host;
    proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header    X-Real-IP $remote_addr;
}

# And further below...

location /owntracks/pub {
    include oauth2_params;
    proxy_pass              http://127.0.0.1:8083/pub;
    proxy_http_version      1.1;
    proxy_set_header        Host $host;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Real-IP $remote_addr;

    # Optionally force Recorder to use username from Basic
    # authentication user. Whether or not client sets
    # X-Limit-U and/or uses ?u= parameter, the user will
    # be set to $remote_user.
    # proxy_set_header        X-Limit-U $remote_user;
}

Ensure oauth2-proxy is running, then restart nginx. Try accessing https://tracks.example.com and you should be redirected to your authentication provider’s login form. Once logged in, you should be redirected again to the OwnTracks frontend.

Adding our server to the OwnTracks app

Now that authentication is set up on the web page, we need to configure the OwnTracks app.

The following screenshots were made on Android. The iOS app may be different. YMMV.

First of all, tap the hamburger menu and select Preferences (click on all the images for a larger version):

Hamburger menu

Then, tap on Connection:

Settings page

Once on this screen, you can set both the URL and your username/password combination. First, ensure that mode is set to HTTP (if not, change it).

Host setup form

Then tap on Host, and insert the URL as https://tracks.example.com/owntracks/pub (the pub part is important, as it's the HTTP API endpoint).

Connection page

Then, insert the username and password combination you generated earlier with htpasswd.

To test that the connection is actually working, exit the settings and enable location services on your phone. An up-arrow icon in the OwnTracks app will light up (it manually uploads the location). Tap it, then tap the hamburger menu again, selecting "Status" this time. If everything has gone well, you should see something like this:

OwnTracks connection status page

If you then go to the frontend, you should see a pin where your location has been successfully recorded.

And thus, everything is set up. We're done.

Trouble? What kind of trouble?

In case something goes wrong, there are a few places you can check:

  • For oauth2-proxy, its logs with journalctl -u oauth2-proxy;
  • For OwnTracks, the podman logs with podman logs ot_frontend or podman logs ot_recorder;
  • For all the rest, the nginx logs.

Wrap up

A nice bonus of this setup is that you can potentially add OAuth2 authentication to any application you have running that does not provide authentication itself, as long as it runs on the same domain.

I wrote this guide because the (not large) amount of resources on oauth2-proxy mainly deals with Kubernetes, and many other essential bits of information are scattered throughout the net. I hope it will be useful to someone (well, at least it was useful to me, so that's a start).

Have fun!


Recently and soon in openSUSE #1

Community meeting: Tell us everything!

Today (Saturday 31st of July, 17:00 UTC) is the third installment of the recently rejuvenated Community meetings! Taking place on Jitsi Meet, it will be an excellent opportunity to discuss and coordinate on solutions for improving things in the Project.

One important topic will be openSUSE Membership, soon to be affected by the shutdown of connect-o-o.

Python developer wanted for the “defrag” community API

In the last weeks, we've been working on "defrag" [1], a user-friendly REST API allowing users across all platforms (e.g. Matrix, Telegram, Discord, forums, web) to perform searches for all kinds of things, e.g.:

  • zypper search
  • documentation search
  • bugzilla search
  • wiki search
  • progress/pagure search
  • news
  • events and other activities

… and possibly more. The current state of the project can be seen at GitHub: https://github.com/KaratekHD/defrag. Don’t worry, it will get moved to code.opensuse.org in the future.

Right now, two people are working on defrag, KaratekHD and Nycticorax. A third person has suspended their participation for personal reasons.

However, working on a project like this is pretty hard with only two people. So, we are looking for at least a third person to help us build our API. If you're a Python developer and would like to help out with defrag, please get in touch with us. We'd love to hear from you!


Documentation: Share your top “must-do” after installing an openSUSE distribution

The new documentation platform – slightly more focused on Tumbleweed – is closing in on the beta release date and the team would be interested to know if the Community would like to add (or remove!) items to our Post-installation Best Of.

If you have any tip that’s not covered already, your suggestions will be wholeheartedly welcome! And if you have some time on your hands, let us know about your best of / most helpful wiki pages! We will be happy to migrate them to the new platform.


Bonus: Interview of Dominique Leuenberger

Dominique Leuenberger, release manager of Tumbleweed, has kindly agreed to an interview to be held in the upcoming weeks. Even though the interview will not be held live for reasons of simplicity, questions from the Community are very welcome! Join us on one of the two channels below and let us know if you want to hear Dominique on something we didn’t think of!

contact:

  • Matrix: #newscom:opensuse.org
  • Telegram

[1] Because we try and fight fragmentation in openSUSE.

openSUSE Tumbleweed – Review of the week 2021/30

Dear Tumbleweed users and hackers,

Solid and predictable – that’s what openSUSE Tumbleweed tries to offer to the users. This also shows in the number of snapshots we release. 5 – 7 snapshots a week is absolutely normal – and was also achieved this week, in which we have published 6 snapshots (0723, 0724, 0725, 0726, 0727, and 0728).

The main changes in these snapshots included:

  • Mozilla Firefox 90.0.1
  • NetworkManager 1.32.4
  • cURL 7.78.0
  • VirtualBox 6.1.24
  • Meson 0.58.2
  • Linux kernel 5.13.4
  • Node.JS 16.5.0
  • GCC 11.2 RC1
  • LibreOffice 7.1.5
  • Poppler 21.07.0

Stagings are getting fuller again – with a few things causing different breakages being collected temporarily in Staging:F. The main changes being worked on are:

  • Mozilla Firefox 90.0.2
  • KDE Plasma 5.22.4
  • Mesa 21.1.6
  • systemd 248.6
  • Linux kernel 5.13.6
  • Inkscape 1.1 (needs openQA test adjustments)
  • rpmlint 2.0

Node.js, curl update in Tumbleweed

Six openSUSE Tumbleweed snapshots were released this week.

Among the updated packages that landed this week in the rolling release were curl, GNU Compiler Collection, Node.js, redis and LibreOffice.

The office suite package LibreOffice came in snapshot 20210728. The update to version 7.1.5.2 provided bugfixes addressing some regressions, and a few fixes were made to prevent crashes in Writer. Linux kernel firmware was updated in the snapshot, and PDF rendering library poppler 21.07.0 provided some minor code improvements for build systems while also fixing a memory leak on broken files. The 2.32.3 version of webkit2gtk3 fixed several crashes and rendering issues and addressed a dozen Common Vulnerabilities and Exposures.

The 20210727 snapshot provided just a single package update to gcc11. The update of the head branch included the 11.2 release candidate and a corrected adjustment to the General Public License version 3.0. The package update also provided a libc-bootstrap cross compiler for AArch64 and RISC-V.

Snapshot 20210726 provided four package updates. Updated packages include gnome-sudoku 40.2 that fixed complex text for printing sudokus, The Linux networking package iputils 20210722 added a build requirement and fixed a broken start of services function. The two openSUSE packages updated in the snapshot were to polkit-default-privs and module manager yast2-nfs-server 4.4.1, which had a fix to properly determine a client name.

Node.js upgraded some dependencies in version 16.5.0 and has an experimental implementation of the Web Streams API in snapshot 20210725. The 6.2.5 version of redis, which supports different kinds of abstract data structures, fixed a CVE integer overflow. A few YaST packages were updated in the snapshot like yast2-control-center 4.4.1 and yast2-iscsi-client 4.4.2. The 0.17.3 version of createrepo_c dropped Python2 support and removed some distutils, which were deprecated in Python3. An update to the newest python-setuptools 57.4 was made in the snapshot; the jump from the 57.0 version revamped the backward and cross-tool compatibility section to remove confusion and the package now relies on a native SSL implementation.

Just two packages were updated in snapshot 20210724. The 5.13.4 version of the Linux kernel brought the patch for the Sequoia CVE-2021-33909. The kernel also fixed some Ethernet plugin detection problems for Arm as well as a duplication of a USB4 target module node. kvm_stat, from the same version, added a restart patch to enable a kvm service reboot, as systemd's initial attempt to start the kvm unit file may fail; this appears to be done in case the kvm module is not loaded.

The snapshot that started off the week, 20210723, brought some fixes to Mozilla Firefox 90. The 90.0.1 version fixed a rare crash on shutdown and a looping process of some HTTP3 responses. Daniel Stenberg provided an update with the curl 7.78.0 security fixes; curl is a popular library and command-line tool that transfers data using various network protocols. The curl team addressed a few CVEs, including CVE-2021-22924, which involved bad connection reuse based on the config matching function. GTK3 3.24.30 had some accessibility improvements and fixed a memory leak. The updated NetworkManager 1.32.4 changed some IPv4 configuration and fixed an nftables backend. The compiler plugin that allows clang to understand Qt semantics, called clazy, updated to version 1.10 and fixed a crash when precompiled headers are enabled. Other packages updated in the snapshot were virtualbox 6.1.24, ncurses, yast2-network 4.4.21 and webkit2gtk3 2.32.2.

Syslog-ng 3.33: the MQTT destination

Version 3.33 of syslog-ng introduced an MQTT destination. It uses the paho-c client library to send log messages to an MQTT broker. The current implementation supports versions 3.1 and 3.1.1 of the protocol over non-encrypted connections, but this is only a first step. From this blog, you can learn how to configure and test the mqtt() destination in syslog-ng. Read my blog at https://www.syslog-ng.com/community/b/blog/posts/syslog-ng-3-33-the-mqtt-destination

Running openSUSE in a FreeBSD jail using Bastille

Why? Last week, when the latest version of Bastille, a jail (container) management system for FreeBSD, was released, it also included experimental Linux support. Its author needed Ubuntu, so that was implemented. I prefer openSUSE, so with some ugly hacks I could get openSUSE up and running in Bastille. I was asked to document it in a blog post. This topic does not fit the sudo or syslog-ng blogs, where I regularly contribute.

Syslog-ng 3.33: the MQTT destination

Version 3.33 of syslog-ng introduced an MQTT destination. It uses the paho-c client library to send log messages to an MQTT broker. The current implementation supports versions 3.1 and 3.1.1 of the protocol over non-encrypted connections, but this is only a first step.

From this blog, you can learn how to configure and test the mqtt() destination in syslog-ng.

Before you begin

To use the MQTT destination of syslog-ng, you need at least syslog-ng version 3.33. It is not yet included in most Linux distributions, but luckily it is available in some 3rd-party syslog-ng repositories. You can find a list of them at https://www.syslog-ng.com/products/open-source-log-management/3rd-party-binaries.aspx. MQTT support is usually included in a sub-package of syslog-ng, often called syslog-ng-mqtt. You also need an MQTT broker to which you can forward log messages using the mqtt() destination.

For my tests, I used Fedora 34 with the Mosquitto MQTT broker and syslog-ng installed from the unofficial Copr repository.

Configuring syslog-ng

If your distribution of choice supports it, create a new configuration file under the /etc/syslog-ng/conf.d/ directory with a .conf extension. Otherwise, append the configuration below to syslog-ng.conf.

destination d_mqtt {
  mqtt (
    address("tcp://localhost:1883"),
    topic("test/$HOST"),
    fallback-topic("syslog/fallback")
    template("$MESSAGE")
    qos(1)
  );
};

log {
    source(s_sys);
    destination(d_mqtt);
};

The MQTT destination has three mandatory options:

  • address() defines where to send log messages: it includes the hostname (or IP address) and the port number of the MQTT broker. These values work if the broker is running on localhost at the default port.

  • topic() defines in which topic syslog-ng stores the log message. You can also use templates here, and use, for example, the $HOST macro in the topic name hierarchy.

  • fallback-topic() is used when syslog-ng cannot post a message to the originally defined topic (which can include invalid characters coming from templates).

Optional parameters of the MQTT destination include:

  • template(), where you can configure the message template sent to the MQTT broker. By default, the template is: “$ISODATE $HOST $MSGHDR$MSG”

  • qos stands for quality of service and can take three values in the MQTT world. Its default value is 0, where there is no guarantee that the message is ever delivered. Setting it to 1 makes sure that the message is delivered at least once, while 2 ensures that a message is delivered exactly once. Obviously, 0 has the best performance, while 2 can be much slower.

The log statement connects the local log sources with the MQTT destination. Note that the name of the source might be different, depending on your syslog-ng.conf. Configurations on Fedora and RHEL call the local log source s_sys by default.
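If your syslog-ng.conf does not already define such a source, a minimal local source (the name s_sys is just the convention used above) could look like this:

source s_sys {
    system();    # local system logs
    internal();  # messages generated by syslog-ng itself
};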

Testing

As mentioned in the introduction, I used the Mosquitto MQTT broker for testing. There are dozens of others listed at https://mqtt.org/software/; verifying incoming messages is different for each piece of software.

Once you have reloaded syslog-ng to ensure that the configuration takes effect, you are ready for testing. We use two utilities:

  • logger sends syslog messages to the local server

  • mosquitto_sub is part of the Mosquitto MQTT broker software and can subscribe to messages arriving on a broker

First start mosquitto_sub in a terminal:

mosquitto_sub -h localhost -p 1883 -t 'test/+'

This will listen to incoming messages on the broker on all subtopics under the “test” topic. “+” is a wildcard here.

Next, you can send some log messages using the logger command:

logger this is a test

And you should see the line appear in the output of mosquitto_sub:

2021-07-26T16:39:55+02:00 fedora34.localdomain root[3077]: this is a test

What is next?

This is just the first step in adding MQTT support to syslog-ng. Your feedback about this new feature is very welcome. Detailed problem reports help make the next version more robust, while positive feedback lets us know that the feature is in use and worth developing further.

-

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.


Digest of YaST Development Sprints 127 & 128

It’s summer in Europe and that means vacations for most members of the YaST Team at SUSE. Although that may imply less frequent blogging, we have some news to share with you today, like:

  • Taking over the development of the (open)SUSE Release Tools
  • Improvements in the new check-profile command
  • Finished migration from Travis CI to GitHub Actions
  • Several interesting bug fixes

Let's go into some details.

Release Tools: YaST Team to the Rescue!

As you all know, developing and maintaining complex software distributions like openSUSE Leap, Tumbleweed or SUSE Linux Enterprise is not an easy task. Especially since we want to ensure all of them stay independent but at the same time closely related, and since they keep evolving in new directions like Kubic, MicroOS and SLE Micro.

Our beloved Open Build Service is the key component that makes all that possible. But some extra tools are needed in addition to OBS in order to manage the complexity of the (open)SUSE distributions. Those extra tools are hosted and developed in a GitHub repository simply called openSUSE-release-tools. For years, the development process of those tools has been highly unstructured (not to say “slightly chaotic”), with more than 60 contributors but no clear mid-term strategy. Although that is not necessarily bad, some sustained and directed development is needed to solve some of the challenges we have ahead of us and to fix some pitfalls in the current development process of the openSUSE and SUSE products and distributions.

The YaST Team was chosen for such a task, so we will steadily take over development and maintenance of the tools in that repository. As a first step, we improved the documentation a lot. That includes extending the README file and adding new documents like an inventory of tools and a summary of the processes in which those tools are involved. We also extended and updated the automated tests and implemented an easy new check in the factory-auto bot.

We have way more ambitious plans for the future, but we are still learning and discovering new stuff in that repository every day.

Improvements in the AutoYaST Profile Validation

As you may know, we recently introduced a YaST client to validate complex profiles that include Embedded Ruby, rules and classes and/or scripts. Generally, such a validation could be done without root permissions, but there are some situations where superuser privileges are required.

To mitigate the implications, we introduced several improvements in the check-profile tool. You can see the details in the description of the corresponding pull request.

From Travis CI to GitHub Actions - Migration Completed

Some months ago, we started switching the continuous integration on all YaST repositories from Travis CI to GitHub Actions. The main reason was that GitHub Actions is directly integrated into GitHub, so it is easier to use - no need for an extra account, fewer problems with authentication or permissions…

The transition is finished now. It was easy because both services are quite similar, although support for Docker is more straightforward in GitHub Actions. In this service, the actions are defined in YAML files in the .github/workflows subdirectory. We created several templates for the YaST packages.

If you want to know more, read the GitHub Actions documentation.

Interesting bug fixes

Although we spend a significant part of our sprints fixing bugs, we usually don't blog about that part of the job because we understand it is not the most exciting one. But this time we would like to highlight some pull requests you may find interesting for several reasons, including better handling of failures while analyzing the system, of variables in repository URLs, and of SSH authorized keys.

We keep working

As mentioned before, the YaST Team is not at full speed due to the vacation season. But we hope to keep delivering interesting stuff in many fronts and we will try to keep you all updated. Meanwhile, do as we do and have a lot of fun!


Deactivating connect.opensuse.org

Our community portal, reachable via https://connect.opensuse.org, has accompanied our community since 2010. A long, long time. Especially if you compare it with Facebook (which opened to the general public in 2006) or LinkedIn (which became an international company in 2010).

While Facebook and LinkedIn have meanwhile become multi-billion-dollar businesses, our community portal is nowadays mainly used to organize the openSUSE members and to be a "contact point" for members, who provide their profiles to help others contact them.

Over 20,000 actively registered users and 100 groups might give an idea of the diversity and agility of the openSUSE community. From artists and musicians to local user groups and groups for all the different window managers and their lovers: everyone found a place here in the openSUSE universe.

But time flies by quickly, and the technology sector in particular never stands still. Today, we need to announce the final shutdown of our community portal. The reason is simple: while we asked multiple times for help and for someone to actively maintain and administrate the service, nobody stepped up. As we cannot secure the application any longer without big time investments, we decided to shut it down and let it rest in peace instead.

We apologize for the trouble that this decision might cause to some people. We tried our best to support you over all the years by running the service as long as possible. But if nobody steps up to take over the work, it's better to say 'goodbye' than to wait until the hackers of the world share the data of our users.

An era comes to an end. A nice one, indeed. But it also means that there might be a new era on the horizon. Some new tools that have a maintainer and someone who takes care of them.

Look at our Forums, our Wikis, Mailing Lists or the new Matrix service. We will not stop supporting you; we will do our best to keep you and your data secure.

The plan is for membership management and applications to take place using Pagure.

Remember to have a lot of fun!

Lars - on behalf of the openSUSE heroes -