Use local go modules
When working with Go modules, it’s sometimes handy to test changes from a local repository instead of using the upstream one.
Go programs typically rely only on upstream packages. Take the module file of openqa-mon as an example:
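A minimal sketch of what such a module file looks like, and how a replace directive points a dependency at a local checkout (module paths here are illustrative, not the actual openqa-mon go.mod):

```
// go.mod: illustrative sketch, not the real openqa-mon module file
module github.com/example/openqa-mon

go 1.16

require github.com/example/somelib v1.2.3

// During development, use a local checkout instead of the upstream module:
replace github.com/example/somelib => ../somelib
```

With the replace directive in place, `go build` resolves the dependency from the local directory, so you can test changes before pushing them upstream.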
Elasticsearch 7.14 and Opensearch 1.0 Are Available and Work Fine With Syslog-ng
One of the most popular destinations in syslog-ng is Elasticsearch. Due to the license change of the Elastic stack, some people quickly changed to Grafana/Loki and other technologies. However, most syslog-ng users decided to wait and see. Version 1.0.0 of OpenSearch, a fork of the Elastic code base from before the license change, is now available. Elastic also published a new release last week.
For this blog, I tested the latest and greatest from both product lines and I’m sharing my experiences. For the impatient: both work perfectly well.
For details read my blog at https://www.syslog-ng.com/community/b/blog/posts/elasticsearch-7-14-and-opensearch-1-0-are-available-and-work-fine-with-syslog-ng
Turris, syslog-ng and me
Yes, it’s a syslog-ng blog from me, and it’s not on https://syslog-ng.com/ :-) The reason is simple: this is not a technical blog. This is my story about how I found the Turris Omnia Linux router and how this led to working together with the Turris guys.
The beginnings
When I ordered my Turris Omnia, I did not know that it ran syslog-ng. All I knew was that it was an ARM device and that it ran Linux. It has a reasonable amount of RAM, storage can be extended, and I can also run containers on it, including my favorite OS: openSUSE. Crowdfunding was very popular at that time; I ordered my box through Indiegogo.
Working together
Once my Turris Omnia was up and running, I realized that TurrisOS uses syslog-ng for logging. Even though it was an ancient version (3.0), I was very happy about it. I featured the Turris Omnia in some of my blogs as a log source. Finally, a device that could provide me with some interesting real-life log messages!
It turned out that the Turris team has a member whom I already knew from the openSUSE project. We met at conferences, exchanged some e-mails. Soon, syslog-ng was updated to 3.9 in TurrisOS, jumping almost a decade in code and adding many interesting features along the way.
Every now and then, I helped in version updates, suggested new features and changes to the configuration, which helped to greatly extend the feature set of syslog-ng in TurrisOS without adding too many extra dependencies. Using syslog-ng on the Turris Omnia (and most likely on all the other Turris devices), you can now parse many types of incoming log messages and send log messages to Elasticsearch and various cloud services.
Mutual benefits
To me, working with Turris is completely a hobby project. Still, there are mutual benefits both for Turris and Balabit (now One Identity Hungary). I help Turris with syslog-ng and include Turris Omnia in my blogs. It’s a fantastic log source. For example, if you click the YouTube icon in the upper right corner, you will see some timelapse videos built from Turris logs. The blogs describing how to make those heat maps and videos were quite popular for a while: https://www.syslog-ng.com/community/b/blog/posts/creating-heat-maps-using-new-syslog-ng-geoip2-parser
For Turris, it means that even more people learn about their products. For Balabit, it strengthens the syslog-ng everywhere image: syslog-ng runs everywhere from embedded devices, like the Kindle e-book reader and the Turris Omnia router, up to some of the largest HPC clusters in the world.
It was a pleasure working with the Turris team at cz.nic as it allowed me to intersect my work interests with my hobby. Getting swag from them did not hurt either. I am a t-shirt maniac, and the USB converter with four different plugs and a Turris logo is not just nice, but also quite practical :-)
In the photo below you can see me in my latest t-shirt and mask:

CzP in Turris t-shirt and mask
HP EliteBook 840 G7 with openSUSE Continued
openSUSE Tumbleweed – Review of the week 2021/31
Dear Tumbleweed users and hackers,
How often did you update your machine during the last week? If you were to follow every single snapshot, you would have had to do it seven times. That’s how many snapshots passed openQA and were pushed out to the mirrors.
What were the main changes in the snapshots 0729…0804?
- Mozilla Firefox 90.0.2
- KDE Plasma 5.22.4
- Pipewire 0.3.32
- Postfix 3.6.2
- Parted 3.4: supports F2FS
- Mesa 21.1.6
- Bash 5.1.8
- Linux kernel 5.13.6
- Systemd 248.6
- Zypper 1.14.48: experimental support for singletrans rpm commits
- Network Manager 1.32.6
- Freetype 2.11.0
During the next two weeks, Richard will be taking care of openSUSE Tumbleweed. Naturally, I tried to leave him only a small pile of things in the stagings. Currently, there are:
- Linux kernel 5.13.8
- systemd 249
- rpmlint 2.0
- openssl 3
- glibc 2.34
Most of them require a good bunch of build fixes to make them acceptable. So I can only ask you all to support Richard during the next few days and offer him as many build fixes as you possibly can.
Noodlings 31 | Reflecting
Avoid Head Spinning

In a versatile tool like Inkscape, there are always features that aren’t for you. There are some that really get in your way though, like the recently added canvas rotation.
If you’re like me and constantly keep triggering it by accident (Blender’s zoom gesture being Inkscape’s pan gesture may have something to do with it), you’ll be happy to learn it can be completely disabled. Sip on your favorite beverage and dive into the thick preferences dialog again (Edit > Preferences); this time you’re searching for Lock canvas rotation by default in the Interface section. One more thing that might throw you off: you need to restart Inkscape for the change to take effect.
If you don’t wish to go nuclear on the function, note that an accidental rotation can also be reset from the bottom right of the status bar.
Deescalating Tensions

One of the great attributes of SVG is that its text nature lends itself to be easily version controlled. Inkscape uses SVG as its native format (and extends it using its private namespace).
Unfortunately, it uses the documents themselves to store state such as canvas position and zoom. This instantly erases one of the benefits for easy version control, as every change turns into an unsolvable merge conflict.
Luckily you can at least give up the ability to store the canvas position for the greater good of not having merge conflicts, if you manage to convince your peers to change its defaults. Which is what this blog post is about :)
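For the curious, the state in question is stored as attributes on the sodipodi:namedview element inside the SVG itself; a rough illustration (attribute names are from Inkscape’s namespace, the values are made up):

```xml
<!-- Inside an Inkscape SVG: per-document view state that churns on every save -->
<sodipodi:namedview
   inkscape:zoom="1.4142"
   inkscape:cx="420.5"
   inkscape:cy="310.2"
   inkscape:window-width="1920"
   inkscape:window-height="1043" />
```

Every save rewrites these values, which is why two people editing the same file collide even when the artwork itself did not change.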
To change these defaults, you have to dive into the thick forest that is Inkscape’s preferences (Edit > Preferences). You’ll find them in the Interface > Windows section. The unfortunate default, Save and restore window geometry for each document, needs to be changed either to Don't save window geometry or Remember to use last window's geometry.
From now on, rebasing icon-development-kit won’t cause any more grey hair for you!
Update: Turns out, despite me testing before posting, only Don't save window geometry is safe. Even window geometry appears to be saved into the document.
My Google Pixel C: the end of an era
I got my Google Pixel C tablet in early 2016, well over five years ago, and I have used it almost every day since. A big part of that is the Pixel C keyboard accessory. I prefer touch typing, and funnily enough that does not work on a touch screen; it needs a real keyboard. And that keyboard died today. My Pixel C still recognizes the attached keyboard, but it does not work any more. Most likely it is a battery problem. And, as a nice coincidence, all this happened on the day Google announced its first own mobile CPU, called Tensor.
A bit of history
When it comes to technology, I like to be an early adopter. Not always with the very first experimental devices, but still ahead of almost everyone else. For example, I bought my first digital camera in 2000, five years before most people started to see digital as an alternative to film cameras and ten years before they became widely used. The first iPad came out in 2010, but I do not really like Apple products; to me the UX is a disaster. A year later there were already some Android alternatives, but those were problematic on both the software and the hardware side. The first good Android tablet was the Nexus 7 (2012) by Google. That was my first tablet. I missed having a keyboard but still used it a lot. While everyone else struggled with tiny mobile screens and poor, expensive Internet connections, I used large and detailed offline maps on my Nexus 7 while abroad. Over the years it received many updates, and each year it felt slower.
In 2015 I started to look for a new tablet. I checked the iPad again, and I still did not like it. Then came the announcement of the Pixel C, and I knew immediately what I wanted. The next spring I had the Pixel C with the keyboard accessory.
The Pixel C
Even after five years, the Pixel C specifications are quite nice. The 3GB of RAM is nothing special any more, but the 2560 x 1800 screen resolution is still higher than that of most Android tablets today. Its aspect ratio is better suited to photos, reading or working than the usual HD or FullHD screens. It has nice colors and the screen is perfectly readable even in direct sunlight, as I experienced again this afternoon. Browsing the web with multiple tabs can be slow nowadays, but my suspicion is that this is not a CPU problem but rather the lack of RAM. Rendering complex PDF files is still often faster than on my laptop.
It’s a tablet, so its main function is media consumption, and for that the Pixel C is perfect. I can read books or websites for many hours from its screen without my eyes getting tired. It’s good for occasional YouTube or Vimeo videos. And thanks to its keyboard it can be used for light work as well. Often I read the news and tweeted right away from my Pixel C, or even wrote longer e-mails on the tablet. Most of the time the keyboard was either next to me or attached to the back of the tablet, so I could quickly use it whenever needed.
The Pixel C received many Android updates. These days even lucky devices receive just two years’ worth of security updates. The Pixel C received more than two years of OS upgrades, from 6.0 to 8.1, and almost four full years of security updates.
What is next?
Without the keyboard I’m not sure how much I’ll use it in the future. Obviously, it will still be good for reading or listening to music. But I’ll need to switch to other devices a lot more often, as I prefer typing on real keyboards instead of screens.
After reading today’s Google announcement I started dreaming of a new Pixel tablet featuring the new Google Tensor CPU. With a bit of luck it will be used not just in the Pixel 6 mobiles, but also in a new Google tablet.
Setting up OwnTracks Recorder and OAuth2 with nginx, oauth2-proxy and podman
One thing I always wanted to do when going on holiday is to track where I go, the places I’ve been, and see how much I’ve travelled around. This is true in particular for places where I walk around a lot (Japan stays at the top of the list, also for other reasons not related to this post). Something like viewing a map of where you were and where you went, with optional export to KML or GPX to import into other programs like Marble.
To my knowledge there are a number of proprietary solutions to this problem, but I value this kind of data quite a lot, so they were out of the question from the start. And so began my search for something I could use.
I initially set up Traccar, but the sheer complexity of the program (not to mention that I relied on some features from a popular fork) was off-putting. It also did far more than I wanted it to, and its use case was completely different from mine anyway. After a couple of unsuccessful tries, I left it to gather virtual dust on my server until I deleted it completely.
A possible solution: OwnTracks
More recently (and more than once) I got interested in OwnTracks, which at least on paper promised exactly what I wanted. In particular, the web page mentioned
OwnTracks allows you to keep track of your own location. You can build your private location diary or share it with your family and friends. OwnTracks is open-source and uses open protocols for communication so you can be sure your data stays secure and private.
Therefore, that was worthy of investigation. Unfortunately, I found the documentation to be really subpar. The program is actually very powerful and can do many things, but the guide is essentially a collection of topics rather than “how to do X”. Thus, even simple tasks like a proper installation procedure were hard to grasp.
To give a simple explanation of how the whole thing works: OwnTracks collects location data from one’s smartphone and submits it to a recorder, hosted wherever you prefer, which can then store, process and display the information. The Recorder can either use a protocol called MQTT, widely used for IoT, or work over HTTP.
As the author of OwnTracks himself recommends HTTP when using the OwnTracks app on Android 6.0+, I didn’t bother investigating MQTT and went straight to HTTP.
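For reference, in HTTP mode the app periodically POSTs small JSON documents to the Recorder. A location message looks roughly like this (field names as in the OwnTracks JSON documentation; the values are made up):

```json
{
  "_type": "location",
  "tid": "np",
  "lat": 35.6895,
  "lon": 139.6917,
  "tst": 1628024400,
  "acc": 12,
  "batt": 85
}
```

The tid field is a short tracker ID shown in the frontend, tst is a Unix timestamp, and acc the reported accuracy in meters.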
The rest of this post is how I set up everything.
Installing the OwnTracks Recorder with Podman
As the OwnTracks application for smartphones can be acquired either from F-Droid or the relevant app stores, that was not a problem (the app is also FOSS, by the way, or I wouldn’t have considered it). Then I turned my attention to the Recorder.
The Recorder is a C application which must be compiled manually, with options enabled or disabled at compile time. To be honest, I wasn’t too keen on building it from source, so I looked at other ways to obtain it. I first searched for it in the Open Build Service but, as I didn’t find it there, I had to look for another route.
OwnTracks provides its own Docker image for the Recorder, but while I admit that containerization has its uses, I’m not too keen on having it controlled by a daemon that runs as root. And since I once had a Docker daemon update that broke many services I used (the daemon would just crash on startup), I didn’t want to add more to an already fragile system.
Enter podman. Podman is another way of running containers, without any daemon, and more importantly it can run them rootless. That means you can run individual containers (or groups of them) as a non-privileged user, which is better than having a daemon running as root. The various podman commands also mimic Docker’s, which eases adaptation.
The rest of this entry deals with how to install and set up the OwnTracks Recorder and authentication.
Preparations
For this guide I used podman version 3.2.3, installed through my distribution’s package manager (zypper on openSUSE). Your mileage may vary with other versions.
First of all, I created a new user. I used useradd, but any method is fine, for example through YaST or other tools.
useradd --system -c "OwnTracks" owntracks --create-home
This is a real login, so that you can do things interactively. I set up a strong password and configured SSH to refuse any login from this user, to make sure it stays local only.
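One minimal way to do the SSH lockout, assuming OpenSSH (the directive goes into sshd_config; reload sshd afterwards):

```
# /etc/ssh/sshd_config: refuse SSH logins for the owntracks user
DenyUsers owntracks
```

Any equivalent mechanism (Match blocks, PAM rules) works just as well; the point is that the account can only be used locally.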
Before we proceed further, we have to set up subuids and subgids for our user, or podman will not function properly. The reason is that podman (by default, at least), when running rootless, uses user namespaces to map user and group IDs inside the container to other IDs outside the container itself (as an example, root inside your container may be seen as UID 100001 on the host system). These IDs are called subuids and subgids.
To make this work you have to assign ranges of subuids and subgids to your user, ensuring they don’t overlap with anything existing on your system. At least in openSUSE Leap 15.2 the generation of these is not automatic, meaning you have to resort to usermod to do the job:
usermod --add-subuids <min-subuid>-<max-subuid> \
--add-subgids <min-subgid>-<max-subgid> <login>
Once that is done, you’ll see something like this in /etc/subuid and /etc/subgid:
owntracks:100000:65536
That means that owntracks will use a range of 65536 subuids (or subgids) starting at 100000, i.e. IDs 100000 through 165535 (man subuid and man subgid are your friends).
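Since the second field is the start of the range and the third is a count, the highest mapped ID is start + count - 1, not 65536 itself. A quick sanity check:

```shell
# /etc/subuid entries are login:start:count;
# the highest mapped ID is start + count - 1
start=100000
count=65536
echo $((start + count - 1))   # → 165535
```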
If we want to run containers at boot, we have to enable lingering in systemd for the user:
loginctl enable-linger owntracks
Getting the container images
Once that is a done deal, we can finally switch to our user and retrieve the Docker images. OwnTracks has two images that can be used:
- owntracks/recorder, which is the actual Recorder application;
- owntracks/frontend, which is a fancy HTML/JS frontend to display data gathered from the Recorder.
Getting the images is straightforward, as the podman syntax closely mimics the one of Docker:
podman pull owntracks/recorder
podman pull owntracks/frontend
Now that we have the images, we have to run them together and make sure they can see each other, because the frontend requires a valid connection to the Recorder to be able to display information. In the Docker world this is often done with docker-compose; although podman can be used with docker-compose, it also has its own way of keeping containers tightly knit together, called “pods”.
First, we create some directories to host the configuration and other container data:
mkdir -p owntracks/recorder/config owntracks/recorder/store owntracks/recorder/output owntracks/frontend/config
Then we create our pod. Note that all container:host port mappings need to be created here, or they won’t work when adding individual containers:
podman pod create \
--name owntracks \
-p 127.0.0.1:8083:8083 \
-p 127.0.0.1:6666:80
This creates the pod owntracks and maps port 8083, the one used by the Recorder, to 8083 on the host (only from localhost). Likewise, port 80 on the frontend is mapped to port 6666 (the default is different, but I had something else listening there already).
At this point, we need to create a couple of files before starting the containers. Most importantly, you need to decide on which domain to host the Recorder and the frontend. For this guide we will:
- Host both the Recorder and the frontend on the same domain (we will use tracks.example.com)
- Make the frontend accessible at the root of the domain, while the various parts of the Recorder live in the owntracks subdirectory (tracks.example.com/owntracks)
We then create owntracks/recorder/config/recorder.conf with this content:
#(@)ot-recorder.default
#
# Specify global configuration options for the OwnTracks Recorder
# and its associated utilities to override compiled-in defaults.
OTR_TOPICS = "owntracks/#"
OTR_HTTPHOST = "0.0.0.0"
OTR_PORT = 0
OTR_HTTPPORT = 8083
The important bit here is OTR_PORT. Setting it to 0 disables looking for an MQTT broker, which I was not interested in running. If it is non-zero, the Recorder will try to connect to a broker on that port and startup will fail if there is no service running there. There are plenty of other configuration options: consult this list for more information.
For the frontend, we create instead owntracks/frontend/config/config.js:
window.owntracks = window.owntracks || {};
window.owntracks.config = {
api: {
baseUrl: "https://tracks.example.com/owntracks/",
},
};
baseUrl is where your Recorder lives (see above). It can also be somewhere else entirely if need be. Like with the Recorder, there are many more configuration options to explore.
Starting the containers
It is now time to start the actual containers, and for this we use podman run. First we start the Recorder, mapping the folders we have created earlier:
podman run \
--pod owntracks \
-d \
--rm \
--name ot_recorder \
-v /path/to/owntracks/recorder/config:/config \
-v /path/to/owntracks/recorder/store:/store \
-v /path/to/owntracks/recorder/output:/output \
owntracks/recorder
As you can see, the syntax is very close to Docker’s. You can omit the --rm switch if you don’t want the container to be deleted when you stop it (this might be relevant for startup, as I’ll explain later).
Check if the Recorder is actually running (you can just use podman ps but here I show a slightly more complex approach to make output slimmer):
# The -f filter only shows containers whose name starts with "ot_"
podman ps \
-f 'name=ot_.*' \
--format '{{ .ID }}\t{{ .Image}}\t{{.Names}}\t{{.Status}}\t{{.Pod}}'
843222dacfd7 docker.io/owntracks/recorder:latest ot_recorder Up 2 weeks ago ff8bb4e9dac2
You might want to check also the logs for possible errors with podman logs ot_recorder.
Then it’s the turn of the frontend:
podman run \
--pod owntracks \
-d \
--rm \
--name ot_frontend \
--env SERVER_HOST=127.0.0.1 \
--env SERVER_PORT=8083 \
-v /path/to/owntracks/frontend/config/config.js:/usr/share/nginx/html/config/config.js \
owntracks/frontend
There are a couple of things worth noting here. The --env option sets environment variables, telling the frontend to connect to localhost on port 8083 to find the Recorder. In a podman pod, a service in one container can be reached by another container by connecting through localhost. There are other ways to use proper DNS names like docker-compose does, but they’re not essential in this specific case. Secondly, although we use --env, you can use proper environment files if you so desire (see man podman).
Check with podman logs ot_frontend whether the container has connected correctly to the Recorder, and we’re done.
Automatic startup
We can use systemd user units to start our containers at boot. Before that we need one more thing, which is important especially if you never log in directly with this user (and that’s why we created it in the first place).
For systemd to work, it needs (as far as I can see) XDG_RUNTIME_DIR set. So I put this in the user’s shell startup configuration:
export XDG_RUNTIME_DIR=/run/user/$(id -u)
then I logged out and back in. I’m not sure if this can be done without a logout/login.
Afterwards, it’s matter of creating the relevant directories and generating the systemd units:
mkdir -p ~/.config/systemd/user
cd ~/.config/systemd/user
podman generate systemd --name owntracks -f # optional: add --new to re-create containers at each restart
# Make systemd aware of the units
systemctl --user daemon-reload
This will create a bunch of files there, namely pod-owntracks.service, container-ot_recorder.service and container-ot_frontend.service. You can then enable owntracks at boot with
systemctl --user enable pod-owntracks.service
You can add --new to make podman recreate the pod and containers on each restart. This is a prerequisite if you want podman to actually update images easily, but at least in my limited testing, restarts often break when pods are involved, so I made the containers persistent. YMMV.
Point a browser on the server (even links will suffice) to localhost:8083 and localhost:6666 to verify that everything is set up correctly.
On the OwnTracks user front, everything is done, so the next steps are carried out as root (or with sudo).
Hooking up the web server
Right now both the Recorder and the frontend are accessible only via localhost. This was the plan all along, because we’ll put nginx in front of them. I assume you have already a functional web server with proper SSL set up (with Let’s Encrypt, there is no reason not to).
First of all, create a new configuration file for your site. Then you can add the following bits to it (largely copied from the OwnTracks docs):
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name tracks.example.com;
server_tokens off;
client_max_body_size 40m;
# Put SSL and other configuration bits here
# OwnTracks frontend
location / {
proxy_pass http://127.0.0.1:6666/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
}
# OwnTracks backend
# Proxy and upgrade WebSocket connection
location /owntracks/ws {
rewrite ^/owntracks/(.*) /$1 break;
proxy_pass http://127.0.0.1:8083;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /owntracks/ {
proxy_pass http://127.0.0.1:8083/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
# OwnTracks Recorder Views
location /owntracks/view/ {
proxy_buffering off; # Chrome
proxy_pass http://127.0.0.1:8083/view/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
location /owntracks/static/ {
proxy_pass http://127.0.0.1:8083/static/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
# HTTP Mode
location /owntracks/pub {
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
# Optionally force Recorder to use username from Basic
# authentication user. Whether or not client sets
# X-Limit-U and/or uses ?u= parameter, the user will
# be set to $remote_user.
proxy_set_header X-Limit-U $remote_user;
}
}
Check the configuration with nginx -t and then restart your webserver. If you access https://tracks.example.com you should see a map (OpenStreetMap) and if you access https://tracks.example.com/owntracks you should be presented with a list of locations and users. Of course everything is empty, because we haven’t added any device yet.
Before we actually start recording, we need to secure access, otherwise anyone could see where you’re going (not good). That means adding some form of authentication.
Authentication: is htpasswd the only way?
The simplest solution would be to use HTTP Basic Authentication to secure the root, /owntracks and /owntracks/pub paths. However, that’s not what I wanted: as I planned to allow a few trusted users to view the data, I didn’t want them to have to remember yet another username and password. I already had a central source of authentication (more on that below), so I wanted to use that.
On the other hand, the OwnTracks app only understands Basic Authentication and nothing else. So, what to do?
oauth2-proxy
I could have used LDAP instead, but I don’t have LDAP on my system and I didn’t want to retrofit it into the existing services. The solution was to use the OAuth 2.0 standard in conjunction with nginx’s auth_request module to authenticate through another source.
For the actual OAuth2 service, I wanted something simple so I looked at oauth2-proxy. It is used often in conjunction with Kubernetes and the nginx_ingress controller, but it can be used also with the “regular” nginx.
A warning before going further: this guide assumes you put the OAuth2 service on a dedicated domain called auth.example.com.
As oauth2-proxy is written in Go, you can just clone the git repo and build it yourself, or download a pre-built binary (I built it and installed it in /usr/local/bin). While a Docker image is offered, in my opinion there’s no need for Docker containers for a single application. You’re free to use whatever option you want, of course.
With oauth2-proxy installed, it’s time to set things up. Create a directory to host the configuration (I used /etc/oauth2-proxy) and write the following to the configuration file (oauth2-proxy.cfg; some comments are from the sample configuration, plus some of my own):
## OAuth2 Proxy Config File
## https://github.com/oauth2-proxy/oauth2-proxy
# There are plenty of other options; see the sample configuration for details
## <addr>:<port> to listen on for HTTP/HTTPS clients
http_address = "127.0.0.1:4180"
## Are we running behind a reverse proxy? Will not accept headers like X-Real-Ip unless this is set.
reverse_proxy = true
## the OAuth Redirect URL.
# defaults to the "https://" + requested host header + "/oauth2/callback"
redirect_url = "https://auth.example.com/oauth2/callback"
## oauth2-proxy can also act as a proxy for files, but we'll just use nginx
## So we make sure it doesn't proxy anything
upstreams = [
"file:///dev/null"
]
# Put ALL domains you want oauth2-proxy to redirect to after authentication
# otherwise redirection will *NOT* work
whitelist_domains = [
".example.com",
]
## pass HTTP Basic Auth, X-Forwarded-User and X-Forwarded-Email information to upstream
# These are needed for the OwnTracks application
pass_basic_auth = true
pass_user_headers = true
pass_authorization_header = true
set_basic_auth = true
## pass the request Host Header to upstream
## when disabled the upstream Host is used as the Host Header
pass_host_header = true
## Email Domains to allow authentication for (this authorizes any email on this domain)
## for more granular authorization use `authenticated_emails_file`
## To authorize any email addresses use "*"
# I use my own mail domains here: adjust this configuration to your liking
email_domains = [
"example.com"
]
## The OAuth Client ID, Secret
provider = "YOUR_PROVIDER"
client_id = "CLIENT_ID"
client_secret = "CLIENT_SECRET"
# Put provider specific options here
# Basic authentication users - for the OwnTracks apps ONLY
## Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -B" for bcrypt encryption
## enabling exposes a username/login signin form
htpasswd_file = "/etc/nginx/owntracks.htpasswd"
display_htpasswd_form = false
## Cookie Settings
## Name - the cookie name
## Secret - the seed string for secure cookies; should be 16, 24, or 32 bytes
## for use with an AES cipher when cookie_refresh or pass_access_token
## is set
## Domain - (optional) cookie domain to force cookies to (ie: .yourcompany.com)
## Expire - (duration) expire timeframe for cookie
## Refresh - (duration) refresh the cookie when duration has elapsed after cookie was initially set.
## Should be less than cookie_expire; set to 0 to disable.
## On refresh, OAuth token is re-validated.
## (ie: 1h means tokens are refreshed on request 1hr+ after it was set)
## Secure - secure cookies are only sent by the browser of a HTTPS connection (recommended)
## HttpOnly - httponly cookies are not readable by javascript (recommended)
cookie_name = "YOUR_COOKIE_NAME"
# See the oauth2-proxy docs on how to generate this
cookie_secret = "YOUR_COOKIE_SECRET"
cookie_domain = "example.com"
cookie_secure = true
cookie_httponly = true
Note there are a few options to fill in, in particular the OAuth2 provider. There are plenty of options to choose from: they range from external services (GitHub, Google) to self-hosted ones (Nextcloud, Gitea, Gitlab, Keycloak…). Refer to the oauth2-proxy documentation on how to set them up.
Personally, I wanted to use mailcow, which can also act as an OAuth2 authentication source, but it does not fully support OpenID Connect, so I settled for my Gitea instance, which in turn authenticates against mailcow. Complicated, but it does the job.
Lastly, you also need to create the static user(s) for the OwnTracks app. I have found no way to avoid this; if you know a better one, let me know.
# The -B switch (bcrypt) is *mandatory* if you want to use htpasswd with oauth2-proxy
htpasswd -c -B /etc/nginx/owntracks.htpasswd myusername
# Insert the password when prompted
# Repeat without -c for any additional users
Once you have oauth2-proxy set up, try running it:
oauth2-proxy --config /etc/oauth2-proxy/oauth2-proxy.cfg
If it starts up correctly (a quick curl http://127.0.0.1:4180/ping should return OK), quit with Ctrl+C: it's time to make it start on boot. The following assumes a reasonably recent version of systemd (I used 234).
Create this unit file (/etc/systemd/system/oauth2-proxy.service):
[Unit]
Description=oauth2-proxy daemon service
After=network.target
[Service]
# Change it to any non-privileged user you want; "nobody" may work too
# DynamicUser may also work, but I have not tested it
User=nginx
Group=nginx
ExecStart=/usr/local/bin/oauth2-proxy --config=/etc/oauth2-proxy/oauth2-proxy.cfg
ExecReload=/bin/kill -HUP $MAINPID
# As it only needs to listen and forward requests, limit its access
# With later systemd versions you can also sandbox it further
ProtectHome=true
ProtectSystem=full
PrivateTmp=true
KillMode=process
Restart=always
[Install]
WantedBy=multi-user.target
Afterwards, it’s time to enable and start the service:
systemctl daemon-reload
systemctl enable --now oauth2-proxy.service
If you want to be fancier and have it start only when required, you can always use systemd's socket activation. However, I had no need for this, so I left it always running.
Adjusting nginx configuration
Now, we need to add the relevant information to nginx.
First we create a server stanza for our authentication service:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name auth.example.com;
server_tokens off;
# Add SSL data, etc. here
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Scheme $scheme;
proxy_pass http://127.0.0.1:4180;
}
}
Then we create a file called /etc/nginx/oauth2.conf which will contain an internal (not accessible from outside) location to handle authentication requests:
location /internal-auth/ {
proxy_pass http://127.0.0.1:4180/;
internal;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Auth-Request-Redirect $scheme://$host$request_uri;
}
Lastly, we create a file called /etc/nginx/oauth2_params to specify the parameters used in the location stanzas we actually want to protect with authentication. The contents:
auth_request /internal-auth/oauth2/auth;
# the ?rd parameter ensures you are properly redirected after authentication
error_page 401 = https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri;
# pass information via X-User and X-Email headers to backend,
# requires running with --set-xauthrequest flag
auth_request_set $user $upstream_http_x_auth_request_user;
auth_request_set $email $upstream_http_x_auth_request_email;
proxy_set_header X-User $user;
proxy_set_header X-Email $email;
# If you have set cookie refresh in oauth2-proxy, uncomment the two lines below
# auth_request_set $auth_cookie $upstream_http_set_cookie;
# add_header Set-Cookie $auth_cookie;
You can use either start or sign_in in the error_page line. The former forwards you immediately to your authentication source, while the latter shows a "Sign in with XXX" button and, if enabled in the oauth2-proxy configuration, a form for htpasswd authentication. As the OwnTracks app sends Basic Authentication requests directly, I decided not to enable the form and have users go straight to the authentication source.
Finishing touches
Once that’s all and done, you just need to tell nginx to use what you just wrote. Modify the OwnTracks stanzas as follows:
include oauth2.conf;
location / {
include oauth2_params;
proxy_pass http://127.0.0.1:6666/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
# Only changed sections are shown
location /owntracks/ {
include oauth2_params;
proxy_pass http://127.0.0.1:8083/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
# And further below...
location /owntracks/pub {
include oauth2_params;
proxy_pass http://127.0.0.1:8083/pub;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
# Optionally force Recorder to use username from Basic
# authentication user. Whether or not client sets
# X-Limit-U and/or uses ?u= parameter, the user will
# be set to $remote_user.
# proxy_set_header X-Limit-U $remote_user;
}
Ensure oauth2-proxy is running, then restart nginx. Try accessing https://tracks.example.com and you should be redirected to your authentication provider’s login form. Once logged in, you should be redirected again to the OwnTracks frontend.
Adding our server to the OwnTracks app
Now that authentication is set up on the web page, we need to configure the OwnTracks app.
The following screenshots were taken on Android. The iOS app may differ. YMMV.
First of all, tap the hamburger menu and select Preferences (click on all the images for a larger version):
Hamburger menu
Then, tap on Connection:
Settings page
Once on this screen, you can set both the URL and your username/password combination. First, ensure that the mode is set to HTTP (if not, change it).
Host setup form
Then tap on Host and enter the URL as https://tracks.example.com/owntracks/pub (the /pub part is important: it is the Recorder's HTTP API endpoint).
Connection page
Then, insert the username and password combination you generated earlier with htpasswd.
To test that the connection is actually working, exit the settings and enable location services on your phone. An up-arrow icon in the OwnTracks app will light up (it uploads your location manually). Tap it, then open the hamburger menu again and select "Status" this time. If everything has gone well, you should see something like this:
OwnTracks connection status page
If you then go to the frontend, you should see a pin where your location has been successfully recorded.
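Before (or instead of) testing from the phone, you can also simulate a publish from the command line. This is a hypothetical smoke test mimicking what the app sends; the coordinates are placeholders, and the credentials and host must be replaced with your own.

```shell
# Build a minimal OwnTracks location payload: _type, latitude,
# longitude, a Unix timestamp, and a two-letter tracker ID.
BODY='{"_type":"location","lat":48.85,"lon":2.35,"tst":'"$(date +%s)"',"tid":"me"}'

# POST it to the Recorder's HTTP endpoint through nginx, using the
# Basic Authentication credentials created earlier with htpasswd.
curl -u myusername:mypassword \
  -H 'Content-Type: application/json' \
  -d "$BODY" \
  https://tracks.example.com/owntracks/pub
```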
And thus, everything is set up. We're done.
Trouble? What kind of trouble?
In case something goes wrong, there are a few places you can check:
- For oauth2-proxy, the service logs with journalctl -u oauth2-proxy;
- For OwnTracks, the container logs with podman logs ot_frontend or podman logs ot_recorder;
- For everything else, the nginx logs.
Wrap up
A nice bonus of this setup is that you can potentially add OAuth2 authentication to any application you have running if it does not provide authentication itself, as long as it runs in the same domain.
I wrote this guide because the (scarce) resources on oauth2-proxy mainly deal with Kubernetes, and many other essential bits of information are scattered throughout the net. I hope it will be useful to someone (well, at least it was useful to me, so that's a start).
Have fun!