Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

22 January, 2020


The information below might fall into the "unsung heroes of openSUSE" category - we think it clearly deserves a mention and some applause (not to say that every user should owe the author a beer at the next conference ;-).

  • Are you searching for a nice font for your next document?
  • Do you want to install such a font directly via 1-click-install once you have had a closer look?
  • Do you want to know more about the rendering, language information, or character set of a font before you install it?

Just have a look at https://fontinfo.opensuse.org/, which provides all this information for you, and more. Special thanks to Petr Gajdos, who has maintained the page and the package of the same name for years.


For many years – especially after syslog-ng changed to a rolling release model – users I talked to asked for up-to-date RPM packages. They also asked for a separate repository for each new release to avoid surprises (a new release might accidentally or even intentionally break old features) and to be able to use a given release if they want to (“if it works, do not fix it”). That is how my unofficial RPM repositories were born.

Recently some long-time syslog-ng users and members of the Splunk community started to ask for a repository that always has the latest syslog-ng version available. Most users still prefer to use separate repositories. That is how I came up with the idea for the syslog-ng-stable repository: I push a new release to this new rolling repo only after a delay of at least a week. This is enough to spot most major problems. Once the delay is over and everything seems to be OK, I push the latest release to the syslog-ng-stable repo. If there is a bigger problem, I can skip the release in the stable repo or wait for a fix.

Which package to install?

You can use many log sources and destinations in syslog-ng. The majority of these require additional dependencies to be installed. If all features were included in a single package, installing syslog-ng would also install dozens of smaller and larger dependencies, including such behemoths as Java. This is why the syslog-ng package includes only the core functionality, and features requiring additional dependencies are available as sub-packages. The most popular sub-package is syslog-ng-http (or syslog-ng-curl on openSUSE), which installs the HTTP destination driver used to send messages to Elasticsearch, Splunk or Slack, but there are many others as well. Depending on your distribution, “zypper search syslog-ng” or a similar command will list all the possibilities.

Installing syslog-ng on RHEL and CentOS 7 (& 8)

1. Depending on whether you have RHEL or CentOS 7 (or 8), do the following:

  • On RHEL 7: Enable the so-called “optional” repository, which contains a number of packages that are required to start syslog-ng:

subscription-manager repos --enable rhel-7-server-optional-rpms
  • On RHEL 8: Enable the so-called "supplementary" repository:

subscription-manager repos --enable rhel-8-for-x86_64-supplementary-rpms
  • On CentOS: The content of this repo is included on CentOS, so you do not have to enable it there separately

2. The Extra Packages for Enterprise Linux (EPEL) repository contains many useful packages that are not included in RHEL. A few dependencies of syslog-ng are available in this repo. You can enable it by downloading and installing an RPM package (replace 7 with 8 for RHEL 8):

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

3. Add my syslog-ng-stable repo from the Copr build service, which contains the latest unofficial stable build of syslog-ng. Download the repo file to /etc/yum.repos.d/, so you can install and enable syslog-ng:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-stable/repo/epel-7 
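For reference, Copr repo files follow the standard yum repo-file format. The snippet below is only an illustration of what such a file typically contains - the exact section name, URLs and GPG key location come from Copr itself, so treat every field here as an assumption rather than the literal file:

```ini
[copr:copr.fedorainfracloud.org:czanik:syslog-ng-stable]
name=Copr repo for syslog-ng-stable owned by czanik
baseurl=https://download.copr.fedorainfracloud.org/results/czanik/syslog-ng-stable/epel-7-$basearch/
type=rpm-md
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/czanik/syslog-ng-stable/pubkey.gpg
enabled=1
```

Once the file is in place, “yum install syslog-ng” should pull in the latest stable build.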

21 January, 2020


These days, I came across spontaneous kiwi image build failures in a private OBS instance.
The images were SLES15-SP1, they had not been touched for quite some time, rebuilds were only triggered due to new packages in the update channel.
The error was grub2-install failing with the error message "not a directory".
Looking at the recent changes in the update repo showed no obvious reason (some python packages that had nothing to do with grub2-install), so I started to investigate...

... 3 days later, after following some detours, I finally found the issue.

grub2-install scans the installation device for filesystems, and probes all known (to grub2) fs types. The probe of "minix_be" fails fatally. Sometimes.

After building my own grub2 package with lots of debug printfs, I finally found out that the minix fs detection of grub2 is a little "fragile". It does the following (pseudo code):

  • grub_mount_minix(device) || return "not minix fs"
  • grub_minix_find_file("/") || fatal_error()

The problem is that grub_mount_minix() only performs fairly simple magic-number checks, which can lead to false positives.

Comparing the superblock structures of ext[234] and minix filesystems (from the grub2 source code) side by side, you see this:

struct grub_minix_sblock           | struct grub_ext2_sblock
{                                  | {
  grub_uint16_t inode_cnt;         |   grub_uint32_t total_inodes;
  grub_uint16_t zone_cnt;          |
  grub_uint16_t inode_bmap_size;   |   grub_uint32_t total_blocks;
  grub_uint16_t zone_bmap_size;    |
  grub_uint16_t first_data_zone;   |   grub_uint32_t reserved_blocks;
  grub_uint16_t log2_zone_size;    |
  grub_uint32_t max_file_size;     |   grub_uint32_t free_blocks;
  grub_uint16_t magic;             |   grub_uint32_t free_inodes;

This already hints at the issue: at the same disk location where ext2 stores the free inodes number, minix stores its magic number, which is used by grub to detect if it is a minix file system.

Now if you happen to have an ext3 file system with a free_inodes number whose lower 16 bits resemble one of the GRUB_MINIX_MAGIC numbers, chances are grub_mount_minix() will succeed, but then the attempt to access the root directory will fail with a fatal error.

This is a plain grub2 bug, which I will probably report upstream and try to get fixed.
However, I need a fix to have my images build again, and the chances of getting a fix into SLES15-SP1 are ... low (a daunting task even if you report this bug as a big SLES customer), so I built a workaround into my (locally built, lucky me...) python-kiwi package.

It basically does the following before calling "chroot grub2-install ...":

  • statvfs() to get the free_inodes number
  • check if the lower 16 bits resemble one of the MINIX_MAGIC numbers
    • if they do, touch a temporary file on the file system
    • unmount and mount again to update the superblock (I missed this at first and wondered why it did not work)
    • unlink the temporary file
  • continue as before
This workaround is ugly as hell, but it does work for me.
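The magic-number collision at the heart of this can be illustrated with a few lines of shell. This is only a sketch, not the actual python-kiwi code; the magic values are the classic minix superblock magics (the same constants the Linux kernel headers use, which grub2 also checks), and 332671 is a made-up free_inodes value whose low 16 bits happen to equal 0x137F:

```shell
# Example free_inodes value: 5 * 65536 + 0x137F, so the low 16 bits collide.
free_inodes=332671
low16=$(( free_inodes & 0xFFFF ))

collision=no
# Classic minix magics: v1, v1/30-char names, v2, v2/30-char names, v3.
for magic in 0x137F 0x138F 0x2468 0x2478 0x4D5A; do
    if [ "$low16" -eq "$(( magic ))" ]; then
        collision=yes
    fi
done
echo "low 16 bits: $low16 (collision: $collision)"
```

Creating or deleting a file changes free_inodes by one, which is why touching a temporary file (and remounting so the on-disk superblock is updated) is enough to dodge the collision.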

P.S.: the detours included first noticing that almost every change I made to the image (like wrapping grub2-install into a wrapper script for debugging) made the issue go away (because of a different free_inodes number), so after every change I needed to check that the issue was still present.


As much as I like playing in the terminal, the jury is still out as to how much I like working with Cisco. To be as objective as possible, I need to tell myself: 1) I am not familiar with the command set or how they like to do things, so I must be open-minded; 2) relax, the command line is a happy place to be; and 3) this is new territory, don’t get frustrated, just write it down and enjoy the learning process. Also, my brother-in-law, whose career is in network administration, just loves this Cisco business, so it turned out to be quite educational. The scope of this article is not how to set up a router; it is simply how I was able to get going with it.

The specific Cisco switch I configured was a Catalyst 3560 series PoE-48. I am sure these directions will work with other similar devices. Since I am an openSUSE user, the directions are tailored as such.

Minicom Installation

My first step was to find a piece of software that would work for this. I am sure there are a ton of solutions, but the one that worked most easily for me was minicom. I am open to other suggestions, of course.

This is in the official repository so you can go into the terminal and type this to install it:

sudo zypper install minicom

I would give the alternative option to do the Direct Installation but since you will be in the terminal anyway, why would you do that?


Set User Permissions

Before you run minicom you will need to add your user as a member of the groups: dialout, lock and uucp.

In all fairness, I don’t know if you actually need uucp, but since I use it for serial transfers to Arduino-type devices, I am just assuming you do.

To do this in YaST, select the Security and Users section, open the User and Group Management module and make the changes required for the user.

Alternatively, you can do this from the command line. As root, enter the following, replacing username with your own user name:

usermod -a -G dialout,lock,uucp username

The terminal method is way cooler, just saying.

Minicom Configuration

Before you can set up Minicom, you will have to determine the device name of the serial port that is connected to your computer. In my case it is ttyS0, but if you have a USB serial adapter, you may have something like ttyUSB0 or similar.
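If you are not sure which device you have, a quick way to list the candidates is to glob for them (a generic sketch; either pattern may simply not exist on your machine):

```shell
# List on-board serial ports and USB serial adapters, if any are present.
found=0
for dev in /dev/ttyS* /dev/ttyUSB*; do
    if [ -e "$dev" ]; then
        echo "serial device: $dev"
        found=$(( found + 1 ))
    fi
done
echo "$found serial device(s) found"
```

Running "dmesg | grep tty" right after plugging in a USB adapter also shows which name the kernel assigned to it.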

Now that you have an idea as to the name of your serial port you can begin the setup process. Some adjustments are needed so that you can successfully communicate with the router. In the terminal type:

minicom -s

This will bring you to an ncurses-style menu system. Arrow down to the Serial port setup entry.

To change the serial device to what you have, select A and adjust it to your particular serial

20 January, 2020

Chromecast devices in the sound settings

This morning, while browsing the web, I wanted to listen to a Podcast from my laptop, and thought “Hey, can I stream this to our stereo?”. As it turns out, that’s rather easy to achieve, so I thought I’d share it here.
The media devices in our home are connected using Chromecasts, which are small dongles that allow playing media on them, meaning you can play for example a video from your phone on the projector in the living room: very convenient.

I didn’t know if that was easily achievable with audio from my Plasma/Linux laptop and a quick search turned up “pulseaudio-dlna” which adds Chromecast devices on the local networks as output devices.

On my KDE Neon laptop, it’s as easy as installing pulseaudio-dlna from the standard repositories and then starting “pulseaudio-dlna” from the command line. Then I can pick an output device from the panel popup, and the audio stream switches to my stereo.

$ sudo apt install pulseaudio-dlna
$ sudo pulseaudio-dlna
Added the device "Stereo (Chromecast)".

18 January, 2020


Dear Tumbleweed users and hackers,

This has been a busy week when looking at the snapshots. Tumbleweed has received 6 fully tested snapshots that were published (0110, 0111, 0112, 0113, 0114 and 0115).

The most noteworthy updates there were:

  • RPM 4.15.1
  • Mozilla Firefox 72.0.1
  • Mozilla Thunderbird 68.4.1
  • KDE Applications 19.12.1
  • KDE Plasma 5.17.5
  • KDE Frameworks 5.66.0
  • Linux kernel 5.4.10
  • Kubernetes 1.17.0

That was quite a bunch of what was on last week’s todo list, but definitely not all of it, and a few new things are already incoming again. Things currently being worked on or to be shipped in the near future:

  • systemd 244 (Snapshot 0116+)
  • Mesa 19.3.2 (also snapshot 0116+)
  • Qt 5.14: we are awaiting some legal reviews. Tests by openQA have all passed
  • Python 3.8: Salt seems to be the main blocker here, as it still relies on Tornado 4.x
  • Removal of python 2

Jason Evans: Thoughts on LBRY



At the behest of people like Bryan Lunduke and DTLive on YouTube, I have started using LBRY more and last night I even uploaded a few test videos of my own. I would eventually like to put up some of my own tutorial videos.

With that said, LBRY has some serious issues. So, let’s be frank. LBRY has no rules against hardcore porn, or if they do, they are not enforced. That’s fine, and I don’t care. It’s not hard to find porn on YouTube either. However, if a porn channel doesn’t flag its own content as mature, it will show up in your search results, and there’s currently no way to flag it yourself. The suggestion I got in the help forum (aka the Discord server) was to report it to the #report-spam room, which I did. Will that result in these channels being told to reflag their content? Who knows. It seems a little iffy.

I realize that this is a startup and there is only so much time and energy to put into such things for a small team. I am rooting for them to make LBRY a great alternative to YouTube.

17 January, 2020

FOSSCOMM 2019 - Lamia

FOSSCOMM (Free and Open Source Software Communities Meeting) is a Greek conference aiming at free-software and open-source enthusiasts, developers, and communities. This year it was held in Lamia from October 11 to October 13.

It is a tradition for me to attend this conference. Usually I give presentations and, of course, staff booths to inform the attendees about the projects I represent.

This year the structure of the conference was a bit different. Usually the conference starts on Friday with a "beer event"; this time it started with registration and a presentation. I traveled from Thessaloniki by bus, which took about 4 hours. So when I arrived, I went to my hotel and then waited for Pantelis so we could go to the university and set up our booths.

FOSSCOMM 2019 - Stathis at openSUSE and GNOME booth

ALERT: Long projects presentation...

Our goal was to lay out the stickers and leaflets in the right area. This year we had plenty of projects at our booths. We had met a lot of friends at the Nextcloud conference and asked them for brochures and stickers. So this year our basic projects were Nextcloud and openSUSE (we had table cloths). We had stickers from GNOME (and I had a couple of T-shirts from GUADEC, just in case someone wanted to buy one). Since openSUSE sponsors GNU Health, I was there to inform students about it (a great opportunity, since the organizing department was the Bioinformatics department). We had brochures, stickers, chocolate and pencils from ONLYOFFICE, and we also had promo material from our friends at Turris. We are happy that the Free Software Foundation Europe gave us brochures when we were in Berlin, so we were able to inform attendees about the campaigns and the work they are doing for us. We also met the Collabora folks and asked them if they wanted us to promote them, since Collabora and Nextcloud work together. Finally, our friends from DAVx5 gave us their promo material, since the program works so well with Nextcloud.

I warned you!!! Well, on the first day we met the organizers and the volunteers. I was surprised by the number of volunteers and their willingness to help us (even with setting up the booths). The first day ended with going out to eat something. Thank you, Olga, for introducing us to FRESCO. I used to eat at FRESCO when I was in Barcelona. I guess they're not a franchise :-)

Well, Saturday started with registration. We put more swag on the booth (we saw that the night before, attendees had taken almost everything). Personally, I went to meet other projects. I was glad that my friend Julita applied to present what she's doing at the university (Linux on supercomputers). I was surprised, but happy for her, that her talk was upgraded to a keynote. Glad I met her at GUADEC. Glad also that she ran a Fedora booth and gave a different aura to the conference. Check out her blog post about her FOSSCOMM experience.

Glad I met Boris from Tirana. He did a presentation about Nextcloud

How to survive a health crisis during a FOSS conference

The title says it all. This applies not only to FOSS conferences but to events in general. Attending a conference means meeting friends (whom you usually see once a year) and having fun in general.

The organizers are responsible for everything that happens during the conference hours. We are grown people, so we have to be responsible for the rest of the day. Sometimes bad things happen (by bad, I mean health issues). Although the organizers aren't responsible for that, they are the key people who know the system in their country, and it's a good and human thing to help the person with the problem. Everyone wants to have fun and be happy at the end of the conference.

As an organizer and volunteer, I have felt the pressure of trying to have everything covered, and I have lived through a health crisis during a conference a couple of times.

Here are some points to cover before and during the conference. Please leave a comment if you want to share your experience.

Before the conference:

0. If the conference takes place within the European Union, ask the European citizens among your attendees to get the European Health Insurance Card. It doesn't cover everything, but at least you'll avoid some stupid bureaucracy. If attendees are not EU citizens, or they don't have insurance, ask them to buy some for their trip. Usually, when you buy a plane ticket, travel insurance is offered. Buy it, because it will save you a lot of trouble.

1. During the online registration, ask for an emergency contact (a name and phone number). It will help you and the doctors in case they need the medical history of the person, their allergies, etc.

2. If they don't feel comfortable providing a name and number, ask them to enable the emergency-call feature on their phone, or even better, to install an app with emergency calling, some medical history and their medication. Personally, I have Medical ID (Free) installed (you can choose another). It will help the doctors because it has a field for your medical history, so they can treat you correctly.

3. This might be very personal, but ask them to enter where they will stay. You might need to go to their hotel and bring a special medication they "forgot" to bring with them. And that leads to another issue: ask them to bring their medication to the conference with them.

4. During registration, it's good to have a field where attendees can write down a condition they feel comfortable sharing. The code of conduct committee can keep this information confidential. For example, if an attendee has epilepsy, it's good for the organizers to know and be prepared. Usually, people with epilepsy share that information with their friends, so they won't be surprised in case of a seizure.

5. During registration, ask for special types of food. It's not


With all the talk of VPN (Virtual Private Network) services to keep you safe, and my general lack of interest in the subject, I was talking to Eric Adams, my co-host on the DLN Xtend podcast, about it. He was telling me that he was hesitant to recommend any service, so he gave me some options to try out. The one I chose, after doing a little reading, was Windscribe.

I am new to the VPN game, so I want to be careful: I am not recommending this as the perfect solution, but rather demonstrating how I set it up and how I am using it on my openSUSE Tumbleweed system, much in the same way Eric showed me.


For starters, I navigated to the Windscribe website, https://windscribe.com/

It’s a nice looking site, and I like that they have a Download Windscribe button front and center. I am always annoyed when you have to go digging around to download anything; I give a resounding “boo” when I am forced to play a scavenger-hunt game to find the download link. Thank you, Windscribe, for not making this part difficult.

Another well-presented Download for Linux button. No hunting here either. Although, I did notice a lack of support for my favorite Linux distribution: they have left out openSUSE, and that makes me just a bit frowny faced. No matter, I am not a complete “noob” at this Linux-ing, and since Fedora and openSUSE packages are like close cousins (in my experience, but I am often wrong), setting this up for openSUSE was pretty darn straightforward.

These instructions are easily adapted to the fantastic Zypper package manager. This is my adaptation of their instructions for openSUSE and is well tested on Tumbleweed.

1. Get a Windscribe Account

Create a free account if you don’t have one already

2. Download and Install the repo as root

zypper ar https://repo.windscribe.com/fedora/ windscribe

This tells zypper to add the repository (ar) https://repo.windscribe.com/fedora/ and name it “windscribe”.

3. Update Zypper

zypper refresh

4. Install Windscribe-CLI

zypper install windscribe-cli

5. Switch to non-root user


6. Login to Windscribe

windscribe login

Follow the steps with your newly created account

7. Connect to Windscribe

windscribe connect

And that is all there is to it. You will be connected and ready to be part of the cool-kid VPN club.

Side Note

If you need further help with the different functions of Windscribe, run:

windscribe --help

If you need further information on how to use these other features, please visit the windscribe.com site as I am just using the basic functionality of it here.

If the windscribe daemon service does not automatically start up, you may have to start it manually as root.

systemctl start windscribe

and if you want to have it enabled at startup

systemctl enable windscribe

Those may or may not be necessary.


In a recent official communiqué, AFRINIC, the Regional Internet Registry (RIR) for Africa, wished its members and the community a happy new year while giving updates about some changes in the different committees.

In the same email, AFRINIC CEO, Eddy Kayihura, informed the community that following allegations of fraudulent manipulation of the AFRINIC WHOIS database, the matter has been referred to the Central Criminal Investigation Division (CCID) of the Mauritius Police Force for further investigation. Also, due to the serious nature of the allegation, an employee of AFRINIC, whose name has been cited in the allegation, has been immediately dismissed on grounds of very serious professional misconduct.

AFRINIC is working to "restore" the WHOIS database accuracy, as per the email.


When you host your own website on a Virtual Private Server or on a DigitalOcean droplet, you want to know if your website is down (and receive a warning when that happens). Plus it’s fun to see the uptime graphs and the performance metrics. Did you know these services are available for free?

I will compare 3 SaaS vendors who offer uptime performance monitoring tools. Of course, you don’t get the full functionality for free; there are always limitations, as these vendors want you to upgrade to a premium (paid) account. But for an enthusiast website, having access to these free basic options is already a big win!

I also need to address the elephant in the room: Pingdom. This is the gold standard of uptime performance monitoring tools. However, you will pay at least €440 per year for the privilege. That is a viable option for a small business, but not for an enthusiast like myself.

The chosen free alternatives are StatusCake, Freshping and UptimeRobot. There are many other options, but these are mentioned in multiple lists of ‘the best monitoring tools’. They also have user-friendly dashboards. So let’s run with it.


The first option I tried is StatusCake. This tool offers not only uptime monitoring but also a free speed test. Of course, you can use the free speed tests of Google and Pingdom, but those won’t let you see the day-to-day differences.

The images below show the default dashboard for uptime monitoring, the detailed uptime graph for one URL and the webform for creating a new uptime test.

Default dashboard
Uptime for 1 URL
Creation of a new uptime test

The next images show the default dashboard for speed tests, the detailed speed graph results and the webform for creating a new speed test.

Overview of speed tests
Speed test results
Creation of a new speed test

The basic plan is free, and it gets you 10 uptime tests and 1 page speed test. A paid Superior plan will set you back €200 per year, which is way too much for an enthusiast like myself.


Freshping is the second alternative that I tried. This is a Freshworks tool, so that means that it works together with other Freshworks tools such as Freshdesk, Freshsales, Freshmarketeer, Freshchat and Freshstatus. And all of these tools have a free tier. Wow, that is a lot of value for a start-up business. Of course they will like you to upgrade later on and stay with their company.

However, we are now focusing on Freshping. The images below show the default dashboard, the detailed uptime graph for one URL, and the webform for creating a new uptime test.

Default dashboard
Uptime for 1 URL
Creation of a new uptime test

It looks nice, but it gives you much less information than StatusCake. However, they do provide value. The Sprout plan is free and gets you 50 uptime tests. A paid Blossom account


Our openSUSE Chairman has some questions for the candidates for the openSUSE Board. My answers are here:

1. What do you see as three, four strengths of openSUSE that we should cultivate and build upon?

1) Fantastic coherence in the community
2) Great products with collaboration all over the world
3) Quality

2. What are the top three risks you see for openSUSE? (And maybe ideas how to tackle them?)
1) Dependency on SUSE (people and finances)
Background: At open source events I was asked very often how another company could sponsor openSUSE. We had to say that it would not be possible, because all of our money goes via a SUSE credit card and the money would be lost (the same happened with the GSoC money, which had to be transferred to other organizations because of this issue). No company wants to pay open source developers against such a background. Therefore, most openSUSE contributors work for SUSE or SUSE business partners. This topic popped up more than 3 times during my last Board membership (raised by SUSE employees each time!).
Solution: Creation of the foundation! I had to suggest this solution more than 3 times before it was accepted by the SUSE employees on the Board. I explained all the benefits: we could manage our own money, attract new sponsors, SUSE could keep more of its money for itself while still sponsoring us continuously, and we would be able to attract more contributors.

2) openSUSE infrastructure in Provo
Background: I am one of the founders of the openSUSE Heroes team and was allowed to coordinate our first wiki project between Germany and Provo. The openSUSE infrastructure there is in Microfocus hands; they take a very long time to respond to issues, and we as a community are not allowed to get access. Additionally, SUSE is no longer part of Microfocus, which will make it even more difficult to receive good support in the future.
Solution 1: Migration of all openSUSE systems from Provo to Nuremberg / Prague (though space might be lacking)
Solution 2: Migration of all openSUSE systems from Provo to any German hosting data centre with access for the openSUSE Heroes

3) Bad reputation of openSUSE Leap & openSUSE Tumbleweed
Background: We are the openSUSE project, with many different sub-projects. We don’t offer only Linux distributions, but that is what we are best known for and what most people associate us with. I gave many presentations about openSUSE during my last Board membership and represented us at different open source events. The current openSUSE Board does not do that very much; they have another focus at the moment.
Solution: We need more openSUSE contributors representing openSUSE, and I can do that as an openSUSE Board member again. After that, we can be one of the top Linux distributions again. 😉

3. What should the board do differently / more of?
The existing openSUSE Board is working mostly on the foundation topic. That is good. Thank you! But the role


Being a performance engineer (or an engineer who’s mainly concerned with software performance) is a lot like being any other kind of software engineer, both in terms of skill set and the methods you use to do your work.

However, there’s one area that I think is sufficiently different to trip up a lot of good engineers who haven’t spent much time investigating performance problems: analysing performance data.

The way most people get this wrong is by analysing numbers manually instead of using statistical methods to compare them. I’ve written before about how humans are bad at objectively comparing numbers because they see patterns in data where none exist.

I watched two talks recently (thanks for the recommendation Franklin!) that riffed on this theme of “don’t compare results by hand”. Emery Berger spends a good chunk of his talk, “Performance matters”, explaining why it’s a bad idea to use “eyeball statistics” (note: eyeball statistics is not actually a thing) to analyse numbers, and Alex Kehlenbeck’s talk “Performance is a shape, not a number” really goes into detail on how you should be using statistical methods to compare latencies in your app. Here’s my favourite quote from his talk (around the 19:21 mark):

The ops person that’s responding to a page at two in the morning really has no idea how to compare these things. You’ve gotta let a machine do it.

And here’s the accompanying slide that shows some methods you can use.

"Did behavior change?" slide from Alex Kehlenbeck's talk

Yet despite knowing these things, I still have a bad habit of eyeballing perf.data files when comparing profiles from multiple runs. And as we’ve established now, that’s extremely dumb and error-prone. What I really should be doing is using perf-diff.
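A minimal sketch of that, assuming a set of profiles named perf.data.1 through perf.data.5 (which is what the recording runs below produce; if perf or a data file is missing, the loop just reports it):

```shell
# Diff each later profile against the first run as the baseline.
baseline=perf.data.1
for f in perf.data.2 perf.data.3 perf.data.4 perf.data.5; do
    echo "== $baseline vs $f =="
    # perf diff prints per-symbol overhead deltas between the two profiles.
    perf diff "$baseline" "$f" 2>/dev/null || echo "(perf diff failed: tool or data file missing)"
done
```

perf diff highlights per-symbol changes between runs instead of leaving the comparison to eyeballs.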

Here’s a gist for random-syscall.c, a small program that runs for some duration in seconds and randomly calls nanosleep(2) or spins in userspace for 3 milliseconds. To generate some profile data, I ran it 5 times and recorded a profile with perf-record like this:

$ gcc -Wall random-syscall.c -o random-syscall
$ for i in $(seq 1 5); do sudo perf record -o perf.data.$i -g ./random-syscall 10; done
Executing for 10 secs
Loop count: 253404
[ perf record: Woken up 14 times to write data ]
[ perf record: Captured and wrote 3.577 MB perf.data.1 (31446 samples) ]
Executing for 10 secs
Loop count: 248593
[ perf record: Woken up 12 times to write data ]
[ perf record: Captured and wrote 3.185 MB perf.data.2 (28252 samples) ]
Executing for 10 secs
Loop count: 271920
[ perf record: Woken up 12 times to write data ]
[ perf record: Captured and wrote 3.573 MB perf.data.3 (31587 samples) ]
Executing for 10 secs
Loop count: 263291
[ perf record: Woken up 13 times to write data ]
[ perf record: Captured and wrote 3.641 MB perf.data.4 (32152 samples) ]
Executing for 10 secs
Loop count 

16 January, 2020


There comes a time in the lifespan of a computer when you decide that its performance is becoming a little lacking. That was the case with this computer: the state of the drive was becoming a little dubious, as it felt like it was getting slower and was having periodic file system errors. Rather than just reinstall openSUSE on the same drive, I decided I wanted to make an inexpensive upgrade, so I purchased a Solid State Drive (SSD) for it.

After completing this write-up, I realized that this is very uninteresting… so… for what it’s worth, this is basically a blathering for my own records. If you find this useful, great; if you don’t care about this bit of hardware, it is not worth your time… so… go ahead and click that [X] in the corner now.


Since this computer hangs above my sink using a VESA mount, there is a bit of work involved in pulling it down. My preferred method is to use a

Since I have taken this unit apart before, I already knew what I was doing with it. The back of the computer comes off but it does take some time to get all the snaps to release. I would really prefer that this was held together with screws instead of snaps.

Releasing the back cover from the chassis exposes the 2.5″ drive, which sits in the lower left corner of the machine.

The drive is held in a caddy that snaps into the chassis; no tools are required to remove or insert the drive. I think this is actually quite a clever design.

The last bit of assembly is snapping the back cover back together, with a little family assistance.

After hanging the computer back on its VESA mount, I proceeded to install openSUSE Tumbleweed once again, creating a 60 GiB Root partition

The one thing I can note is that the software installation proceeded much faster on the Solid State Drive than on the traditional Hard Disk Drive. I didn’t time it, but I can assure you it is noticeably faster.

Application Installation

After installation of openSUSE Tumbleweed, I began the setup of my applications that I wanted, specifically on this machine. Things to note, this computer has a touch screen interface, so I made some changes as compared to my more traditional desktop setups. For starters, I switched the menu to the “Application Dashboard” because it is more “touch friendly” than the “Application Launcher” and “Application Menu”. At least, from my perspective.

Some of my applications have shifted a bit from the original setup. Many are the same, but the core set of applications I use on this machine, not likely to change, is as follows:

Kontact Personal Information Manager

Mostly for the calendar application and it works fantastically well. I use this to synchronize my activities using the Google Calendar plugin. I am not proud of using Google Calendar but I am sort of


At the beginning of 2019 I was already a candidate for the board of openSUSE. Since two seats are now open again, I am once more available for the task and am running for election.

A general overview of my ideas and goals can be found here.

In the run-up to the election all candidates are of course open to questions from the community. I have answered a catalogue of five questions from Gerald Pfeifer, currently chairman of the board, and would like to make my answers available here.

1. What do you see as three, four strengths of openSUSE that we should cultivate and build upon?

1) Variety of features
openSUSE is mostly recognized for the distributions Leap and Tumbleweed. The broad scale of different sub-projects, and how well they fit together, is a big strength. We should find a way to get openSUSE associated with the whole project, including bits like YaST, OBS, openQA etc.

2) “Heritage”
Linux distributions come and go; some stay for years, others vanish in the depths of the internet. openSUSE has been around for decades now and needs to turn that fact into recognition as a reliable friend for running computers.

3) Permission-free community
The openSUSE community lives from people who just do things and try to convince others with results. That is not usual everywhere in FOSS; some other communities try to streamline things into committees and head honchos.

2. What are the top three risks you see for openSUSE? (And maybe ideas how to tackle them?)

1) Getting forgotten
2) Staying unknown in some parts
3) Stewing in our own juice

There might be other risks as well but these three all fit into one major topic: communication – or more the lack of it.

My idea to tackle this is to revive (somehow) and push the marketing team as a driving force for everything going from openSUSE towards the rest of the world. There are already initiatives to write articles, talk to community people in interviews and create new marketing material. This of course needs coordination and manpower, but it is already taking off quite well.

The marketing team I have in mind is an initiative working in two directions: besides doing things on our own, I want to have it open for people dropping off ideas, wishes and projects. Members can open tickets; non-members can send emails or join chat rooms to explain what they want or need. Marketing people then either fulfill the request themselves or try to find someone willing to do it, e.g. in the artwork, l10n or any other appropriate team.

The main goal is to keep a constant stream of interesting content towards the rest of the community and to anyone outside openSUSE.

The answer to the following question extends this point a bit more.

3. What should the board do differently / more of?

Communicate. Though most people here (including me) don’t like the hierarchy thing, it’s something that should not be left unused. In

15 January, 2020


Librsvg exports two public APIs: the C API that is in turn available to other languages through GObject Introspection, and the Rust API.

You could call this a use of the facade pattern on top of the rsvg_internals crate. That crate is the actual implementation of librsvg, and exports an interface with many knobs that are not exposed from the public APIs. The knobs are to allow for the variations in each of those APIs.

This post is about some interesting things that have come up during the creation/separation of those public APIs, and the implications of having an internals library that implements both.

Initial code organization

When librsvg was being ported to Rust, it just had an rsvg_internals crate that compiled as a staticlib to a .a library, which was later linked into the final librsvg.so.

Eventually the code got to the point where it was feasible to port the toplevel C API to Rust. This was relatively easy to do, since everything else underneath was already in Rust. At that point I became interested in also having a Rust API for librsvg — first to port the test suite to Rust and be able to run tests in parallel, and then to actually have a public API in Rust with more modern idioms than the historical, GObject-based API in C.

Version 2.45.5, from February 2019, is the last release that only had a C API.

Most of the C API of librsvg is in the RsvgHandle class. An RsvgHandle gets loaded with SVG data from a file or a stream, and then gets rendered to a Cairo context. The naming of Rust source files more or less matched the C source files, so where there was rsvg-handle.c initially, later we had handle.rs with the Rustified part of that code.

So, handle.rs had the Rust internals of the RsvgHandle class, and a bunch of extern "C" functions callable from C. For example, for this function in the public C API:

void rsvg_handle_set_base_gfile (RsvgHandle *handle,
                                 GFile      *base_file);

The corresponding Rust implementation was this:

pub unsafe extern "C" fn rsvg_handle_rust_set_base_gfile(
    raw_handle: *mut RsvgHandle,
    raw_gfile: *mut gio_sys::GFile,
) {
    let rhandle = get_rust_handle(raw_handle);        // 1

    assert!(!raw_gfile.is_null());                    // 2
    let file: gio::File = from_glib_none(raw_gfile);  // 3

    rhandle.set_base_gfile(&file);                    // 4
}
  1. Get the Rust struct corresponding to the C GObject.
  2. Check the arguments.
  3. Convert from C GObject reference to Rust reference.
  4. Call the actual implementation of set_base_gfile in the Rust struct.

You can see that this function takes in arguments with C types, and converts them to Rust types. It's basically just glue between the C code and the actual implementation.

Then, the actual implementation of set_base_gfile looked like this:

impl Handle {
    fn set_base_gfile(&self, file: &gio::File) {
        if let Some(uri) = file.get_uri() {
            // ...
        } else {
            rsvg_g_warning("file has no URI; will not set the base URI");
        }
    }
}

This is an actual method for a Rust Handle struct, and takes Rust types as arguments — no conversions



The Kubic Project is proud to announce that Snapshot 20200113 has just been released containing Kubernetes 1.17.0.

This is a particularly exciting release, with Cloud Provider Labels becoming a GA-status feature and Volume Snapshotting now reaching Beta status.

Release Notes are available HERE.

Special Thanks

Special thanks to Sascha Grunert, Ralf Haferkamp and Michal Rostecki who all contributed to this release, either through debugging issues or by picking up other open Kubic tasks so other contributors could focus on this release.

Please look at our GitHub projects board for an incomplete list of open tasks; everyone is free to help out, so please join in!

Upgrade Steps

All newly deployed Kubic clusters will automatically run Kubernetes 1.17.0 from this point.

For existing clusters, you will need to take the following steps after your nodes have transactionally-updated to 20200113 or later.

For a kubic-control cluster please run kubicctl upgrade.

For a kubeadm cluster you need to take the following steps on your master node: kubectl drain <master-node-name> --ignore-daemonsets.

Then run kubeadm upgrade plan to show the planned upgrade from your current v1.16.x cluster to v1.17.0.

kubeadm upgrade apply v1.17.0 will then upgrade the cluster.

kubectl uncordon <master-node-name> will then make the node functional in your cluster again.

Then, for all other nodes, please repeat the above steps using kubeadm upgrade node instead of kubeadm upgrade apply, starting with any additional master nodes if you have a multi-master cluster.
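For reference, the master-node steps above can be collected into one small helper; upgrade_master is a hypothetical name of my own, not part of the Kubic tooling, and the node name is passed in as an argument:

```shell
# Sketch of the kubeadm upgrade steps above, collected into a helper.
# upgrade_master is a made-up name; pass your master node's name.
upgrade_master() {
    node="$1"
    kubectl drain "$node" --ignore-daemonsets &&  # evict workloads from the node
        kubeadm upgrade plan &&                   # show the planned v1.16.x -> v1.17.0 upgrade
        kubeadm upgrade apply v1.17.0 &&          # perform the upgrade
        kubectl uncordon "$node"                  # make the node schedulable again
}
```

For the remaining nodes, the same shape applies with kubeadm upgrade node in place of kubeadm upgrade apply.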

Thanks and have a lot of fun!

The Kubic Team


Several packages were updated this week for openSUSE Tumbleweed as was expected after the holiday season. Five snapshots of the rolling release have been delivered so far this week after passing the rigorous testing applied by openQA.

The releases are trending incredibly stable, with trending or recorded ratings above 96, according to the Tumbleweed snapshot reviewer.

The most recent snapshot, 20200112, updated the Xfce desktop environment with xfce4-session 4.14.1 and xfce4-settings 4.14.2. Various developer-visible changes were made with Google’s 20200101 update of the re2 library for regular expressions. GNOME’s application for managing images with a user’s Flickr account, frogr 1.6, removed the deprecated use of GTimeVal. The open source platform for the scale-out of public and private cloud storage, glusterfs 7.1, fixed storage rebalancing caused by an input error and fixed a memory leak in the glusterfsd process. An ImageMagick version update optimized the special effects performance of Fx, and virglrenderer 0.8.1, a project to investigate the possibility of creating a virtual 3D GPU for use inside qemu virtual machines to accelerate 3D rendering, added some patches. The snapshot continued to update packages for KDE Applications 19.12.1 that started arriving in the 20200111 snapshot. Improvements to the scroll wheel speed were made for KDE’s Dolphin, the video editing software Kdenlive had multiple fixes and an adjustment for faster rendering, and obsolete code was removed from Applications’ diagram package umbrello. Most of the KDE Applications packages also updated the copyright year to 2020.

In addition to the KDE Applications 19.12.1 packages that began arriving in snapshot 20200111, KDE’s Plasma 5.17.5 also arrived in the snapshot. The updated Plasma fixed a regression from the “Port the pager applet away from QtWidgets” change and fixed dragging from Dolphin to a virtual desktop switcher widget. The Plasma NetworkManager also had a fix for a crash when changing advanced IPv4 configurations. The much-anticipated fix for the security vulnerability in Firefox came with the Mozilla update to Firefox 72.0.1; there were eight Common Vulnerabilities and Exposures (CVE) fixes in the update from the previous version 71 included in Tumbleweed, and 72.0.1 fixed the bug, caused by incorrect alias information in the IonMonkey JIT compiler, that attackers could use to access the computer of anyone using the browser. LibreOffice added a patch to fix the wrong ordering of buttons in a Qt interface, and curl 7.68.0 had a lengthy list of fixes and changes, including a BearSSL vtls implementation for Transport Layer Security (TLS). openSUSE’s snapper 0.8.8 version had a subpackage rewritten from Python to C++, and several YaST packages were updated, including a fix for an error during an upgrade if /var/lib/YaST2 was missing when using Btrfs.

Troubleshooting tool sysdig was updated in snapshot 20200110; it fixed a


I was on the openSUSE Board for two years in the past, and I enjoyed that time helping to move the openSUSE project along.
After a short break of about a year, I want to run for the openSUSE Board again. I am happy that the existing openSUSE Board has carried my idea of a foundation forward so successfully, and I would be glad to finalize this topic together with the other board members.
Additionally, I have watched our reputation decrease. Public representation of openSUSE by the openSUSE Board has been missing in the last year. I would increase it in the same way I have done at our university.

My main activity at openSUSE is the coordination of the global Localization Team, alongside my studies. However, that is not my sole task. I contribute to QA, give presentations, update the wiki (English and German) and create PRs on GitHub. As a computer scientist, I would be able to contribute some code, too. But SUSE does not want to see any computer scientists from our university. Therefore, my focus is on improving our study plan with open source development.

I love to represent openSUSE. So I have decided to run for reelection. 😉

Do you want to know more about my goals and thoughts? You can find more about my goals for these openSUSE Board elections on my openSUSE Board election platform.

  • Email: sarah.kriesch@opensuse.org
  • Blog: https://sarah-julia-kriesch.eu (my blog)
  • facebook: https://www.facebook.com/sarahjulia.kriesch
  • LinkedIn: https://www.linkedin.com/in/sarah-julia-kriesch-16874b82
  • Connect: https://connect.opensuse.org//pg/profile/AdaLovelace
  • I wish all candidates good luck, and hope that we‘ll see lots of voters!

    14 January, 2020


    I recently purchased a new Logitech wireless keyboard for my kitchen computer because the Bluetooth keyboard I previously used was driving me nuts. Mostly for the keyboard layout and sometimes because it didn’t want to connect. Possibly due to hardware failure or bad design. It also doesn’t have media keys so I thought it best just to replace it.

    I have previously used ltunify with success but I only used it because “L” comes before “S” so that was my first stop. Since I received feedback that I should try Solaar I did so this time. Since there isn’t an official Linux based application available from Logitech, the fine open source community has stepped in to make managing your devices simple and straight forward.


    Since this application is in the official repository for Tumbleweed and Leap, you can use the graphical direct-installation method or the more fun terminal way.

    sudo zypper install solaar

    YaST Software is an option too.

    Once it is installed, launch it using your preferred method, the menu, Krunner, etc.

    Application Usage

    Right off the cuff, this is a more user-friendly application with some additional features. For starters, whatever devices you have connected to your Logitech receiver will display a battery status. In the case below, I have a keyboard and mouse already paired with the Unifying receiver.

    Logical layout of the device listing, with verbose device information, device options and battery status. What is nice about this application is having the ability to modify the status of the device. My K400 Plus keyboard has the function keys and the media keys set up such that, by default, they are media keys. This is not what I prefer, so I can swap the Fx function here.

    Pairing A New Device

    My reason for using this application was to pair my new keyboard with an existing receiver. I don’t see the value in having more than one USB port populated unnecessarily. To pair a new device is very straightforward: select the root “Unifying Receiver” and select “Pair”. The dialog will pop up and ask you to turn on the device you want to pair.

    When you do that, the receiver will grab the new device, associate it and have it available to be used.

    That is all there is to it. Each device will have its own set of options that are adjustable per your preferences. This Performance MX Mouse has more options than the value M175 Mouse.

    That just about does it for Solaar. There are some other fun features, like getting device details, but I don’t really want to post those here because I don’t really know if that is information I should be sharing!

    Final Thoughts

    Having Solaar in the system tray is quite handy. The reality is, I don’t need it all the time, but having it there to manage your devices is very convenient. It’s nice to know that you

    13 January, 2020


    Fedora is a Linux distribution that has been around since the beginning of my Linux adventure and for which I have incredible respect. I have reviewed Fedora before, and it was a good experience. The last time I used Fedora, I used Gnome, and since I am kind of Gnome-fatigued right now, I thought it better to use a different desktop, one where I can easily shape the experience to my needs. Clearly, there are only two options, but I chose to go with the premier, most easily customized desktop: KDE Plasma. Ultimately, I want to compare my Fedora Plasma experience with my openSUSE Tumbleweed Plasma experience. I have no intention of switching distros but I do like to, from time to time, see how other distributions compare. Of all the distributions available outside of openSUSE, Fedora and Debian are the two that interest me the most, but for different reasons.

    This is my review as a biased openSUSE Tumbleweed user. Bottom Line Up Front: Fedora is a nearly perfect [for me] distribution that is architecturally and fundamentally sound from the base upward. It is themed just enough, out of the box, to not annoy me with any irritating impositions. It really feels like I have been given the keys to a fantastic house, albeit a bit spartan, waiting for me to make it my own. Technically speaking, there is nothing I dislike about Fedora. I could get along just fine in Fedora Land, but openSUSE Land edges it out for me with the Tumbleweed convenience and the broader hardware support.


    I want to be careful how I describe my experience here; I do not want to disparage the installer at all, and I blame any issues I had with it on me. As for what I appreciate about the installation process, I am grateful that I can go right into the installation immediately.

    There is something spectacularly simple and clean about the boot screen. No frills, no fluff. Just down to business. If that doesn’t say Fedora, I don’t know what does!

    The next step will be to set your language and location. The next screen is an Installation Summary screen. I like this and I also don’t like this. I like it because it allows me to jump around, I don’t like it because I am not used to this layout. You can’t proceed with the installation until you complete all the steps, so that is good.

    I started with the Root and User creation settings. This is very straightforward. I like the root options that are presented: to lock the root account and whether or not to allow SSH login with a password.

    For the Installation Source, I am less impressed with this section as compared to the openSUSE installation method. Maybe I don’t understand this part exactly; I was a bit confused. The correct choice would be “On the Network” from here, leaving it on “Closest mirror”.

    What I like about the openSUSE method

    11 January, 2020

    Sebastian Kügler: Lavalamp

    23:42 UTC

    My Lavalamp

    For our living room, I wanted to create a new lamp which isn’t as static as most other lamps, something dynamic but nothing too intrusive.
    I was also interested in individually addressable LED strips for quite some time, so I started prototyping in last year’s late summer. In December, I finished this project and called it lavalamp, after the classic decorative lamp with rising blobs of fluid which was invented in the 60s.

    Lavalamp Show

    The lamp uses LEDs for its lighting: there are 576 individually addressable LEDs, which shine in four directions. The lamp can produce dynamic patterns and effects. There’s a web interface I can use to change its effect, but it also reacts to its surroundings, steered by Home Assistant, my home automation platform of choice. This means that the lamp is automatically switched off when we go to bed, leave the house or want to watch a movie. It can also be voice-controlled.


    The ESP8266 microchip running the LEDs

    The lamp’s electronic components are an ESP8266 microchip and WS2812b addressable LEDs. (This means I can change the color and brightness of every single LED individually.) I’ve written custom firmware for the microcontroller (you can find it on my GitHub page); it basically runs a small webserver on its Wifi interface, offering configuration data in JSON format to a mobile-friendly JavaScript application which can run in a browser on your phone. The LEDs are driven by the FastLED library, which is a fantastic piece of software offering fast mathematical functions and low-level data protocols to manipulate the LEDs in my lamp.
    With this setup, I achieve 50 frames per second for most of the effects that I’m using in the lamp, so the animations all look smooth and feel quite natural. I think that’s really impressive, given the rather low specs of the microchip and its price point of around 2€.

    The case of the lamp has a wooden foot, sitting around a concrete block which holds the lamp firmly in place and provides some isolation in case anything goes wrong in the electronic parts. The light from the LEDs is diffused through frosted glass, giving it a nice glow.

    Building the lamp was a fun project. I didn’t give myself a deadline, but rather took all the time I needed, spread out over a period of four months, to get all the individual parts in place. I had to learn quite some new tricks, which made this project really interesting, from cutting and building the wooden case to soldering and programming the microchip. In the end, I’m really happy with how the lamp turned out. It brings life into our place, while usually not being too distracting.
    For further improvements, I built a USB port into the foot of the lamp, so I can just plug in my laptop and add new effects or tweak existing ones.
    I’m not quite done with it


    The killer feature of the Plasma Desktop has been the KDE Personal Information Manager, Kontact. I have been using it since the 2004 time frame, and although we have had a tenuous relationship over the years, specifically around the switch to Akonadi and the pain that came with it in the early years, I actively use Kontact on multiple machines for its feature richness and haven’t found anything in existence that I like better. I also exclusively use Kontact on openSUSE Tumbleweed with the Plasma Desktop Environment.

    I have decided to publish my reference concerning the maintenance it requires. I could be an edge case, since I have five mail accounts and multiple calendar accounts as well. Historically, I have had issues where losing network connection, regaining it, or suspending and resuming my machine over a period of time would cause the thing to have fits. So, here are my fixes, for whenever the need arises.

    One quick caveat, your results may vary and don’t hold me responsible for your data.

    Problem 1: Akonadi Gets Stuck and Stops Checking Email

    This is rare as of late but 3 or 4 years ago, this was indeed a problem. I think I have used this once in the last month (Jan 2020 at the time of writing) but this is what I do.


    In the terminal, or even in Krunner, type the following:

    akonadictl stop

    This will stop all the processes. Sometimes they can hang, and this will gracefully shut the thing down. At this point, you can start it back up in Kontact, or in the terminal or Krunner type:

    akonadictl start

    If you do this in the terminal, you can enjoy the scrolling of all the activity going on and gain some appreciation for what it is doing.

    After that, you should be good to go.

    Problem 2: Clearing out Cached Data

    From time to time, I notice that the Akonadi cache under ~/.local/share starts to grow an awful lot. Part of it is that I don’t delete emails, but there is a percentage of that data that is vestigial and can easily be cleared out. This requires two commands and a bit of patience on your end.

    Start out by running a “file system check” on the Akonadi database in the terminal.

    akonadictl fsck

    This takes a bit and will display all found unreferenced external files and such. Once complete, run this:

    akonadictl vacuum

    This process will optimize the tables and you will recover a bit of space. I admit, this doesn’t make a huge change but it will clear things out. The last time I did it, I only freed up a few megabytes of data, but it’s something.
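For my own convenience, the two fixes above can be wrapped into small shell helpers; the function names here are made up, but the akonadictl calls are exactly the ones already shown:

```shell
# Wrappers for the two maintenance flows described above.
# Function names are hypothetical; the akonadictl subcommands are real.

# Problem 1: Akonadi gets stuck and stops checking email.
restart_akonadi() {
    akonadictl stop     # gracefully shut down all the Akonadi processes
    akonadictl start    # bring everything back up again
}

# Problem 2: clear out cached data (run with Akonadi running).
clean_akonadi_cache() {
    akonadictl fsck     # report unreferenced external files and such
    akonadictl vacuum   # optimize the tables and reclaim some space
}
```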

    Final Thoughts

    You know those stories of people that have these crazy habits that don’t make sense, things they do that don’t really help or solve a problem like making sure the spoons are organized in just the right fashion


    Coming back strong in 2020… no… not coming back… I haven’t been gone, just delayed.

    12th Noodling, a dozen, a foot or a cup and a half of crap?

    AMD System from Yester-Years Parts

    I recently posted about my computer build. In short, this is a computer built from parts that are in no way considered top of the line. They are all quite old, and that did pose a few problems: for one, this motherboard would not boot from a software RAID pool. I was able to bootstrap the BTRFS RAID pool with a separate drive and root partition. It did add some complexity to my system, but I think it works out okay.

    Building a system is something I have wanted to do for quite some time, as in several years, but time, finances and decision vapor-lock had kept me from it. What pushed me over was a fortuitous conversation at a Christmas gathering last year: I struck up a nerdy conversation with a computer store owner who ultimately gave me this giant Thermaltake case without a motherboard. A few weeks later, in another fortuitous happening, I was given a bunch of old computer equipment, and an AM3 motherboard among the rest of the aged equipment drove the rest of the build. My course of action was to stuff in the most memory and the fastest processor I could, which is what I did, and I am happy with it. I am not going to belabor that process as I have talked about it before, and I have a link you can follow if you are interested in those details.

    As a result of this, I had tons of fun, it was a great learning experience and that same guy gave me another case, not as big but far more robust in design with a water cooler. I now want to build another machine but I am thinking a more pure gaming machine and leave this current machine to be my server workstation. I don’t know when I would get to this but I think this one will be a project I do with my kids. Use it as a teaching opportunity and turn it into a kind of family event. Currently, the machine has a Core 2 Duo CPU platform of some kind. I think I would probably do another AMD build, something newer that can take advantage of these new fancy CPUs coming out. I still wouldn’t go bleeding edge but certainly something closer than what I have now.

    Emby Server Summation

    I have fully evaluated my use of Emby and given a little write up on it. I described the installation process, setting it up, importing my media files and so forth. I want to just summarize the highlights and the lowlights of my experience before I begin my next testing of Plex.

    What I like

    Emby is super easy to set up. It is nothing more than copying one line into a

    10 January, 2020


    Dear Tumbleweed users and hackers,

    Week 2 this year brings you what many have been asking for: Kernel 5.4! Yes, after the holiday season came to an end and the right people were back in business, we could address the issue around the invalid certificate chain and correct it. In total, week 2 brought you five snapshots: 0103, 0105, 0106, 0107 and 0108, with all these nice changes:

    • Linux kernel 5.4.7 (Snapshot 0108, fresh off the press)
    • NetworkManager 1.22.2
    • Flatpak 1.6.0
    • Mesa 19.3.1
    • Rust 1.40

    Also, the staging projects are making progress and some of the long-standing areas are likely to be checked in rather soon. The major things coming your way in the near (or maybe distant) future are:

    • RPM 4.15.1: tests have all passed, you can expect this to come your way next week
    • systemd 244: one package’s test suite keeps on failing without anybody understanding what’s going on. Since this is a legacy package, it was decided to disable the test suite. So systemd 244 should now also be ready and be checked in soon
    • Qt 5.14: we are awaiting some legal reviews. Tests by openQA have all passed
    • Kubernetes 1.17: the last tests were still failing, but just today we received a submission that supposedly fixes it
    • Python 3.8: not going to happen very soon – but it’s being worked on
    • Removal of python 2: python 2 is now officially no longer maintained upstream. Work is underway to lower the usage footprint of python 2, to the point where it can be removed from the distro
    • SQlite 3.30: python-Django is still failing
    • More kernels from the Linux 5.4 branch; you can expect them to come more frequently again
    • Mozilla Firefox 72
    • KDE Applications 19.12.1
    • KDE Plasma 5.17.5

    As you can see, the holiday is over – and things are being thrown at me at a high pace, in the hope of getting it all integrated into Tumbleweed quickly. I’ll do what I can – but OBS and openQA resources tend to be limited 🙂


    Right off the cuff, I should note that this will work on other Linux distros too; I am just focusing on openSUSE because that is my jam. I have been using this on openSUSE Tumbleweed as of snapshot 20200103. It should also work on Leap as of 42 and newer (that means Leap 15.x is good to go, in case there was any question).

    The reason this application excites me so is that I use several AppImages on my system. Which ones, you may ask? I’ll tell you: xLights, which I use for my Christmas light display, and VirtScreen, which I use when I am remote and need to turn my laptop or phone into a second display. This application is super handy as it will not only create links in my menu to the AppImages, it will also copy the *.AppImage file into a designated folder, in my case ~/Applications, which is the default. At first, I wasn’t sure about it, but after noodling it around a bit, I am totally good with it.


    The RPM for this isn’t in the repository and if you are interested in the non-root user installation, there is a “Lite” version but it is still new and not a recommended solution at this time.

    Navigate to the GitHub page of the project for the RPM. I am using the 64-bit version and, thinking about it, I don’t actually know if there are any 32-bit AppImages; at least, I wouldn’t likely consider running an AppImage on my 32-bit machines. Regardless, there are several packages to choose from. Pick the correct one.

    AppImageLauncher on GitHub

    Download the appropriate RPM for your openSUSE (or other Linux, if that’s what you are into); at the time of writing, the version I am using is:


    Installation is very straightforward. I download all my RPMs to ~/Downloads/rpms and use the zypper command to install it.

    sudo zypper install ~/Downloads/rpms/appimagelauncher-2.1.0-travis897.d1be7e7.x86_64.rpm

    The installation didn’t pull in any other packages from the repository. Zypper does, however, give you a little warning.

    This is just telling you that the package is not signed. That is a security concern, so if you do not trust the source of the RPM, you may as well bail out here, because the rest of the process isn’t going to work for you.

    Assuming you are okay with this situation and want to proceed, type “i” and hit enter. That will complete your installation.
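    If you have already decided you trust the source and want to skip the interactive prompt (say, in a script), a sketch of the non-interactive route looks like this. The file path matches the download location used above; --no-gpg-checks is zypper's global switch for exactly this situation, so only use it when you trust the package.

```shell
# Optional: inspect the package signature yourself first.
rpm --checksig ~/Downloads/rpms/appimagelauncher-2.1.0-travis897.d1be7e7.x86_64.rpm

# Install without being prompted about the missing signature.
sudo zypper --no-gpg-checks --non-interactive install \
    ~/Downloads/rpms/appimagelauncher-2.1.0-travis897.d1be7e7.x86_64.rpm
```

For a one-off desktop install, the interactive prompt is arguably the safer habit; this variant is mainly useful for automation.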

    Side note, on most desktop environments in openSUSE, you can install the RPM graphically too, but I just happen to think the terminal is more fun.

    First Run

    When you first run AppImageLauncher, you are presented with some options. The important one is where to put the AppImages you launch.

    AppImageLauncher runs a service in the background, and when you launch an AppImage you are given two options: integrate the AppImage into your system, or run it once without integrating it.

    09 January, 2020


    The year of 2020, at least in the openSUSE world, is starting out to be pretty stable. A little more than a week into the new year, there have been five openSUSE Tumbleweed snapshots released.

    The releases, with the exception of one, are either posting a stable rating or are trending at a stable rating, according to the Tumbleweed snapshot reviewer.

    With the release of snapshot 20200107, more OpenGL and Vulkan driver features and improvements came in the update of the Mesa 19.3.1 package. The newer version also provides better AMD Radeon Accelerated Processing Unit (APU) performance.

    The bluez-tools package, a set of tools to manage Bluetooth devices on Linux, had a minor update from the previous three-year-old package included in Tumbleweed. GNOME’s web browser package epiphany provided some security AdBlocker preferences in the new version. Message transfer agent exim reduced the start-up process initialization and fixed more than a half dozen bugs in the new version. KDE’s kdevelop5 5.4.6 version fixed some wrong text in the license. kismet, a network detector, packet sniffer, and intrusion detection system for wireless networks, updated to its December release in the snapshot. One package update that stands out in the snapshot is the release of the fingerprint reader package for Linux devices, libfprint 1.0; this first major release provides better documentation and bug fixes related to restarting a failed verification immediately. The osc 0.167.2 package fixed a regression in osc chroot. Other packages updated in the snapshot were rubygem-parser and tigervnc 1.10.0, among others.

    Snapshot 20200106 provided an update of ImageMagick that fixed a bug for custom profile (CMYK or RGB) conversions; the -layers optimize option now requires a fully transparent previous image. Argyll Color Management System package argyllcms had a new major version update; the 2.1.1 version removes the bundled zlib source, which could trigger a fatal rpm check failure on Leap 15.x. The library for handling OpenGL function pointer management, libepoxy 1.5.4, requires only the python3-base package for building instead of the full python3 package. GNOME’s photo manager shotwell 0.30.8 updated translations and fixed Tumblr publishing.

    Several updated YaST packages came in the 20200105 snapshot. Improved sorting by device name and size in tables came with the yast2-storage-ng 4.2.67 update, and an improved warning when all repositories are disabled came with the yast2-packager 4.2.42 update. The same version number of yast2-network added support for reading and writing the Maximum Transmission Unit (MTU). The libstorage-ng 4.2.44 package improved the sort key for block devices, and libyui 3.9.1 added a sort key to the table cell. python-passlib 1.7.2 added some new features such as Argon2 support, and smartmontools 7.1, a utility program to control and monitor computer storage systems, added enhancements for AT Attachment (ATA) ACS-4 and ACS-5.

    08 January, 2020


    Making videos is not exactly my strong suit, but it doesn’t have to be for me to enjoy it. Lately, I have been dipping my toes into the world of video content creation. Most of it is new to me, as I haven’t really had the need before. Recently, a need popped up for doing some video editing and I decided to give Kdenlive a try. You have to start somewhere, and Kdenlive seemed a good place: many of the independently created shows out there use it, it is part of the KDE project, and there are a LOT of tutorials on YouTube.

    Keep in mind, I have some very basic needs: simply chaining clips together, a title screen, and a little background music. These are extremely minimal requirements. The nice thing about Kdenlive is that it is easy enough to get going with, but brimming with features to keep you dinking around with it continually, and even if you do come to learn every feature, the Kdenlive Project will come along and bring you an update.


    Kdenlive is available in the main repositories for both Leap and Tumbleweed. To install the latest version for Leap, you will have to add the Experimental KDE:Applications repository. 19.12 is available in the official Tumbleweed repository.

    To install it with the graphical Direct Installation, navigate here.


    For Tumbleweed, in a terminal:

    sudo zypper install kdenlive

    And that is all it takes.
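    For Leap, a sketch of adding the KDE:Applications repository first might look like the following. The exact repository path is an assumption based on the usual OBS download URL layout; check the KDE:Applications project page on the Open Build Service for the path matching your Leap version.

```shell
# Add the KDE:Applications OBS repository (path is an assumption;
# adjust "openSUSE_Leap_15.1" to your installed Leap version).
sudo zypper addrepo \
    https://download.opensuse.org/repositories/KDE:/Applications/openSUSE_Leap_15.1/ \
    kde-applications
sudo zypper refresh
sudo zypper install kdenlive
```

Keep in mind this pulls newer KDE application packages than Leap ships by default, so it is a mild trade of stability for freshness.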


    Right off the bat, Kdenlive is a great-looking application; it has a clean and pleasant interface that is incredibly functional. I use a modified version of Breeze Dark, what I call openSUSE Breeze Dark. The dark screen with the green tones makes for a comfortably openSUSEy environment for extended hours of work.

    I have been using Kdenlive for about a year or so and it has been great since day one. I must add the caveat that I don’t do anything terribly complex in Kdenlive. I mostly use fades and dissolves; in fact, that is my primary usage of it.

    For one video, I rotated the screen 180° because I purposely recorded it upside down so that I wouldn’t crash into the camera with my big stupid nose. In retrospect, this video of the hard drive caddy was probably a waste of time to do because it is such a basic and elementary feature to highlight on the computer, but it was a good exercise in learning some of the other features in Kdenlive.

    What was handy and very quick to do were my Christmas light musical sequence videos. I recorded the video and added the music as a post edit. Kdenlive made it easy to do. I just lined up the flashes with the appropriate spot in the music.

    Kdenlive really has made all these little things easy to do and they made it possible without having to spend loads of cash for a nonsense hobby that fills the little voids and


    One of the main reasons I built a computer was to host my video content on my system and serve it to other machines. I had heard about having something like Netflix or Hulu at home in the form of Plex. I have known others that have done this and have always been impressed by it. My first stop in exploring media servers on Linux was Emby. I chose it largely because I had heard of Plex and wanted to try something that was open-source based; more on that later. At the very beginning of this exercise, I decided I wanted to try out three different server products: Plex, Emby and Jellyfin.

    This is my review, with no real expectations other than to easily have access to my movies and TV shows from any device in the house. This is a review of only the free services, not the paid features. Bottom line up front: I like it, and it has few issues.


    The installation was surprisingly easy to do with Emby on openSUSE. Instructions for openSUSE were right there, ready and waiting for me to utilize them. Navigate to:


    There is a nice little drop-down where you can select “OpenSuse” (very sadly cased incorrectly, but that is a small detail, nothing terrible; I’ve made mistakes in casing the project name too).

    There are six options from which to choose. Two are for the x86_64 architecture; the other four are ARM options. Since I am installing on the 64-bit x86 architecture and I am not interested in beta testing Emby, I chose the first option.

    The command uses zypper to install an RPM from a GitHub repository. This doesn’t add a package repository or anything, so at this point I am unsure how updates are handled. From what I can tell, I will have to install updates manually. I’m sure there is a better way.
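    A manual update would presumably look like the original install: check what you have, then point zypper at the newer RPM on the release page. The package name emby-server and the GitHub URL pattern below are assumptions; the version is a placeholder, so substitute the real file name from the releases page.

```shell
# See which version is currently installed (package name assumed).
rpm -q emby-server

# Install the newer release over the old one; URL pattern and
# <version> are placeholders taken from the project's release page.
sudo zypper install \
    https://github.com/MediaBrowser/Emby.Releases/releases/download/<version>/emby-server-opensuse_<version>_x86_64.rpm
```

Not elegant, but workable until a proper repository turns up.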

    After the installation, open a web browser to http://localhost:8096 to perform the setup of the service: things like your user information.
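    If the page doesn't answer, the server process may simply not be running yet. A quick sketch of checking and starting it with systemd; the unit name emby-server is an assumption based on the package name.

```shell
# Enable the server at boot and start it now (unit name assumed).
sudo systemctl enable --now emby-server

# Confirm it is up, then open the setup page.
systemctl status emby-server --no-pager
xdg-open http://localhost:8096
```

After that, the browser-based setup wizard takes over.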

    The next step will be to set up your media library. You select your content type, a display name for it, the location, and other bits and flags you find important, like language settings and metadata downloaders.

    There are more library settings here than I really know what to do with. I filled out what made sense, set the language preferences to English and moved on with the process.

    I added my movies, TV shows and Documentaries folders.

    Then I moved on to the next section, where I again set the metadata language and configured the server to allow remote access. I haven’t actually set my firewall to allow remote access yet, so I can’t test the performance remotely.

    Lastly, you will have to agree to the terms of service and you’re done!

    First Run and Impressions

    Running this media server is as easy as navigating to http://localhost:8096 and signing into the service, not
