Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Saturday
17 August, 2019


Klaas Freitag: ownCloud and CryFS

20:30 UTC


It is a great idea to encrypt files on the client side before uploading them to an ownCloud server, if that server is not running in a controlled environment, or if one simply wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don’t agree, because it combines two very complex topics into one code base and makes the code difficult to maintain. The risk of ending up with a code base that nobody is able to maintain properly any more is high. So let’s avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot decrypt the files easily. To me, that is not overly important, because I want to sync files between different clients, which probably is the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and in principle also works well with file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced whenever one bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay file system changes the crypto files in the cipher dir. These are synced by the ownCloud client.
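As a sketch of that setup, assuming the cryfs FUSE filesystem and the one-shot owncloudcmd sync client are installed (all paths and the server URL below are placeholders):

```shell
# Create/mount the encrypted filesystem: ciphertext lives in ~/cipher,
# the decrypted view appears in ~/plain (cryfs prompts for a password).
cryfs ~/cipher ~/plain

# Work with your files in the mounted, decrypted directory.
cp report.odt ~/plain/

# Sync the *cipher* directory, e.g. with the one-shot command-line
# client (the server URL is a placeholder):
owncloudcmd ~/cipher https://owncloud.example.com/owncloud

# Unmount the decrypted view when done.
fusermount -u ~/plain
```

The desktop sync client can equally be pointed at ~/cipher as its local sync folder; the key point is that only encrypted blocks ever leave the machine.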

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.

The reason for that is that CryFS chunks all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial because file names and sizes are not reconstructable from the cipher dir, but it hits one of the weak spots of ownCloud sync: ownCloud is traditionally a bit slow with many small files spread over many directories. That shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory makes the ownCloud client upload for 6:30 minutes.

Adding another four files with a total size of a bit over 1 MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case, like editing an existing office text document locally, is not that bad. CryFS splits an 8.2 kB LibreOffice text document into three 16 kB files in three directories here. When one word gets inserted, CryFS needs to create three new dirs in



Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.

We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.

It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.

What is Kata

As already mentioned, Kata is a container runtime focusing on security and on ease of integration with the existing containers ecosystem. If you are wondering what a container runtime is, this blog post by Sascha will give you a clear introduction to the topic.

Kata should be used when running container images whose source is not fully trusted, or when allowing other users to run their own containers on your platform.

Traditionally, containers share the same physical and operating system (OS) resources with host processes, and specific kernel features such as namespaces are used to provide an isolation layer between host and container processes. By contrast, Kata containers run inside lightweight virtual machines, adding an extra isolation and security layer that minimizes the host attack surface and mitigates the consequences of a container breakout. Despite this extra layer, Kata achieves impressive runtime performance thanks to KVM hardware virtualization, and when configured to use a minimalist virtual machine manager (VMM) like Firecracker, a high density of microVMs can be packed on a single host.

If you want to know more about Kata features and performances:

  • katacontainers.io is a great starting point.
  • For something more SUSE oriented, Flavio gave an interesting talk about Kata at SUSECON 2019.
  • Kata folks hang out on katacontainers.slack.com, and will be happy to answer any questions.

Why is it important for Kubic and openSUSE

SUSE has been an early and relevant open source contributor to containers projects, believing that this technology is the future way of deploying and running software.

The most relevant example is the openSUSE Kubic project, which is a certified Kubernetes distribution and a set of container-related technologies built by the openSUSE community.

We have also been working for some time on well-known container projects, like runC, libpod and CRI-O, and for the past year we have also been collaborating with Kata.

Kata complements other, more popular ways to run containers, so it makes sense for us to work on improving it and to ensure it integrates smoothly with our products.

How to use

While Kata may be used as a standalone piece of software, its intended use is to serve as a runtime when integrated in a container engine like Podman or CRI-O.

This section shows a quick and easy way to spin up a Kata container using Podman on openSUSE Tumbleweed.

First, install the Kata packages:

$ sudo zypper in katacontainers

Make sure your system is providing the needed set of hardware virtualization features required by Kata:

$ sudo kata-runtime 
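kata-runtime provides a kata-check subcommand for this verification. A sketch of the check plus a first Kata container via Podman (the runtime binary path may differ per distribution, and the alpine image is just an example):

```shell
# Verify the host supports the virtualization features Kata needs.
sudo kata-runtime kata-check

# Run a container under the Kata runtime with Podman; the path to the
# runtime binary may vary depending on how the package installs it.
sudo podman run --runtime /usr/bin/kata-runtime -it --rm alpine uname -r
# The kernel version printed comes from the lightweight guest VM,
# not from the host kernel.
```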


I’m one of those people you meet in life that ‘things’ happen to. I suppose I should be grateful; I can at least say my life is never boring? Every Saturday Charlie and I (Charlie is a dog, by the way) go to put fuel in my car and then we have a little ride … Continue reading "I’m staying in today!"


Friday
16 August, 2019



Dear Tumbleweed users and hackers,

Week 2019/33 ‘only’ saw three snapshots being published (3 more were given to openQA but discarded).  The published snapshots were 0809, 0810 and 0814 containing these changes:

  • NetworkManager 1.18.2
  • FFmpeg 4.2
  • LibreOffice 6.3 (RC4)
  • Linux kernel 5.2.7 & 5.2.8
  • Libinput 1.14

And those updates are underway for the next few snapshots:

  • Addition of ‘openSUSE-welcome’
  • KDE Applications 19.08.0
  • KDE Frameworks 5.61.0
  • Replace ‘pkg-config’ with a more modern implementation, ‘pkgconf’. The change is supposed to be transparent once things are all worked out.
  • GLibc 2.30
  • CMake 3.15.x (breaks nfs-ganesha)
  • Swig 4 (still waiting for a libsolv fix 🙁 )
  • Inclusion of python-tornado 5 & 6
  • gawk 5.0: Careful: we’d seen some build scripts failing already
  • util-linux 2.34: no longer depends on libuuid. Some packages assumed libuuid-devel being present to build. Those will fail.

Thursday
15 August, 2019



Wednesday
14 August, 2019



Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle much of the “grungy” work of dealing with IMAP clients, email parsing, and other fun things when it comes to the free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck; it’s just that mutt sucks less than everything else (that’s literally its motto).

I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow for the stable tree is much different from the new-patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. The first is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists that I read in full because I am a maintainer of those subsystems (like USB, or stable). The second inbox consists of other mailing lists that I do not read every message of, but review as needed and can reference when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, to ensure I have a local copy to search from when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a local copy of, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using mbsync, which
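For reference, a minimal mbsync configuration for pulling an IMAP inbox into a local maildir might look like this (a sketch; all account names, hosts, and paths are placeholders, and keyword spellings should be checked against mbsync(1) for your version):

```
# ~/.mbsyncrc — illustrative sketch, not Greg's actual setup
IMAPAccount kernel
Host imap.example.org
User gregkh
PassCmd "pass show imap"
SSLType IMAPS

IMAPStore kernel-remote
Account kernel

MaildirStore kernel-local
Path ~/mail/
Inbox ~/mail/INBOX

Channel inbox
Master :kernel-remote:
Slave :kernel-local:
Patterns *
Create Slave
SyncState *
```

Running `mbsync inbox` then mirrors the remote folders into ~/mail/, where any maildir-aware client can read them.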



July and August are very sunny months in Europe… and chameleons like sun. That’s why most YaST developers run away from their keyboards during this period to enjoy vacations. Of course, that has an impact on the development speed of YaST and, as a consequence, on the length of the YaST Team blog posts.

But don’t worry much, we still have enough information to keep you entertained for a few minutes if you want to dive with us into our summer activities, which include:

  • Enhancing the development documentation
  • Extending AutoYaST capabilities regarding Bcache
  • Lots of small fixes and improvements

AutoYaST and Bcache – Broader Powers

Bcache technology made its debut in YaST several sprints ago. You can use the Expert Partitioner to create your Bcache devices and improve the performance of your slow disks. We even published a dedicated blog post with all details about it.

Apart from the Expert Partitioner, AutoYaST was also extended to support Bcache devices. And this time, we are pleased to announce that … we have fixed our first Bcache bug!

Actually, there were two different bugs on the AutoYaST side. First, the auto-installation failed when you tried to create a Bcache device without a caching set. Second, it was not possible to create a Bcache with an LVM logical volume as the backing device. Both bugs are gone, and now AutoYaST supports those scenarios perfectly.

Configuring Bcache and LVM with AutoYaST

But Bcache is quite a young technology and it is not free of bugs. In fact, it fails when the backing device is an LVM logical volume and you try to set the cache mode. We have already reported a bug to the Bcache crew and (as you can see in the bug report) a patch is already being tested.

Enhancing Our Development Documentation

This sprint we also touched our development documentation; specifically, we documented our process for creating the maintenance branches for released products. The new branching documentation describes not only how to actually create the branches but also how to adapt all the infrastructure around them (like Jenkins or Travis), which requires special knowledge.

We will see how helpful the documentation is next time somebody has to do the branching process for the next release. 😉

Working for a better world YaST

We do our best to write code free of bugs… but some bugs are smarter than us and they manage to survive and reproduce. Fortunately we used this sprint to do some hunting.



Version 7 of the Elastic stack was released a few months ago, and brought several breaking changes that affect syslog-ng. In my previous blog post, I gave details about how it affects sending GeoIP information to Elasticsearch. From this blog post you can learn about the Kibana side, which has also changed considerably compared to previous releases. Configuration files for syslog-ng are included, but not explained in depth, as that was already done in previous posts.

Note: I use a Turris Omnia as log source. Recent software updates made the Elasticsearch destination of syslog-ng available on the Turris Omnia. On the other hand, you might be following my blog without having access to this wonderful ARM Linux-based device. In the configuration examples below, Turris Omnia does not send logs directly to Elasticsearch, but to another syslog-ng instance (running on the same host as Elasticsearch) instead. This way you can easily replace Turris Omnia with a different device or loggen in your tests.

Before you begin

First of all, you need some iptables log messages. In my case, I used the logs from my Turris Omnia router. If you do not have iptables logs at hand, several sample logs are available on the Internet. For example, you can take Anton Chuvakin’s Bundle 2 ( http://log-sharing.dreamhosters.com/ ) and use loggen to feed it to syslog-ng.
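For example, assuming the sample file is saved as access.log, replaying it to a syslog-ng TCP listener on port 514 could look like this (a sketch; flag spellings should be checked against loggen(1) for your version):

```shell
# Replay a saved log file over TCP to a local syslog-ng listener,
# sending the stored lines as-is instead of generated messages.
loggen --inet --stream --read-file access.log --dont-parse 127.0.0.1 514
```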

Then, you also need a recent version of syslog-ng. The elasticsearch-http() destination was introduced in syslog-ng OSE version 3.21.1 (and PE version 7.0.14).

Last but not least, you will also need Elasticsearch and Kibana installed. I used version 7.3.0 of the Elastic Stack, but any version after 7.0.0 should be fine. Using version 7.1.0 (or later) has the added benefit of having basic security built in for free: neither payment nor installation of 3rd-party extensions is necessary.

Network test

It is easy to run into obstacles when collecting logs over the network. Routing, firewalls, and SELinux can all prevent you from receiving (and sometimes even sending) log messages. This is why it is worth taking an extra step and testing log collection over the network. Before configuring Elasticsearch and Kibana, make sure that syslog-ng can actually receive those log messages.

Below is a very simple configuration to receive legacy syslog messages over TCP on port 514 and store them in a file under /var/log. Append it to your syslog-ng.conf, or place it in its own file under the /etc/syslog-ng/conf.d/ directory if your system is configured to use it.

# network source
source s_net {
    tcp(ip("0.0.0.0") port("514"));
};

# debug output to file
destination d_file {
    file("/var/log/this_is_a_test");
};

# combining all of above in a log statement
log {
    source(s_net);
    destination(d_file);
};

Once syslog-ng is reloaded, it listens for log messages on port 514 and stores any incoming message to a file called /var/log/this_is_a_test. Below is a simple configuration file for the Turris Omnia, sending iptables
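With that configuration loaded, you can verify the listener before involving the router at all, using the logger tool from util-linux:

```shell
# Send a single test message over TCP to the syslog-ng listener...
logger --server 127.0.0.1 --port 514 --tcp "test message from logger"

# ...and confirm it arrived in the debug output file.
tail /var/log/this_is_a_test
```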


Monday
12 August, 2019



We tackled typography issues after receiving feedback from multiple users. Besides defining the font stack, we revisited the homepage, requests, projects and packages, looking for the following issues and fixing them: different font sizes (reducing the number of font sizes per page to at most 3), color contrasts which could be bad for people with visual issues, and the usage of small classes for buttons and other components if...


Sunday
11 August, 2019


Distribution | Forum          | Wiki           | Community | Membership | Bug Reporting      | Mailing List | Chat
MX Linux     | Yes            | Technical Only | No        | No         | Yes                | No           | No
Manjaro      | Yes            | Yes            | No        | No         | Forum Only         | Yes          | Yes
Mint         | Yes            | No             | Yes       | No         | Upstream or Github | No           | IRC
elementary   | Stack Exchange | No             | No        | No         | Yes                | No           | Slack
Ubuntu       | Yes            | Yes            | Yes       | Yes        | Yes                | Yes          | IRC
Debian       | Yes            | Yes            | Yes       | Yes        | Yes                | Yes          | IRC
Fedora       | Yes            | Yes            | Yes       | Yes        | Yes                | Yes          | IRC
Solus        | Yes            | No             | Yes       | No         | Yes                | No           | IRC
openSUSE     | Yes            | Yes            | Yes       | Yes        | Yes                | Yes          | IRC
Zorin        | Yes            | No             | No        | No         | Forum Only         | No           | No
deepin       | Yes            | Yes            | No        | No         | Yes                | Yes          | No
KDE neon     | Yes            | Yes            | Yes       | No         | Yes                | Yes          | IRC
CentOS       | Yes            | Yes            | Yes       | No         | Yes                | Yes          | IRC
ReactOS*     | Yes            | No             | Yes       | No         | Yes                | Yes          | Webchat
Arch         | Yes            | Yes            | Yes       | Yes        | Yes                | Yes          | Yes
ArcoLinux    | Yes            | No             | No        | No         | No                 | No           | Discord
Parrot       | Yes            | Debian Wiki    | No        | No         | Forum Only         | No           | IRC/Telegram
Kali         | Yes            | No             | Yes       | No         | Yes                | No           | IRC
PCLinuxOS    | Yes            | No             | No        | No         | Forum Only         | No           | IRC
Lite         | Yes            | No             | Yes       | Yes        | Yes                | No           | No

*All are Linux distributions except ReactOS

Column descriptions:

  • Distribution: Name of the distro
  • Forum: Is there a support message board?
  • Wiki: Is there a user-editable wiki?
  • Community: Are there any links where I can directly contribute to the project?
  • Membership: Can I become a voting member of the community?
  • Bug Reporting: Is there a way to report bugs that I find?
  • Mailing list: Is there an active mailing list for support, announcements, etc?
  • Chat: Is there a way to talk to other people in the community directly?

What is this list?

These are the top 20 active distributions according to distrowatch.org over the past 12 months.

Things that I learned:

Only well-funded, corporate-sponsored Linux distributions (Fedora, Ubuntu, openSUSE) have all categories checked. That doesn’t mean that everyone involved is getting paid. I believe it means that employees are probably the chief contributors, which in turn means there are more people putting in resources to help.

Some distributions are “Pat’s distribution”: Pat’s group owns it, and Pat doesn’t want a steering committee or any other say in how the distro works, though contributions in the form of bug reports may be accepted.

A few distributions “outsource” resources to other distributions. elementary lets Stack Exchange provide its forum. Parrot Linux refers users to the Debian wiki. Mint suggests that you file bug reports with the upstream provider unless it is a Mint-created application.

There are a few Linux distributions that leave me scratching my head. How is this in the top 20 distros on distrowatch? There’s nothing here and the forum, if there is one, is nearly empty. Who uses this?

What do you want from an open source project?

Do you want to donate your time, make friends, and really help make a Linux distribution grow? Look at Fedora, Ubuntu, openSUSE, or Arch. These communities have ways to help you



Recently KDE had an unfortunate event. Someone found a vulnerability in the code that processes .desktop and .directory files, through which an attacker could create a malicious file that causes shell command execution (analysis). They went for immediate, full disclosure, where KDE didn't even get a chance to fix the bug before it was published.

There are many protocols for disclosing vulnerabilities in a coordinated, responsible fashion, but the gist of them is this:

  1. Someone finds a vulnerability in some software through studying some code, or some other mechanism.

  2. They report the vulnerability to the software's author through some private channel. For free software in particular, researchers can use Openwall's recommended process for researchers, which includes notifying the author/maintainer and distros and security groups. Free software projects can follow a well-established process.

  3. The author and reporter agree on a deadline for releasing a public report of the vulnerability, or in semi-automated systems like Google Zero, a deadline is automatically established.

  4. The author works on fixing the vulnerability.

  5. The deadline is reached; the patch has been publicly released, the appropriate people have been notified, and systems have been patched. If there is no patch, the author and reporter can agree on postponing the date, or the reporter can publish the vulnerability report, thus creating public pressure for a fix.

The steps above gloss over many practicalities and issues from the real world, but the idea is basically this: the author or maintainer of the software is given a chance to fix a security bug before information on the vulnerability is released to the hostile world. The idea is to keep harm from being done by not publishing unpatched vulnerabilities until there is a fix for them (... or until the deadline expires).

What happened instead

Around the beginning of July, the reporter posts about looking for bugs in KDE.

On July 30, he posts a video with the proof of concept.

On August 3, he makes a Twitter poll about what to do with the vulnerability.

On August 4, he publishes the vulnerability.

KDE is left with having to patch this in emergency mode. On August 7, KDE releases a security advisory in perfect form:

  • Description of exactly what causes the vulnerability.

  • Description of how it was solved.

  • Instructions on what to do for users of various versions of KDE libraries.

  • Links to easy-to-cherry-pick patches for distro vendors.

Now, distro vendors are, in turn, in emergency mode, as they must apply the patch, run it through QA, release their own advisories, etc.

What if this had been done with coordinated disclosure?

The bug would have been fixed, probably in the same way, but it would not be in emergency mode. KDE's advisory contains this:

Thanks to Dominik Penner for finding and documenting this issue (we wish however that he would have contacted us before making the issue public) and to David Faure for the fix.

This is an extremely gracious way of thanking the reporter.

I am not


Friday
09 August, 2019



Dear Tumbleweed users and hackers,

As you certainly know, there are more snapshots tested than we release in the end.  In the last two weeks, for example, we tested 9 snapshots. Of those, only 4 made it to the mirrors and to you – the users. During the last two weeks, these were snapshots 0726, 0730, 0805 and 0806.

The most interesting changes in these snapshots were:

  • KDE Frameworks 5.60.0
  • KDE Plasma 5.16.3 & 5.16.4
  • Linux kernel 5.2.2, 5.2.3, 5.2.5
  • CFLAGS was extended by -Werror=return-type. In the past, this was (attempted to be) detected by brppost checks, but as that scanned build logs, there were too many ways for it to be missed, including gcc changing the wording.
  • Mesa 19.1.3
  • More than two months’ worth of YaST changes. Have you followed the YaST sprint reports?

Things currently being worked on in Stagings:

  • Replace ‘pkg-config’ with a more modern implementation, ‘pkgconf’. The change is supposed to be transparent once things are all worked out.
  • KDE Applications 19.08.0 (currently beta is staged)
  • LibreOffice 6.3
  • GLibc 2.30
  • Linux kernel 5.2.7
  • CMake 3.15.x

Tuesday
06 August, 2019



One of those distributions there is a lot of buzz about, and which I have mostly ignored for a significant number of years, is Zorin OS. I just shrugged my shoulders and kind of ignored its existence. None of the spoken or written selling points really stuck with me; like a warm springtime rain trickling off of a duck’s back, I ignored it.

I think that was a mistake.

Instead of just acting like I know something about it, I made the time to noodle around in this rather nice Linux distribution. My review on Zorin OS is from the perspective of a deeply entrenched, biased openSUSE user. I won’t pretend that this is going to be completely objective, as it absolutely is not. So take that for what it’s worth.

Bottom line up front and to give you a quick escape from the rest of this blathering, I was pleasantly surprised by the Zorin OS experience. It is a highly polished experience molded with the Gnome Desktop Environment. It is such a nicely customized and smooth experience, I had to check twice to verify that it was indeed Gnome I was using. Although I am exceptionally satisfied with using openSUSE Tumbleweed with the Plasma desktop, the finely crafted distribution gave me pause and much to think about. So much so, I had to think about some of my life decisions. This was such an incredibly seamless and pleasant experience and I could easily recommend this for anyone that is curious about Linux but doesn’t have a lot of technical experience. I would put this right up next to Mint as an approachable introduction to the Linux world.

Installation

The installation media can be acquired here, where I went for the “Free” edition called “Core”. I chose to run this in a virtual machine, as the scope of this evaluation is to test the ease of [basic] installation, how usable the interface is, and the [subjective] quality of the system tools.

The Core edition gives you three options. All of which are to Try or Install. For my case, I am choosing the top option which is simply, “Try or Install Zorin OS”.

The system boots with a very modern, almost futuristic font, simply displaying “Zorin.”

You are immediately greeted with two options, to “Try…” or to “Install…”. For my purposes, I have chosen to install Zorin OS. Following that choice, your next task is to set your keyboard layout and your preferences on Updates and other software.

Next you are to select the Installation type. Since this is a simple setup, I have chosen to erase the disk. You are given one sanity check before proceeding. Selecting Continue is essentially the point of no return.

After you have passed the point of no return, select your location and enter your user information and the hostname of the computer.

Following the final user-required input, the installation of Zorin OS 15 will commence


Jason Evans: Email Consolidation

06:55 UTC


I’ve got too many email addresses.

I have:

  • 2 for work
  • 1 alias for opensuse.org
  • 1 paid account with protonmail with 5 addresses shared in that account
  • 1 very old gmail account (I signed up the first day I heard about it).
  • 1 seznam account (Czech provider)
  • 1 installation of mail-in-a-box with 4 domains that I own but only one real account that I use
  • 1 librem.one account (this is a mistake and a disappointment)

The goal is to change all of the services, mailing lists, etc. that I use to point to a single email account, either directly or through aliases, so that all of my email is in one place, with the exception of my work email, which should always stay separate. Also, to get people to only email me at the one account.

to be continued…


Monday
05 August, 2019



For all you terminal graphics connoisseurs out there (there must be dozens of us!), I released Chafa 1.2.0 this weekend. Thanks to embedded copies of some parallel image scaling code and the quite excellent libnsgif, it’s faster and better in every way. What’s more, there are exciting new dithering knobs to further mangle refine your beautiful pictures. You can see what this stuff looks like in the gallery.
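A quick way to try the new knobs from the command line (the file name is a placeholder; option names are per the 1.2 release notes, so check chafa --help on your build):

```shell
# Render an image in the terminal with ordered dithering.
chafa --dither ordered photo.jpg

# Or with error-diffusion dithering, one of the new 1.2 options.
chafa --dither diffusion photo.jpg
```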

Included is also a Python program by Mo Zhou that uses k-means clustering to produce optimal glyph sets from training data. Neat!

Thanks to all the packagers, unsung heroes of the F/OSS world. Shoutouts go to Michael Vetter (openSUSE) and Guy Fleury Iteriteka (Guix) who got in touch with package info and installation instructions.

The full release notes are on GitHub.

What’s next

I’ve been asked about sixel support and some kind of interactive mode. I think both are in the cards… In the meantime, here’s a butterfly¹.

A very chafa butterfly

¹ Original via… Twitter? Tumblr? Imgur? Gfycat? I honestly can’t remember.


Friday
02 August, 2019



Contributors to the Uyuni Project have released version 4.0.2 of Uyuni, an open-source infrastructure management solution tailored for software-defined infrastructure.

Uyuni, a fork of the Spacewalk project that modernizes Spacewalk with SaltStack, provides more operating systems support and better scalability capabilities. Uyuni is now the upstream for SUSE Manager.

With this release, Uyuni provides powerful new features such as monitoring, content lifecycle management and virtual machine management.

Both the Uyuni Server node and the optional proxy nodes work on top of openSUSE Leap 15.1 and support Leap 15.1, CentOS, Ubuntu and others as clients. Debian support is experimental. The new version of Uyuni uses Salt 2019.2, Grafana 6.2.5, Cobbler 3.0 and Python 3.6 in the backend.

“The upgrade involves the complete replacement of the underlying operating system,” according to a post on July 9 by Hubert Mantel on Github. “This is a very critical operation and it is impossible to handle any potential failure in a graceful way. For example, an error during upgrade of the base OS might lead to a completely broken system which cannot be recovered.”

Given that the upgrade of Uyuni also involves upgrading the base operating system from Leap 42.3 to Leap 15.1, it is highly advisable to create a backup of the server before running the migration. If the Uyuni server is running in a virtual machine, it is recommended to take a snapshot of the machine before running the migration.

Migration is performed by first updating the susemanager package:

zypper ref && zypper in susemanager

Then run the migration script:

/usr/lib/susemanager/bin/server-migrator.sh

“This script will stop the services, subscribe the new software repositories and finally perform the actual update to the new version,” Mantel wrote on Github. “After successful migration, services will not be started automatically. The system needs to be rebooted and this will also re-start all the services. There is nothing additional the admin needs to do.”

The intention of the fork was to provide new inspiration to Spacewalk, which had been perceived as idling in recent years. Uyuni uses Salt for configuration management, and thereby inherits its name: Uyuni refers to the world’s largest salt flat, Salar de Uyuni in southwest Bolivia.

Interested users can follow the project at https://github.com/uyuni-project or www.uyuni-project.org, via Twitter at @UyuniProject, or by joining #uyuni on irc.freenode.org.


Thursday
01 August, 2019


face

There have been three openSUSE Tumbleweed snapshots released since last week.

The snapshots brought a single major version update and new versions of KDE’s Plasma and Frameworks.

ImageMagick’s 7.0.8.56 version arrived in snapshot 20190730 and added support for the TIM2 image format, which is commonly used in PlayStation 2 and sometimes in PlayStation Portable games. The snapshot also delivered an update of the Mesa 3D Graphics Library to version 19.1.3, which mostly provided fixes for the ANV and RADV drivers, as well as NIR backend fixes. File-searching tool catfish 1.4.8 provided some fixes for directories and a fix for running on Wayland. The GNU Compiler Collection 7 added a patch and a fix for the Link Time Optimization (LTO) linker plugin. glu 9.0.1, the OpenGL Utility library for Mesa, fixed a possible memory leak. The Linux Kernel was updated to 5.2.3; the new version made a few fixes for PowerPC and added Bluetooth support for some new devices. Several Python packages were updated in the snapshot. LLVM tools and libraries were updated in Tumbleweed with llvm8 8.0.1, but the changelog states not to run LLVM tests on PowerPC because of sporadic hangs. The 2.4.7 version of openvpn in the snapshot added support for tls-ciphersuites for TLS 1.3 and updated openvpn.keyring with the public key downloaded from https://swupdate.openvpn.net/community/keys/security-key-2019.asc. A lengthy list of fixes was made to the VIM text editor in version 8.1.1741. Other packages updated in the snapshot were ucode-intel 20190618, xapps 1.4.8, ypbind 2.6.1 and zstd 1.4.1. The snapshot is trending as moderately stable with a rating of 79, according to the Tumbleweed snapshot reviewer.

KDE’s Frameworks and Plasma were updated in the 20190726 snapshot. Frameworks 5.60.0 had multiple fixes for KTextEditor, KWayland, KIO and Baloo. The new version requires Qt 5.11 now that Qt 5.13 has been released. Plasma 5.16.3 added new translations and fixes, including a fix for compilation without libinput, an improved appearance and reduced memory consumption in the Plasma Audio Volume Control. There was a major version update for checkmedia to version 5.2, which fixed a compat issue with older GCC. The new major version also allows setting a specific GPG key for signature verification. GNOME’s bijiben updated to version 3.32.2, and the update of curl to 7.65.3 fixed several bugs and made the progress meter appear again. A Common Vulnerabilities and Exposures (CVE) entry that could allow remote attackers to execute other programs with root privileges was fixed in the message transfer agent exim 4.92.1. The 11.0.4.0 version of java-11-openjdk also fixed several CVEs and cleaned up the sources and code. Phonon, which is the multimedia Application Programming Interface (API) for KDE, removed the QFOREACH function in the headers when building for Qt 5 in version


Monday
29 July, 2019


face

Back in 2003, my dad was taken seriously ill when his aorta tore. Fortunately, he was taken to the hospital in time where they repaired the damage and inserted a stent. Little did anyone know, this was going to be the start of him steadily being poisoned by his own systems. Fast forward to 2019, … Continue reading "Dad’s ticking time bomb"


face

Unifying the Console Keyboard Layouts for SLE and openSUSE

The way of managing internationalization in Linux systems has changed through the years, as well as the technologies used to represent the different alphabets and characters used in every language. YaST tries to offer a centralized way of managing the system-wide settings in that regard. An apparently simple action like changing the language in the YaST interface implies many aspects like setting the font and the keyboard map to be used in the text-based consoles, doing the same for the graphical X11 environment and keeping those fonts and keyboard maps in sync, ensuring the compatibility between all the pieces.

For that purpose, YaST maintains a list with all the correspondences between keyboard layouts and their corresponding "keymap" files living under /usr/share/kbd/keymaps. Some time ago the content of that list diverged between openSUSE and SLE-based products. During this sprint we took the opportunity to analyze the situation and try to unify the criteria in that regard.

We analyzed the status and origin of all the keymap files used in both families of distributions (you can see some rather comprehensive research starting in comment #18 of bug#1124921) and we came to the following conclusions:

  • The openSUSE list needed some minor adjustments.
  • Leaving that aside, the keymaps used in openSUSE were in general a better option because they are more modern and aligned with current upstream development.

So we decided to unify all systems to adopt the openSUSE approach. That will have basically no impact on our openSUSE users but may have some implications for users installing the upcoming SLE-15-SP2. In any case, we hope the change will be for the better in most cases. Time will tell.

Exporting User Defined Repositories to AutoYaST Configuration File.

With the call yast clone_system, an AutoYaST configuration file is generated that reflects the state of the running system. Up to now, only SUSE add-ons were covered by the AutoYaST configuration module. Now user-defined repositories are also exported, in their own subsection <add_on_others> of the <add-on> section.

<add-on>
  <add_on_others config:type="list">
    <listentry>
      <alias>yast_head</alias>
      <media_url>https://download.opensuse.org/repositories/YaST:/Head/openSUSE_Leap_15.1/</media_url>
      <name>Yast head</name>
      <priority config:type="integer">99</priority>
      <product_dir>/</product_dir>
    </listentry>
  </add_on_others>
  <add_on_products config:type="list">
    <listentry>
      <media_url>dvd:/?devices=/dev/sr1</media_url>
      <product>sle-module-desktop-applications</product>
      <product_dir>/Module-Desktop-Applications</product_dir>
    </listentry>
    <listentry>
      <media_url>dvd:/?devices=/dev/sr1</media_url>
      <product>sle-module-basesystem</product>
      <product_dir>/Module-Basesystem</product_dir>
    </listentry>
  </add_on_products>
</add-on>

The format of the <add_on_others> section is the same as the <add_on_products> section.

Better Handling of Broken Bootloader Setups during Upgrade

With the current versions of SLE and openSUSE, using the installation media to upgrade a system which contains a badly broken GRUB2 configuration (e.g. one containing references to udev links that no longer exist) can result in an ugly internal error during the process.

The


Saturday
27 July, 2019


face

I don’t know when I started writing a lot of bash scripts. It just seemed to happen over the last few years, possibly because bash is pretty much universally available, even on newly installed Linux systems.

Despite its benefits, one of the things I really hate about bash is logging. Rolling my own timestamps can be a real PITA, but they’re so useful that I can’t live without them (optimising software performance for a living gives you a rather unhealthy obsession with how long things take). Every script I write ends up using a different timestamp format because I just can’t seem to remember the date command I used last.

At least, that was the old me. The new me has discovered the perfect tool: ts from the moreutils package.

ts prepends a timestamp to each line it receives on stdin. Adding a timestamp to your log messages is as simple as:

$ echo bar | ts
Jul 28 22:27:51 bar

You can also specify a strftime(3) compatible format:

$ echo bar | ts "[%F %H:%M:%S]"
[2019-07-28 22:34:48] bar

But wait, there’s more! If a simple way to print timestamps wasn’t enough, ts can also parse existing timestamps in the input line (by feeding your ts-tagged logs back into ts) and prepend an additional timestamp with cumulative and relative times between consecutive lines.

This is fantastic for answering two questions:

  1. When was a log message printed relative to the start of the program? (-s)

  2. When was a log message printed relative to the previous line? (-i)

The -s option tells you how long it took to reach a certain point of your bash script, and the -i option helps you see which parts of your script are taking the most time.

$ cat sleeper.sh && ./sleeper.sh 
#!/bin/bash
FMT="[%H:%M:%S]"
for i in 1 2 3; do
	echo "Step $i"
	sleep $((i*10))
done | ts $FMT | ts -s $FMT | ts -i "[+%s]" | awk '
BEGIN { print "Timestamp | Runtime | Delta";
	print "---------------------------" }

{
	# ts(1) only prepends. Rearrange the timestamps.

	printf "%s %s %s ", $3, $2, $1;

	for (i=4; i <= NF; i++) {
		printf "%s ", $i;
	}

	printf "\n";
}'
Timestamp | Runtime | Delta
---------------------------
[23:15:22] [00:00:00] [+0] Step 1 
[23:15:32] [00:00:10] [+10] Step 2 
[23:15:52] [00:00:30] [+20] Step 3 
$

Beat that, hand-rolled date timestamps.
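If moreutils isn’t available, a rough stand-in for ts -i can be cobbled together from date(1) alone. This is just a sketch, not a drop-in replacement (it only has one-second resolution and none of ts’s format options):

```shell
# Prepend a relative timestamp (seconds elapsed since the previous line)
# to each line of stdin, roughly like `ts -i`.
reltime() {
	prev=$(date +%s)
	while IFS= read -r line; do
		now=$(date +%s)
		printf '[+%d] %s\n' $((now - prev)) "$line"
		prev=$now
	done
}

printf 'Step 1\nStep 2\n' | reltime
```

Since both demo lines arrive instantly, both print with a [+0] prefix; piping the output of a real long-running script through reltime shows the per-line deltas.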


Friday
26 July, 2019


face

Dear Tumbleweed users and hackers

During week 2019/30, I tried to slow down the check-in cadence a little bit, in order to allow the ARM build system to catch up. Especially the full rebuild with LTO enabled caused a much higher load on the ARM workers than would be normal. And I am happy to announce that on the morning of July 26, an ARM-based snapshot made it to QA. Usually, I don’t talk about architectures, and ‘just’ focus on x86_64, but there was some team effort to also bring aarch64 back into shape.

As for the ‘regular’ x86_64 Tumbleweed snapshots, there were 4 of them released during this week: 0718, 0721, 0723 and 0724, containing those updates:

  • gcc 9.1.1, with some more LTO-specific fixes
  • dracut 049
  • Linux kernel 5.2.1
  • Mozilla Firefox 68.0.1
  • squid 4.8

What can you expect to come your way in the next days/weeks?

  • KDE Frameworks 5.60.0
  • KDE Plasma 5.16.3
  • Addition of “-Werror=return-type” to CFLAGS for all builds
  • Almost two months worth of YaST changes/fixes/updates
  • Mesa 19.1.3

 

 


Thursday
25 July, 2019


face

One thing one can do in this amazing summer heat is cut the 0.24 release of desktop-file-utils. It’s rather a small thing, but since the last few releases have been happening at roughly three-year intervals, I felt it merited a quick post.

Changes since 0.23

desktop-file-validate

  • Allow desktop file spec version 1.2 (Severin Glöckner).
  • Add Budgie, Deepin, Enlightenment and Pantheon to list of registered desktop environments (fdo#10, fdo#11, fdo#16, oldfdo#97385) (Ikey Doherty, sensor.wen, Joonas Niilola, David Faure).

update-desktop-database

  • Sort output lines internally to conserve reproducibility (fdo#12) (Chris Lamb).
  • Use pledge(2) on OpenBSD to limit capabilities (fdo#13) (Jasper Lievisse Adriaanse).

Common

  • Fix missing ; when appending to a list not ending with one (oldfdo#97388) (Pascal Terjan).
  • Add font as valid media type (Matthias Clasen).
  • Fix broken emacs blocking compile (fdo#15) (Hans Petter Jansson, reported by John).

About

desktop-file-utils contains command line utilities for working with desktop entries:

  • desktop-file-validate: Validates a desktop file according to the desktop entry specification.
  • desktop-file-install: Installs a desktop file to the applications directory after applying optional modifications.
  • update-desktop-database: Updates the database containing a cache of MIME types handled by desktop files.

Thanks to everyone who contributed to this release! And an extra big thanks to Daniel Stone for his patient freedesktop.org support.


face

Two openSUSE Tumbleweed snapshots have been released since our last Tumbleweed update on Saturday.

The most recent snapshot, 20190723, updated Mozilla Firefox to version 68.0.1. The browser fixed the missing Full-Screen button when watching videos in full screen mode on HBO GO. The new 68 version enhanced the Dark Mode reader view to include darkening the controls, sidebars and toolbars. It also addressed several Common Vulnerabilities and Exposures (CVE). The snapshot provided an update to GNOME 3.32.4, which fixed an issue that led to some packages with multiple appdata files not correctly showing up on the updates page. The Guile programming language package update to 2.2.6 fixed a regression introduced in the previous version that broke HTTP servers' locale encoding. Hardware library hwinfo 21.67 fixed Direct Access Storage Devices (DASD) detection. A major 7.0 version of hylafax+ arrived in the snapshot. The Linux Kernel brought several new features with the 5.2.1 kernel and enhanced security for a hardware vulnerability affecting Intel processors. The open-source painting program Krita 4.2.3 offered a variety of fixes, including a copy-and-paste fix for animation frames. A few libraries like libgphoto2, libuv and libva received updates. There were also several Perl and Rubygem packages that were updated in the snapshot. The file manager for the Xfce Desktop Environment, thunar 1.8.8, fixed the XML declaration in uca.xml, and transactional-update 2.15 enabled the network during updates and allowed updates of the bootloader on EFI systems. The snapshot is currently trending at a 93 rating, according to the Tumbleweed snapshot reviewer.

Among the top packages to update in snapshot 20190721 were gnome-builder 3.32.4, wireshark 3.0.3 and an update for GNU Compiler Collection 9. GNOME Builder fixed the initial selection in project-tree popovers, Wireshark fixed CVE-2019-13619 and GCC9 added a patch to provide more stable builds for single value counters. The dracut package updated from 044.2 to 049; this update removed several patches and added support for compressed kernel modules. The Distributed Replicated Block Device (drbd) 9.0.19 package fixed a resync stuck at near completion and introduced the allow-remote-read configuration option. GNOME’s personal information management application evolution updated to version 3.32.4, which added an [ECompEditor] to ensure attendee changes are stored before saving. GNOME’s Grilo, which is a framework focused on making media discovery and browsing easy for application developers, was updated to 0.3.9, which fixed core keys extraction. GNOME’s Virtual file system (gvfs) and programming language Vala were updated to versions 1.40.2 and 0.44.6 respectively. Krita was also updated in this snapshot. The 0.5.1 version of python-parso fixed some unicode identifiers that were not correctly tokenized. The snapshot is currently trending at a 90 rating, according to the Tumbleweed snapshot reviewer.


Wednesday
24 July, 2019


Federico Mena-Quintero: Constructors

18:59 UTCmember

face

Have you ever had these annoyances with GObject-style constructors?

  • From a constructor, calling a method on a partially-constructed object is dangerous.

  • A constructor needs to set up "not quite initialized" values in the instance struct until a construct-time property is set.

  • You actually need to override GObjectClass::constructed (or was it ::constructor?) to take care of construct-only properties which need to be considered together, not individually.

  • Constructors can't report an error, unless you derive from GInitable, which is not in gobject, but in gio instead. (Also, why does that force the constructor to take a GCancellable...?)

  • You need more than one constructor, but that needs to be done with helper functions.

This article, Perils of Constructors, explains all of these problems very well. It is not centered on GObject, but rather on constructors in object-oriented languages in general.

(Spoiler: Rust does not have constructors or partially-initialized structs, so these problems don't really exist there.)

(Addendum: that makes it somewhat awkward to convert GObject code in C to Rust, but librsvg was able to solve it nicely with <buzzword>the typestate pattern</buzzword>.)


Tuesday
23 July, 2019


face

Backup-02

The lack of data security is something that has recently affected some municipal governments in a negative way. Atlanta was attacked with ransomware in 2018, and the attackers demanded $51,000 before they would unlock the city's data. Baltimore was hit a second time this past May [2019]. I am not a security expert, but in my non-expert opinion, just keeping regular backups of your data would have prevented needing to pay a ransom to get your data back. It would also help to run openSUSE Linux, or one of the many other Linux options on the desktop, to reduce the impact of user-induced damage from wayward link-clicking.

If you are interested in keeping your personal data “safe,” offline backups are an absolute requirement. Relying only on Google Drive, Dropbox, Nextcloud or whatever it may be is just not adequate. Those are synchronization solutions and can be part of your data-safekeeping strategy, but not the entirety of it.

I have been using Back In Time as my backup strategy. In that time, I have only had to restore a backup once, and that was an elective procedure. Back In Time is great because it is a Qt-based application, so it looks good in KDE Plasma.

Installation

For openSUSE users, getting the software is an easy task. The point and click method can be done here:

https://software.opensuse.org/package/backintime-qt

The more fun and engaging method would be to open a terminal and run:

sudo zypper install backintime-qt

It is, after all, in the main openSUSE repository and not playing in the terminal when the opportunity presents itself is a missed opportunity.

How it has been going

Since this is a retrospective on using Back In Time, you can find more about usage and other options for backing up your system here. I am not going to claim that I was 100% disciplined about performing weekly backups like I suggested. The sad reality is, I got busy and sometimes it was every other week… I may have forgotten to do it entirely in April… but for the most part, I was pretty good about keeping my system backed up.

Back In Time is really quite easy to use: it is as simple as connecting a specially designated USB drive to my computer and then starting Back In Time. Yes, in that order, because if I don't, I get a rather angry message.

BackInTime 04-Snapshots folder.png

Something else you have to do is either manually or automatically remove old snapshots. I didn’t pay attention and some of the snapshots completed “WITH ERRORS!” I am sharing this as a cautionary tale to pay closer attention to your backup medium, whatever that may be, to ensure you have enough space.
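For the manual route, old snapshot directories can be pruned with plain find(1). Back In Time has its own auto-remove settings; this is just a generic sketch using a throwaway demo directory in place of a real backup drive:

```shell
# Demo setup: one "old" and one "fresh" snapshot directory
mkdir -p /tmp/prune-demo/2019-01-01 /tmp/prune-demo/2019-08-10
touch -d '2019-01-01' /tmp/prune-demo/2019-01-01

# Remove snapshot directories not modified within the last 90 days
find /tmp/prune-demo -mindepth 1 -maxdepth 1 -type d -mtime +90 -exec rm -rf {} +

ls /tmp/prune-demo
```

Only the fresh directory survives. On a real backup drive, the path and the age cutoff would of course be your own.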

From there, all I would have to do is click the Save Snapshots Icon.

BackInTime 05-Take Snapshot Icon-box

The application will evaluate the last snapshot against your filesystem and create an incremental snapshot. The first snapshot is the most time consuming, the subsequent snapshots don


Monday
22 July, 2019


Michael Meeks: 2019-07-22 Monday.

21:00 UTCmember

face
  • Mail chew; project mgmt, meetings, code review, status chasing, sync with Kendy & Miklos. Plugged away at input until late, very hot. Bed.

Sunday
21 July, 2019


Michael Meeks: 2019-07-21 Sunday.

21:00 UTCmember

face
  • J's birthday, presents at breakfast; off to All Saints - violin. Becky & Steve kindly had us all, plus Markus & Zoe over for a fine roast turkey lunch. Good to catch up with Chris & his lady.
  • Home, slugged, read Dirk Gently to the family,

Saturday
20 July, 2019


face

Bodhi review title.png

Linux is a fun thing, and trying out other distributions can result in a myriad of experiences. Some distributions concentrate on the user experience, others mostly on the technical underpinnings. Some distributions add their own feel while others minimize their modifications. I am a long-time openSUSE user and am perfectly content with all that it has to offer, not just as a distribution but as a project in its totality. As a part of the Big Daddy Linux Community, there is an optional weekly challenge to try out a Linux distribution. My process for this is to put it in a VM first and then go to “bare metal” for further testing if my initial experience is compelling enough and I have the time.

The latest challenge is Bodhi Linux, which is built on Ubuntu 18.04 LTS but targets machines with fewer resources. The Bodhi Linux Project offers forums for help and advice, a wiki to help with configuring the system, and a live chat through Discord to get help or just get to know members of the community. Unfortunately, I didn’t notice any IRC options. I downloaded the ISO from here. There are a few different options from which to choose. I went with the “AppPack” ISO as it has more applications bundled in it. For more information on choosing the correct ISO for you, see here.

Bottom Line Up Front, Bodhi Linux is well put together and the Moksha Desktop is a crisp, low resource, animated (almost excessively) environment that is worthy of giving it a spin. This distribution is certainly worth the time, especially if you have an older system you want to keep going a little longer. The Moksha Desktop looks good and is more functional than GNOME so that is already a leg up on many distributions.

Installation

When you first spin it up, you are greeted with the typical boot menu you would get from Linux media. My only complaint here is that it doesn’t have an install option. That is always my preferred option.

Bodhi Linux 1 Live Installer

Booting from the Live Media was pretty rapid. The default desktop was clean and themed correctly, dark. I didn’t even see a light option so well done there! All the icons and menus lend themselves nicely for a dark themed desktop.

The Welcome Screen is nothing more than a local HTML file of places to go to get started using Bodhi Linux. You are almost immediately greeted with the notice that you are not running the latest Enlightenment. I know that this desktop, Moksha, is forked from Enlightenment, so it is just odd that I would see “Enlightenment” there.

Since I wanted to play with this distribution and do things with it, I needed to install it. Although I prefer being able to install out of the gate, I can get along with the Live process well enough. My only issue was, I didn’t know where the installation launcher was to


Michael Meeks: 2019-07-20 Saturday.

21:00 UTCmember

face
  • Worked in the car on the way to Bruce & Annes; had a fine time with them, relaxed, slugged, pleasant walk along the Aldeburgh sea-front.
  • Setup Anne's new Alexa Show - an interesting device - E. very amused by its jokes - eleven seems an ideal age for stickiness there.

face
desk lamp with mirror behind

Some time ago, I wanted to make my own desk lamp. It should provide soft, bright task lighting above my desk, no sharp shadows that could cover part of my work area, but also some atmospheric lighting around the desk in my basement office. The lamp should have a natural look around it, but since I made it myself, I also didn’t mind exposing some of its internals.

SMD5050 LED strips

I had oak floor boards lying around that I got from a friend (thanks, Wendy!), which I used as the base material for the lamp. I combined these with some RGBW LED strips that I had lying around, and a wireless controller that would allow me to connect the lamp to the Philips Hue lighting system that I use throughout the house to control the lights. I sanded the wood until it was completely smooth, and then gave it an oil finish to make it durable and give it a more pronounced texture.

Fixed to the ceiling
Internals of the desk lamp

The center board is covered in 0.5mm aluminium sheets to dissipate heat from the LED strips (again, making them last longer) and provide some extra diffusion of the light. This material is easy to work with, and also very suitable to stick the LED strips to. For the light itself, I used SMD5050 LED strips that can produce warm and cold white light, as well as RGB colors. I put 3 rows of strips next to each other to provide enough light. The strips wrap around at the top, so light is not just shining down on my desk, but also reflecting from walls and ceiling around it. The front and back are another piece of wood to avoid looking directly into the LEDs, which would be distracting, annoying when working, and also quite ugly. I attached a front and back board as well to the lamp, making it into an H shape.

Light reflects nicely from surrounding surfaces

The controller (a Gledopto Z-Wave controller, which is compatible with Philips Hue) is attached to the center board as well, so I just needed to run 2 12V wires to the lamp. I was being a bit creative here, and thought “why not use the power cables also to have the lamp hanging from the ceiling?”. I used coated steel wire, which I stripped here and there to have power run through steel hooks screwed into the ceiling to supply the lamp with power while also being able to adjust its height. This ended up creating a rather clean look for the whole lamp and really brought the whole thing together.

Older blog entries ->