### Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

## Wednesday 20 March, 2019

The translation-finder module has been released in version 1.1. It is used by Weblate to detect translatable files in a repository, making setup of translation components in Weblate much easier. This release brings a lot of improvements based on feedback from our users, making the detection more reliable and accurate.

Full list of changes:

• Improved detection of translation with full language code.
• Improved detection of language code in directory and file name.
• Improved detection of language code separated by full stop.
• Added detection for app store metadata files.
• Added detection for JSON files.
• Ignore symlinks during discovery.
• Improved detection of matching pot files in several corner cases.
• Improved detection of monolingual Gettext.

Filed under: Debian English SUSE Weblate

## Saturday 16 March, 2019

Password managers are all the rage these days, I guess… I haven’t ever been compelled to try one, as the password manager I have been using, my shoddy memory, has been working alright for me. The reality is, I have a lot more passwords to remember now, and for those passwords I don’t use as frequently, I have to guess a few times before I get it… and that is just not a good look.

I have heard rave reviews about several different password manager solutions, but each time I waited to hear more and was scared off. Recently, though, the rumble around Bitwarden, an open source solution that is free with a premium paid option, came to my awareness. The option to roll your own is a huge deal for me, even if I don’t actually ever roll my own server.

## Installation

Bitwarden has several options for installation. I selected to download the AppImage. It should be noted that your organization may vary, but I have a designated AppImage folder for all my AppImages. Once you download it, make sure it is executable: using Dolphin or your favorite file manager, open the file’s properties and mark it executable.
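If you prefer the terminal to a file manager, the same can be done with chmod (the folder and file name below are just my convention and an assumption; adjust to wherever you saved the download):

```shell
# Mark the downloaded AppImage as executable (example path)
chmod +x ~/AppImages/Bitwarden-*.AppImage
```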

It should be noted that you can download Bitwarden for Windows and Mac OS as well, not that those matter as much. There are Deb, RPM and Snap options too, if you so choose, but note that the Deb and RPM packages don’t have the ability to auto-update.

I installed the Firefox extension so that I could use Bitwarden in a more “seamless” fashion. If I could install Bitwarden on Falkon, I would, but at this time I am not sure how that would be accomplished. Supposedly there is some QML thing in the works, but for now it is not obvious to me.

It should be noted that Firefox gives you a couple of ways to use it: a sidebar and a drop-down tool. I prefer the drop-down tool, as the sidebar isn’t as easily turned on and off.

## Features

The most common way to use a password manager is automatically, through a browser plugin. On the different sites where I tested it out, it works well: when I input a password, I was asked whether I wanted Bitwarden to store the login information, and upon returning to that site it did indeed work as expected.

An interesting bonus is that you can add any number of notes to a saved password. You could perhaps put the other related notes about your password, or maybe not even have your password at all but a series of hints about your password if you are so paranoid.

Manual password entry: since I often use Falkon instead of Firefox or Chrome and there is no Bitwarden browser extension available for it, I will use Bitwarden in standalone mode and do

## Friday 15 March, 2019

After the librsvg team finished the rustification of librsvg's main library, I wanted to start porting the high-level test suite to Rust. This is mainly to be able to run tests in parallel, which cargo test does automatically in order to reduce test times. However, this meant that librsvg needed a Rust API that would exercise the same code paths as the C entry points.

At the same time, I wanted the Rust API to make it impossible to misuse the library. From the viewpoint of the C API, an RsvgHandle has different stages:

• Just initialized
• Loaded, or in an error state after a failed load
• Ready to render

To ensure consistency, the public API checks that you cannot render an RsvgHandle that is not completely loaded yet, or one that resulted in a loading error. But wouldn't it be nice if it were impossible to call the API functions in the wrong order?

This is exactly what the Rust API does. There is a Loader, to which you give a filename or a stream, and it will return a fully-loaded SvgHandle or an error. Then, you can only create a CairoRenderer if you have an SvgHandle.

For historical reasons, the C API in librsvg is not perfectly consistent. For example, some functions which return an error will actually return a proper GError, but some others will just return a gboolean with no further explanation of what went wrong. In contrast, all the Rust API functions that can fail will actually return a Result, and the error case will have a meaningful error value. In the Rust API, there is no "wrong order" in which the various API functions and methods can be called; it tries to follow the whole "make invalid states unrepresentable" principle.

To implement the Rust API, I had to do some refactoring of the internals that hook to the public entry points. This made me realize that librsvg could be a lot easier to use. The C API has always forced you to call it in this fashion:

1. Ask the SVG for its dimensions, or how big it is.
2. Based on that, scale your Cairo context to the size you actually want.
3. Render the SVG to that context's current transformation matrix.

But first, (1) gives you inadequate information because rsvg_handle_get_dimensions() returns a structure with int fields for the width and height. The API is similar to gdk-pixbuf's in that it always wants to think in whole pixels. However, an SVG is not necessarily integer-sized.

Then, (2) forces you to calculate some geometry in almost all cases, as most apps want to render SVG content scaled proportionally to a certain size. This is not hard to do, but it's an inconvenience.

## SVG dimensions

Let's look at (1) again. The question, "how big is the SVG" is a bit meaningless when we consider that SVGs can be scaled to any size; that's the whole point of them!

When you ask RsvgHandle …

## Notes:

The following is a proof of concept tutorial on how to create a Tor onion service on Windows 10 using Ubuntu in the Windows Subsystem for Linux. This has not been security tested by anyone in the Tor project. These are also not exactly the directions that I would give someone who wants to create an onion service in Linux, mainly because WSfL doesn’t use systemd the way it is meant to be used natively. Instead you have to start system daemons using the old SysV method with /etc/init.d/. Also, services do not continue running after the window has been closed. If someone can find a workaround for that, I’ll gladly update this tutorial.

As for Apache and Tor, they seem to be working normally as long as the Ubuntu window is not closed. The default path for the Apache web server is: /var/www/html/index.html. More info on how to build a website with Apache can be found all over the web.

## On with the tutorial!

First, open PowerShell (64-bit) as administrator:

Enable Windows Subsystem for Linux and reboot:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

After rebooting, go to the Microsoft Store and search for Ubuntu 18.04 LTS

Install the App but beware that you will be forced to sign in with a Microsoft account.

Open the new app. You will be prompted to create a local Linux account. This will not be tied to anything else. It is only for your computer.

Update the packages in Ubuntu to the latest versions:

sudo apt update -y && sudo apt upgrade -y

Install Tor and Apache:

sudo apt install apache2 tor -y

Edit the tor configuration:

sudo nano /etc/tor/torrc

Remove the # signs from before the following lines:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80


Save by hitting CTRL-x
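If you would rather script the change than edit it by hand, sed can strip the comment markers. This is just a sketch that assumes the stock Ubuntu torrc contains those two commented lines verbatim, so inspect the file afterwards:

```shell
# Uncomment the two hidden service lines in the tor configuration
sudo sed -i \
  -e 's|^#HiddenServiceDir /var/lib/tor/hidden_service/|HiddenServiceDir /var/lib/tor/hidden_service/|' \
  -e 's|^#HiddenServicePort 80 127.0.0.1:80|HiddenServicePort 80 127.0.0.1:80|' \
  /etc/tor/torrc

# Verify the result
grep '^HiddenService' /etc/tor/torrc
```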

Start apache and then tor:

sudo /etc/init.d/apache2 start && sudo /etc/init.d/tor start

Get your new .onion site URL:

sudo cat /var/lib/tor/hidden_service/hostname

Try your new onion service in Tor Browser!

Dear Tumbleweed users and hackers,

During week 11 we tested a total of 6 snapshots, of which only 3 were good enough to be distributed to the user base (well, not really: only one was discarded for quality reasons; two were discarded because some font changes in the installer made a lot of install tests fail in openQA’s view).

The published snapshots were: 0310, 0312 and 0314 containing these updates:

• NodeJS 10.5.2, including the switch to make nodejs10 the new default for Tumbleweed, replacing nodejs8
• KDE Applications 18.12.3
• Libvirt 5.1.0
• PostgreSQL 11
• coreutils 8.31
• Linux kernel 5.0.1
• LibreOffice 6.2.2.1
• SQLite 3.27.2
• systemd 241 (note: after upgrading from 239, the immediate reboot will be quite slow; this should be a one-off and will not recur once the system has booted with 241)
• boost 1.69

You can look forward to those things currently being tested in our staging areas:

• KDE Plasma 5.15.3
• Mesa 19.0
• Linux kernel 5.0.2
• openSSL 1.1.1 – The blocker is, unchanged, nodejs (despite the move to nodejs10)

Dear Tumbleweed users and hackers,

Week 10 – the third week in a row with 4 published snapshots. This feels like a good pace. The 4 snapshots published have been 0301, 0305, 0306 and 0307.

The changes in those snapshots contained:

• Virtualbox 6.0.4
• curl 7.64
• KDE Plasma 5.15.2
• Linux kernel 4.20.12 & 4.20.13
• rust 1.32
• PostgreSQL 10.7
• LibreOffice 6.2.1.2: with KDE Frameworks 5 integration

That’s almost everything that was on the promised list of last week – almost! And of course the list has been extended for you. So currently in Staging, we have:

• openSSL 1.1.1 – The blocker is, unchanged, nodejs
• Move of the default nodejs version from 8 to 10
• projectM 3.x – finally based on Qt5 (our packages were ahead of the time, so no major change for the users)
• systemd 241
• bash: implement a change to have /bin/sh controlled by update-alternatives
• Linux kernel 5.0
• PostgreSQL 11
• Boost 1.69

Welcome to the extended version of 𝐈𝐓’𝐒 𝐅𝐑𝐈𝐃𝐀𝐘! Here we will try and expand on the mentions, so there’s a bit more context. Also, we will try and include Steemit stuff. There’s no Twitter or Facebook coz we’re not big fans of those two services. 🤣 𝐈𝐓’𝐒 𝐅𝐑𝐈𝐃𝐀𝐘! I hope you’ll all join me in … Continue reading "It’s Friday 15 March 2019"

## Thursday 14 March, 2019

As (open)SUSE releases are approaching, the YaST team is basically in bug squashing mode. However, we are still adding some missing bits, like the bcache support for AutoYaST. Additionally, there are some interesting improvements we would like to let you know about:

• AutoYaST support for using Btrfs subvolumes as user home directories.
• Improved Certificates management in the registration module.
• Correct detection of DASDs when using virtio-blk.
• Proper handling of the resume option in the bootloader module.
• Display fonts and icons properly during installation.

And, as a bonus, some insights about a YaST font scaling problem on the GNOME desktop (spoiler: not a YaST bug at all).

### Adding bcache support to AutoYaST

A few days ago, support for bcache landed in the YaST Partitioner. In a nutshell, bcache is a caching system that improves the performance of a big but slow disk (the so-called backing device) by using a faster, smaller disk (the caching device).

The way to describe a bcache in AutoYaST is pretty similar to how a RAID or a LVM Volume Group is described. On one hand, you need to specify which devices are going to be used as backing and caching devices by setting bcache_backing_for and bcache_caching_for elements. And, on the other hand, you need to describe the layout of the bcache device itself. As you would do for a RAID, you can partition the device or use it as a filesystem.

The example below creates a bcache device (called /dev/bcache0) using /dev/sda to speed up the access to /dev/sdb.

<partitioning config:type="list">
<drive>
<type config:type="symbol">CT_DISK</type>
<device>/dev/sda</device>
<disklabel>msdos</disklabel>
<use>all</use>
<partitions config:type="list">
<partition>
<!-- It can serve as caching device for several bcaches -->
<bcache_caching_for config:type="list">
<listentry>/dev/bcache0</listentry>
</bcache_caching_for>
<size>max</size>
</partition>
</partitions>
</drive>

<drive>
<type config:type="symbol">CT_DISK</type>
<device>/dev/sdb</device>
<use>all</use>
<!-- <disklabel>none</disklabel> -->
<disklabel>msdos</disklabel>
<partitions config:type="list">
<partition>
<!-- It can serve as backing device just for one bcache -->
<bcache_backing_for>/dev/bcache0</bcache_backing_for>
</partition>
</partitions>
</drive>

<drive>
<type config:type="symbol">CT_BCACHE</type>
<device>/dev/bcache0</device>
<bcache_options>
<cache_mode>writethrough</cache_mode>
</bcache_options>
<use>all</use>
<partitions config:type="list">
<partition>
<mount>/data</mount>
<size>20GiB</size>
</partition>
<partition>
<mount>swap</mount>
<filesystem config:type="symbol">swap</filesystem>
<size>1GiB</size>
</partition>
</partitions>
</drive>
</partitioning>


### Using Btrfs Subvolumes as User Home Directories in AutoYaST

In our last report we presented a new feature to allow using Btrfs subvolumes as users’ home directories. However, the AutoYaST support for that feature was simply missing.

Now you can use the home_btrfs_subvolume element to control whether a Btrfs subvolume should be used as the home directory.

<user>
<encrypted config:type="boolean">false</encrypted>
<home_btrfs_subvolume config:type="boolean">true</home_btrfs_subvolume>
<fullname>test user</fullname>
<gid>100</gid>
<home>/home/test</home>
<shell>/bin/bash</shell>
<uid>1003</uid>
</user>


### Tuning the Bootloader’s resume parameter

The resume parameter

## Sunday 10 March, 2019

Weblate 3.5.1 has been released today. Compared to the 3.5 release it brings several bug fixes and performance improvements.

Full list of changes:

• Fixed Celery systemd unit example.
• Fixed notifications from http repositories with login.
• Fixed race condition in editing source string for monolingual translations.
• Include output of failed addon execution in the logs.
• Improved validation of choices for adding new language.
• Allow to edit file format in component settings.
• Update installation instructions to prefer Python 3.
• Performance and consistency improvements for loading translations.
• Make Microsoft Terminology service compatible with current zeep releases.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

## Saturday 09 March, 2019

We are creating a deployment of openSUSE clients with Salt. Kerberos needs password authentication, so we want to encrypt passwords before using them in Salt. I want to explain how to integrate all of that.

First, you have to install gpg, python-gnupg and python-pip. openSUSE wants to install only the package python-python-gnupg, which isn’t enough for Salt. Additionally, you have to run pip install python-gpg.

After that, you have to create the directory /etc/salt/gpgkeys with mkdir. That will be the home directory for the decryption key of Salt. Then you can create a passwordless key in this directory, as Salt is not able to enter a passphrase.

# gpg --gen-key --pinentry-mode loopback --homedir /etc/salt/gpgkeys
gpg (GnuPG) 2.2.5; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg2 --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Salt-Master
You selected this USER-ID: "Salt-Master <sarahjulia.kriesch@th-nuernberg.de>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key B24D083B4A54DB47 marked as ultimately trusted
gpg: directory '/root/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/6632312B6E178E0031B9C8E8B24D083B4A54DB47.rev'
public and secret key created and signed.

pub   rsa2048 2019-02-05 [SC] [expires: 2021-02-04]
6632312B6E178E0031B9C8E8B24D083B4A54DB47
uid   Salt-Master <sarahjulia.kriesch@th-nuernberg.de>
sub   rsa2048 2019-02-05 [E] [expires: 2021-02-04]


After that, you have to export your public and secret key in an importable format and import them. Salt cannot decrypt passwords without the secret key.

# gpg --homedir /etc/salt/gpgkeys --export-secret-keys --armor > /etc/salt/gpgkeys/Salt-Master.key
# gpg --homedir /etc/salt/gpgkeys --armor --export > /etc/salt/gpgkeys/Salt-Master.gpg
# gpg --import Salt-Master.key
gpg: key 9BE990C7DBD19726: public key "Salt-Master <sarahjulia.kriesch@th-nuernberg.de>" imported
gpg: key 9BE990C7DBD19726: secret key imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1

# gpg --import salt-master.pub
gpg: key 9BE990C7DBD19726: "Salt-Master <sarahjulia.kriesch@th-nuernberg.de>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1



At the moment the key’s validity is unknown, so we have to trust it. Therefore, we edit the key, run the trust command, enter 5 for ultimate trust, and save.

# gpg --key-edit Salt-Master …
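The post is cut off at this point. For completeness, here is a sketch of the step that usually follows: encrypting a secret with the Salt master’s public key so the armored output can be pasted into a pillar file. The recipient name matches the key generated above; treat the exact pillar integration as an assumption and check the Salt GPG renderer documentation.

```shell
# Encrypt a password for Salt's GPG renderer; the ASCII-armored output
# (-----BEGIN PGP MESSAGE----- ... -----END PGP MESSAGE-----) is what
# ends up in the pillar data
printf '%s' 'MySecretPassword' | \
  gpg --homedir /etc/salt/gpgkeys --armor --trust-model always \
      --encrypt -r 'Salt-Master'
```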

Back in the day, when I was podcasting. I always wanted to find and talk about news items that I felt were important but knew my contemporaries would not be talking about these on their shows because it wasn’t technical enough or didn’t require the host to say “Linux” every thirty seconds. Some of the … Continue reading "If only I were still Podcasting 9th March 2019"

## Thursday 07 March, 2019

Salient OS is the first Arch-based distribution that I have put any significant time into. Salient takes the heavy lifting out of Arch. The value of not building your own Arch system and using somebody else’s assembly can be debated, but that is outside the scope of this review. I am looking at this from my biased perspective as an openSUSE Tumbleweed user, another rolling release. Could I use Salient OS long term? Perhaps, but what am I gaining? I am not sure.

This review was initiated as a BigDaddyLinux distro challenge. I am perfectly happy with my choice of openSUSE. I am just dabbling around to learn and experiment because, why not? Linux is a fun thing.

## Installation

Installation of Salient OS is surprisingly easy. There isn’t an option to install it from boot but you can boot into a live media version of Salient and kick the tires before you commit to an installation.

My initial impression is, the desktop looks fantastic, it is themed just right and the wallpaper is pretty fantastic. I don’t know what it is from but it is visually quite interesting.

Since I am not a fan of testing things out in live media mode, I wanted to install it, but there wasn’t an icon on the desktop to begin the installation, so I searched for it in the menu. Which, by the way, is a great menu.

I found the installer by searching for “Installer”.

Upon launching it, the welcome screen gave me a warning about my hardware. I paused for just a moment, but continued anyway. It should be noted that I didn’t have any issues in my time with Salient OS. Next I set the location.

Next was the keyboard selection, which defaulted to the proper layout, which was welcome. The default partitioning is not my preference; it seems to be more and more common, but regardless of the benefits, I still prefer a separate partition.

Next was setting the user information and finally a summary. There are not a whole lot of options, which is likely a good thing since I am unfamiliar with Arch.

The install process was pretty quick. There was a point at around 20% where it seemed like the installation stalled, but the disk and CPU activity told me that it was indeed working hard.

Once the installation was complete, I wanted to reboot immediately to see what I could do with this fresh installation.

## First Run

The Grub bootloader is among the best I have ever seen. It sets the mood right from the beginning. This isn’t a bright, eye-stabbing desktop; this desktop respects light discipline.

The login screen was a bit of a puzzle to me. Not a big deal, no worse than having to press Ctrl+Alt+Delete to log into Windows; in this case, I just had to

## Wednesday 06 March, 2019

For the second time, Indonesia was chosen to host the openSUSE.Asia Summit. A similar event was held in Yogyakarta, Indonesia, in 2016 and was attended by hundreds of local openSUSE lovers as well as attendees from other Asian countries. This year we are challenged to repeat the success story of the openSUSE.Asia Summit on one of the most exotic islands in Indonesia: Bali.

openSUSE.Asia Summit is an event awaited by fans of openSUSE in Indonesia in particular, and activists of Free/Libre Open Source Software (FLOSS) in general. In this activity, experts, contributors, end users, and technology enthusiasts gather to share experiences about the development of openSUSE and other things related to FLOSS and have a lot of fun.

The island of Bali was chosen as the venue for the openSUSE.Asia Summit after being proposed by the Indonesian community during openSUSE.Asia Summit 2018 in Taipei, Taiwan. After going through a long discussion, the Asian committee chose Bali as the host of openSUSE.Asia Summit 2019. openSUSE.Asia Summit 2019 will be from October 5 to October 6, 2019, at Udayana University, Bali.

Goals to be achieved in the openSUSE.Asia Summit 2019 in Bali include:

• To promote openSUSE in the Asian region.
• To provide an alternative to the wider community that FLOSS can be a powerful tool for doing their daily job.
• To attract new contributors for openSUSE from Indonesia and other Asian countries.
• To provide a forum for sharing user and developer experiences because usually such discussions only occur online.

In the end, we are proud to present Bali Island as one of the historical places for the openSUSE.Asia Summit :)

Pre-announcement

openSUSE.Asia Summit 2019 will soon open a call for papers for prospective speakers. In addition, we will also open a logo competition for the openSUSE.Asia Summit 2019. Surely this will be an opportunity for designers in Asia to compete with each other, show their abilities and contribute to this activity. We will share more details about the above in the near future through news.opensuse.org.

See you in Bali and have fun!

Stepgun – Pantai Kuta, Bali (2) – CC BY-SA 4.0

Bali Beach Taravel Boats Vocation by keulefm

## Monday 04 March, 2019

At the beginning of March, I started my retreat stay in the French Alps to take care for myself and rest. Before the great rest though, I had to get from Brussels to the Alps: My first road trip.

If the mountain won’t come to the prophet, then the prophet must go to the mountain.1

If the mountain is very far away, the prophet is well advised to rather ride a horse, take an Uber or even a plane. I was lucky to get the chance to use the family car. Unfortunately, this car was not in Brussels, but in North Rhine-Westphalia in Germany. Though a car is much smaller and more mobile than a mountain, it was again I who had to go get the car.

I avoid cars whenever possible. I fly more often than I drive a car. So when I got to Germany for the hand-over of the car, we first drove together to the gas station, so I could verify that I could still manage to refill the tank, which I had not done for years.

The estimated total driving time to the Alps amounts to 8 hours. Considering that my longest drive was 4 hours from Berlin to the Baltic Sea, I decide to split the ride in two and stay halfway overnight. On my way to the Alps I make two stops in Germany for grocery shopping. Fritz-Kola, Bionade, different kinds of nuts, Klöße, Spätzle and few other specialities that are either very expensive in France or not available at all. In the end, little space is left in the car.

## On the Road

I am on the Autobahn. I listen to the album A Night At The Opera by Queen. I listen to the album twice. When I reach the last song, God Save The Queen, for the second time, much Autobahn is still ahead of me. I have to concentrate at every motorway junction. In between, at 120 km/h, time seems to halt. Nothing happens. FLASH. Yikes! Apparently, I did not slow down fast enough and got caught by a speed camera. This would be the first time in my life I get an administrative fine.

For the next minutes I observe the real-time fuel consumption per 100 km. I once heard that car engines are designed to be most fuel-efficient at an engine-specific speed. Without the non-conservative friction force, I would only need fuel for the initial acceleration and to climb the mountains; on flat ground, my consumption should be close to 0. Hence, the majority of the consumption is required to balance friction and keep the speed. During my studies, we learned that friction f(v) is a function of the speed v and includes significant higher-order contributions in v. That means,

f(v) = \alpha v + \beta v^2 + \mathcal{O}\left(v^3\right).

Consequently, the fuel consumption is optimal for a speed
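The post breaks off here. As a sketch of where the argument is presumably heading, under the added assumption that the engine also burns a constant idle power P_0 independent of speed, the fuel burned per unit distance behaves like

```latex
c(v) \;\propto\; \frac{P_0}{v} + f(v)
     \;=\; \frac{P_0}{v} + \alpha v + \beta v^2 ,
\qquad
\frac{\mathrm{d}c}{\mathrm{d}v} = 0
\;\Longrightarrow\;
\frac{P_0}{v^{*2}} = \alpha + 2\beta v^{*} .
```

So the optimal speed v* is finite: below it the idle term dominates the per-distance consumption, above it friction does.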

## Sunday 03 March, 2019

Weblate 3.5 has been released today. It includes improvements to the translation memory, addons, and alerting.

Full list of changes:

• Improved performance of built-in translation memory.
• Added interface to manage global translation memory.
• Improved alerting on bad component state.
• Added user interface to manage whiteboard messages.
• Addon commit messages can now be configured.
• Reduce number of commits when updating upstream repository.
• Fixed possible metadata loss when moving component between projects.
• Improved navigation in the zen mode.
• Added several new quality checks (Markdown related and URL).
• Added support for app store metadata files.
• Added support for toggling GitHub or Gerrit integration.
• Added check for Kashida letters.
• Added option to squash commits based on authors.
• Improved support for xlsx file format.
• Compatibility with tesseract 4.0.
• Billing addon now removes projects for unpaid billings after 45 days.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

This is part 4 of a series of posts about revamping the user interface of OBS. We started off with the Package pages in October 2018, moved on to the Project, User and Group pages in December 2018, continued with the Request pages in February 2019 and just now finished the Configuration pages. These changes are already available for all users in the beta program. We are one step closer to revamping the whole OBS...

## Friday 01 March, 2019

Dear Tumbleweed users and hackers,

In week 9, we managed to stay stable with regards to the number of snapshots, compared to week 8. There were again 4 snapshots published: 0220, 0224, 0225 and 0226. Of course, we don’t care as much for the stability of snapshot numbers as we do for stability in the product – but that seems also to be at a good level, judging from the numbers at http://review.tumbleweed.boombatower.com/

So, what did those four snapshots bring you? Of course, a lot of updates:

• Linux kernel 4.20.10
• Mesa 18.3.4
• binutils 2.32
• bison 3.3.2 (3.3.1 broke some packages in ‘funny’ ways)
• Mozilla Firefox 65.0.1
• Squid 4.6
• LXQt 0.14.0
• Ruby 2.5 is no more – R.I.P.

When looking at the current staging projects, there are still many things planned to come:

• Linux kernel 4.20.12
• KDE Plasma 5.15.2
• PostgreSQL 10.7 and maybe also version 11 soon
• git 2.21.0
• Virtualbox 6.0.4
• Rust 1.32
• Linux kernel 4.20.13
• LibreOffice 6.2.1.1 with full KDE Frameworks integration instead of the GTK3 wrapper
• openssl 1.1.1 is not forgotten: nodejs still blocking

Welcome to the extended version of 𝐈𝐓’𝐒 𝐅𝐑𝐈𝐃𝐀𝐘! Here we will try and expand on the mentions, so there’s a bit more context. Also, we will try and include Steemit stuff. There’s no Twitter or Facebook coz we’re not big fans of those two services. 🤣 𝐈𝐓’𝐒 𝐅𝐑𝐈𝐃𝐀𝐘! Apparently a criminal is aloud come back … Continue reading "It’s Friday 1 March 2019"

#### Snapshots Trending Stable

There were three quality openSUSE Tumbleweed snapshots released this week, bringing updates for python-setuptools, Mesa, php, Flatpak and both Mozilla Firefox and Thunderbird.

Eleven packages were updated in the latest snapshot of the week. Snapshot 20190226 updated the efivar 37 package, a set of tools and libraries for working with Extensible Firmware Interface variables; the update adds support for Embedded MultiMediaCard devices and for Peripheral Component Interconnect (PCI) root nodes without a device link in the sysfs pseudo file system. The sensors 3.5.0 package added detection of the Microchip MCP9808 and the Nuvoton NCT6793D, which has yet to appear on the company’s website. Bug fixes were made to the xclock 1.0.8, xev 1.2.3 and xfsinfo 1.0.6 packages. The xfsinfo package fixed a bug in 64-bit builds that caused the maximum request size to be incorrectly calculated. Other packages updated in the snapshot were file 5.36, python-idna 2.8 and python-python-dateutil 2.8.0.

A little more than a handful of packages were updated in the 20190225 snapshot. Mozilla Firefox 65.0.1 improved playback of interactive Netflix videos and provided various stability and security fixes. The libyui-qt-pkg 2.45.26 update fixed an icon display via a new libyui-qt function. A suggestion made by a user at EuroPython 2018 was implemented in the python-decorator 4.3.2 package: the path to the decorator module now appears in tracebacks. The caching proxy squid 4.6 is able to detect IPv6 loopback binding errors and fixed OpenSSL builds that define OPENSSL_NO_ENGINE. The sysconfig 0.85.2 package fixed the changes file to mention the relevant GitHub pull requests.

The Mesa3D graphics library was updated to version 18.3.4 in snapshot 20190224. The Mesa update brought compiler fixes and extra PCI IDs for Intel's Coffee Lake and Ice Lake processors. The RADV driver was fixed to compile correctly with GNU Compiler Collection 9. The package for editing images and vector image files, ImageMagick 7.0.8.28, fixed some bugs including the rendering of complex text for Hindi. Mozilla Thunderbird 60.5.1 fixed four Common Vulnerabilities and Exposures (CVE) that were all listed as having a high impact. The GNU collection of binary tools, binutils 2.32, now supports the C-SKY processor series. Flatpak jumped from 1.2.0 to 1.2.3, fixed some bugs and made some modifications to sandboxing. gpg2 2.2.13 implemented key lookup via keygrip. Several other library packages were updated in the snapshot, including libcontainers-common 20190219, libstorage-ng 4.1.91 and libxcrypt 4.4.3. A cURL-related fix was made with the updated version of php7, 7.3.2. The Tumbleweed snapshot also brought a major version update for Python's package manager python-pip; the update from 18.1 to 19.0.2 added improved documentation, deprecated support for Python 3.4 and made failed uninstall rollbacks more reliable and better at avoiding naming conflicts. The python-setuptools 40.8

## Thursday28 February, 2019

A piece of hardware that is often overlooked in many homes and businesses is the “edge device,” often just called a router. Many Internet providers will supply their own edge device. It is the first line of defense between your home or business and anyone on the Internet who would do you harm. I look at it as your first line of security, protecting the machines and devices on your network while still giving you access to them.

I have two reasons for setting up a pfSense box: I have heard great things about it and wanted to try it for myself on my own network, and doing so will give me the confidence to set it up for use in a small office setting. Nothing too large, just a moderate size.

## Hardware

I had to start with an adequate piece of hardware to run pfSense. Since it requires a 64-bit system, I am using one of my newly inherited Dell Optiplex 745 machines. As far as specifications go, it is at the bottom end of the recommended specifications to run pfSense, but the plan for this isn't anything really intense.

### Specs That Matter

• CPU: Intel Core 2 Duo 6300 @ 1.86 GHz
• 2.0 GB of DDR2 SDRAM
• 160 GiB HDD

Since this machine comes equipped with only a single Ethernet port, I had to purchase a half-height Gigabit Ethernet adapter to put in the one available PCI slot in this machine. The slot will only accept a PCI or PCI-X card, which was actually more difficult to find than I originally anticipated. Full height, easy; half height, not so much.

This particular unit came with two plate options. Changing out the plate consisted of removing two screws, separating the plate from the card and replacing it with the other plate. There wasn’t a bit of complexity to it.

The machine has one PCI slot, but it was occupied by a card with a COM port and a PS/2 port, attached via ribbon cable to the main board, that had to be removed first. I inserted the new card, started the machine up and jumped into the BIOS to make sure it was recognized.

Since it was recognized, I was ready to move on to the software portion of this little tech adventure.

There really wasn’t much to do in configuring the hardware. The only major change I made to the configuration, outside of adding the second Ethernet card, was to ensure that the machine would boot upon being powered. That way, should the machine lose power due to a power failure, it will boot again as soon as power is restored.

From the pfSense download page I chose the AMD64 memstick version to put on the Dell Optiplex 745. It should be noted that the memstick version cannot be written using SUSE Studio Imagewriter. For more information on writing images:

https://www.netgate.com/docs/pfsense/hardware/writing-disk-images.html
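As a sketch, writing the memstick image from a Linux box comes down to dd. The image filename below is illustrative and /dev/sdX is a placeholder; the destructive commands are commented out until you have verified the device name.

```shell
# Sketch: write the memstick image to a USB stick. /dev/sdX is a
# placeholder -- find the real device with `lsblk` first, then substitute.
IMG=pfSense-CE-memstick-amd64.img.gz
DEV=/dev/sdX

echo "Would write ${IMG%.gz} to $DEV"
# Uncomment once DEV is verified -- dd irrevocably overwrites the target:
# gunzip -k "$IMG"
# sudo dd if="${IMG%.gz}" of="$DEV" bs=4M status=progress conv=fsync
```

Anything that writes raw images (dd, GNOME Disks, Etcher) should work equally well; only SUSE Studio Imagewriter is ruled out for the memstick format.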

## Conduct Checksum on the Downloaded Image …
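The post is cut short here, but a small helper along these lines would do the comparison (the function name is ours, and the expected digest would come from the checksum file typically published alongside the download):

```shell
# verify_image: compare a file's SHA-256 digest against the published one.
# Usage: verify_image <file> <expected-sha256>
verify_image() {
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "OK: checksum matches for $1"
    else
        echo "FAIL: expected $2, got $actual" >&2
        return 1
    fi
}
```

A mismatch usually means a corrupted or tampered download, so don't write the image to the USB stick until the digests agree.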

Today, I was wondering why my local kiwi image builds got download problems:

[ DEBUG ]: 09:52:15 | system: Retrieving: dracut-kiwi-lib-9.17.23-lp150.1.1.x86_64.rpm [.error]
[ DEBUG ]: 09:52:15 | system: Abort, retry, ignore? [a/r/i/...? shows all options] (a): a
[ INFO ]: Processing: [########################################] 100%
[ ERROR ]: 09:52:15 | KiwiInstallPhaseFailed: System package installation failed: Download (curl) error for 'http://download.opensuse.org/repositories/Virtualization:/Appliances:/Builder/openSUSE_Leap_15.0/x86_64/dracut-kiwi-lib-9.17.23-lp150.1.1.x86_64.rpm':
Error code: Curl error 60
Error message: SSL certificate problem: unable to get local issuer certificate
An SSL error for an http:// URL? Digging through the zypper logs in the chroot showed that this request got redirected to https://provo-mirror.opensuse.org/repositories/... today, which failed due to missing SSL certificates.

After I had debugged the issue, the solution was quite simple: in the <packages type="bootstrap"> section, just replace ca-certificates with ca-certificates-mozilla and everything works fine again.
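For reference, the relevant piece of the kiwi image description then looks roughly like this (the surrounding package names are illustrative; only the ca-certificates-mozilla line is the actual fix):

```xml
<packages type="bootstrap">
    <package name="filesystem"/>
    <package name="glibc-locale"/>
    <!-- was: <package name="ca-certificates"/> -->
    <package name="ca-certificates-mozilla"/>
</packages>
```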

Of course, just building the image in OBS would also have solved this issue, but for debugging and developing, a "native" kiwi build was really necessary today.

## Wednesday27 February, 2019

One of the pain points in trying to make the Meson build system work with Rust and Cargo is Cargo's use of build scripts, i.e. the build.rs that many Rust programs use for doing things before the main build. This post is about my exploration of what build.rs does.

Thanks to Nirbheek Chauhan for his comments and additions to a draft of this article!

TL;DR: build.rs is pretty ad-hoc and somewhat primitive, when compared to Meson's very nice, high-level patterns for build-time things.

I have the intuition that giving names to the things that are usually done in build.rs scripts, and creating abstractions for them, can make it easier later to implement those abstractions in terms of Meson. Maybe we can eliminate build.rs in most cases? Maybe Cargo can acquire higher-level concepts that plug well to Meson?

(That is... I think we can refactor our way out of this mess.)

## What does build.rs do?

The first paragraph in the documentation for Cargo build scripts tells us this:

Some packages need to compile third-party non-Rust code, for example C libraries. Other packages need to link to C libraries which can either be located on the system or possibly need to be built from source. Others still need facilities for functionality such as code generation before building (think parser generators).

That is,

• Compiling third-party non-Rust code. For example, maybe there is a C sub-library that the Rust crate needs.

• Link to C libraries... located on the system... or built from source. For example, in gtk-rs, the sys crates link to libgtk-3.so, libcairo.so, etc. and need to find a way to locate those libraries with pkg-config.

• Code generation. In the C world this could be generating a parser with yacc; in the Rust world there are many utilities to generate code that is later used in your actual program.

In the next sections I'll look briefly at each of these cases, but in a different order.

## Code generation

Here is an example, in how librsvg generates code for a couple of things that get autogenerated before compiling the main library:

• A perfect hash function (PHF) of attributes and CSS property names.
• A pair of lookup tables for SRGB linearization and un-linearization.

For example, this is main() in build.rs:

fn main() {
    generate_phf_of_svg_attributes();
    generate_srgb_tables();
}


And these are the first few lines of the first function:

fn generate_phf_of_svg_attributes() {
    let path = Path::new(&env::var("OUT_DIR").unwrap()).join("attributes-codegen.rs");
    let mut file = BufWriter::new(File::create(&path).unwrap());

    writeln!(&mut file, "#[repr(C)]").unwrap();

    // ... etc
}


Generate a path like $OUT_DIR/attributes-codegen.rs, create a file with that name, wrap it in a BufWriter, and start writing code to it.

Similarly, the second function:

fn generate_srgb_tables() {
    let linearize_table = compute_table(linearize);
    let unlinearize_table = compute_table(unlinearize);

    let path = Path::new(&env::var("OUT_DIR").unwrap()).join("srgb-codegen.rs");
    let mut file = BufWriter::new(File::create(&path).unwrap());

    // ...

    print_table(&mut file, "LINEARIZE", &linearize_table);
    print_table(&mut file …

We know we owe you a report for the previous development sprint (namely the 71st). But we also know how to compensate for that. This week we have not only one, but up to three blog posts to keep you tuned to the YaST evolution.

So let’s start with the summary of what has been implemented and fixed lately. That includes:

• Improvements in the Bcache support in the Partitioner
• Users home as Btrfs subvolumes
• Better visualization of Salt formulas in YaST Configuration Management
• Automatic selection of the needed driver packages
• Improvements in many other areas like AutoYaST, bootloader, the Partitioner and the storage Guided Setup

You will find links to the other more exhaustive blog posts, about the recently added Bcache support and the revamped Configuration Management module, in the corresponding sections of this report.

### Final Improvements in the Bcache Support

During several sprints, we have been detailing our efforts to offer a decent support for the Bcache technology in the YaST partitioner. During this sprint we have implemented what we consider the three final bits:

• Bcache devices without caching
• Modifying Bcache devices
• Listing all caching sets

We will now detail these three improvements. But to celebrate that Bcache support now looks complete in the Partitioner, we have published a separate blog post explaining what Bcache is and how to take advantage of that technology using the YaST Partitioner. Enjoy!

And back to the topic of recent improvements, we should mention that the Bcache technology allows creating a Bcache device without an associated caching one. This is useful if you are considering using Bcache in the future. In that case you can set up all your slow devices as Bcache backing devices without a cache, leaving open the possibility of adding caching devices later. That is now possible by selecting the new option labeled “without caching” during creation, as shown in the following screenshot.

Of course, that’s not very useful without the possibility of modifying a Bcache device. So in the latest sprints we also added a new “Change Caching” button.

This option will only fully work for bcaches that do not exist on your system yet (e.g., a bcache you are creating right now). For existing bcache devices, it is only available when the bcache has no associated caching device yet. Otherwise, a detaching action would be required, and that could take a very long time in some situations.

And last but not least (regarding Bcache), now the Expert Partitioner also shows the list of all caching sets in a separate tab (unsurprisingly) titled “Caching Set Devices”. It is only an informative tab, but thanks to it you will be able to check all devices currently used for caching at a quick glance.

### Create the User’s Home as a Btrfs Subvolume

As many (open)SUSE users know, Btrfs offers several advantages over traditional Linux file-systems. One of them is the possibility of using subvolumes to customize the configuration and features of different parts of the

### An Introduction to the YaST Configuration Management Module

YaST Configuration Management is a relatively unknown module that was born back in 2016 during a workshop and was developed further during Hack Week 14. The idea was to enable AutoYaST to delegate part of its duties to a configuration management system like Salt or Puppet. Therefore, AutoYaST would take care of the initial installation (partitioning, software installation, etc.) and then hand control over to one of those systems for further configuration.

During Hack Week 15 the module got support for SUMA Salt Parametrizable Formulas and later it was adapted to
run during the 1st stage of the installation. Apart from that, the module received fixes and minor updates as needed.

But by the end of 2018, we started to work on the module again in order to:

• Update the SUMA Salt Parametrizable Formulas support.
• Add support for YaST Firstboot.
• Improve the documentation (the README was basically rewritten).

In this article, we will review these changes including, of course, some screenshots. If you want to try these features by yourself, you will need to install yast2-configuration-management 4.1.5 and yast2-firstboot 4.1.5 (or later).

### Updating the SUMA Salt Parametrizable Formulas Support

Since the initial implementation of the SUMA Salt Parametrizable Formulas support, the forms specification evolved quite a lot, rendering the module outdated. Support for new data types, collections, conditions, etc. was simply missing.

When it comes to the new UI design, the main problem we faced is that, in YaST, we must take 80×24 interfaces into account, and the support for scrolling in our libraries is quite limited. So we needed to organize the information to minimize the chance of running out of space.

The screenshot above belongs to a fairly complex dhcp formula. On the left side, there is a tree that you can use to browse through the formula. On the right, YaST displays the set of form controls that you use to enter the formula parameters.

When dealing with collections, YaST displays the information in pop-up dialogs as you can see below.

Do you want to try it by yourself? No problem, but bear in mind that it may modify your system configuration, so it would be wiser to do such experiments in a virtual machine.

Having said that, the easiest way to try the module is to grab some formulas from OBS, install them and start the module from the YaST control center by clicking on YaST2 Configuration Management under the Miscellaneous section. If you are a console lover, you can just run yast2 configuration_management.

### Adding Firstboot Support

YaST Firstboot is a module that allows the user to configure a pre-installed system during the first boot (hence the name). It implements a set of YaST clients to perform different tasks, such as setting the language, configuring the timezone, etc.

If you need to configure something which is not supported by YaST Firstboot, you could write your own client having the power of YaST

Usual readers of the YaST Team development sprint reports on this blog already know we have been working steadily on adding support for the Bcache technology to the YaST Partitioner. We have already reached a point at which we consider this feature ready to be shipped with openSUSE Leap 15.1 and SUSE Linux Enterprise 15 SP1. That sounds like a nice occasion to offer the full picture in a single blog post, so our beloved users don’t need to dig into several posts to know what the future releases will bring regarding Bcache in YaST. Needless to say, all this is already available for openSUSE Tumbleweed users, or will be available in the following weeks.

### Bcache 101

But, to begin with, what is Bcache? It’s a Linux technology that improves the performance of any big but relatively slow storage device (the so-called “backing device” in Bcache terminology) by using a faster and smaller device (the so-called caching device) to speed up read and write operations. The resulting Bcache device then has the size of the backing device and (almost) the effective speed of the caching one.

In other words, you can use one or several solid state drives, which are typically fast but small and expensive, to act as a cache for one or several traditional rotational (cheap and big) hard disks… effectively getting the best of both worlds.

How does it all look in your Linux system? Let’s explain it with some good old ASCII art:

(slow hard disk)  (faster device, SSD)
    /dev/sda            /dev/sdb
        |                   |
[Backing device]    [Caching device]  <-- Actually, this is a set of
        |                   |             caching devices (Caching Set)
        |_________ _________|
                 |
             [Bcache]
           /dev/bcache0


Take into account that the same caching device (or the same “caching set”, sticking to Bcache terminology) can be shared by several Bcache devices.

If you are thinking about using Bcache later, it is also possible to set up all your slow devices as Bcache backing devices without a cache. Then you can add the caching device(s) at a later point in time.

(slow hard disk)
    /dev/sda
        |
[Backing device]
        |
        |_________ _________
                 |
             [Bcache]
           /dev/bcache0
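Outside of YaST, the same backing-device-first setup can be sketched with the bcache-tools package. The device names are placeholders, and the real commands, commented out below, destroy existing data on those devices:

```shell
# Placeholders -- substitute real devices after checking `lsblk`
BACKING=/dev/sda
CACHE=/dev/sdb

echo "Would create backing device on $BACKING and caching set on $CACHE"
# make-bcache -B "$BACKING"      # backing device appears as /dev/bcache0
# make-bcache -C "$CACHE"        # create the caching set (can be done later)
# bcache-super-show "$CACHE"     # note the cset.uuid value it prints
# echo <cset-uuid> > /sys/block/bcache0/bcache/attach   # attach the cache
```

The Partitioner does all of this for you; the sketch is only to show what happens under the hood.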


Last but not least, the Bcache technology makes it possible to create virtual devices on top of an existing caching set without an associated backing device. Such a device is known as a Flash-only Bcache and is only useful in some very specific use cases.

                  (faster device, SSD)
                        /dev/sdb
                            |
                    [Caching device]
                            |
         _________ _________|
                 |
        [Flash-only Bcache]
            /dev/bcache0


You may be thinking: “hmm, all that sounds interesting and daunting at the same time… how can I get started with it in an easy way?“. And surely you are already figuring out the answer.

### Bcache in the YaST Partitioner

When running on a 64-bit x86 system, the YaST Partitioner will offer a Bcache entry in its usual left tree. There you can see two tabs. The second one lists the Bcache caching sets available in the system and is purely informative. But the first one is

## Saturday23 February, 2019

Fixes and updates applied with this second minor version improved and extended the main functions, let’s see what’s new.

If you are new to the zypper-upgraderepo plugin, give a look to the previous article to better understand the mission and the basic usage.

### Repository check

The first important change concerns the way a repository is checked as valid:

• the HTTP request sent is HEAD instead of GET, in order to get a more lightweight answer from the server, since the HTML page is not included in the response;
• the request points directly to the repodata/repomd.xml file instead of the folder, because some server security settings could hide the directory listing and send back a 404 error although the repository works properly.
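A rough equivalent of that check with curl, against an illustrative repository URL:

```shell
# HEAD request against the repository metadata: a 200 status means the
# repository is reachable without downloading any body data.
curl -sI -o /dev/null -w '%{http_code}\n' \
    https://download.opensuse.org/distribution/leap/15.0/repo/oss/repodata/repomd.xml
```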

### Check just a few repos

Most of the time we want to check the whole repository list at once, but sometimes we want to check just a few of them, to see whether they are finally available or ready to be upgraded, without looping through the whole list again and again. That’s where the --only-repo switch, followed by a list of comma-separated numbers, comes in handy.

--only-repo switch

### All repos by default

Disabled repositories are now shown by default, and a new column highlights which of them are enabled, keeping their numbers in sync with the zypper lr output. To see only the enabled ones, just use the --only-enabled switch.
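Taken together, typical invocations might look like this guarded sketch (it requires the zypper-upgraderepo plugin; the fallbacks keep it harmless elsewhere):

```shell
# Repository numbers are the ones shown by `zypper lr`.
if command -v zypper >/dev/null 2>&1; then
    zypper upgraderepo --only-repo 3,5 || echo "is zypper-upgraderepo installed?"
    zypper upgraderepo --only-enabled  || echo "is zypper-upgraderepo installed?"
else
    echo "zypper not available on this system"
fi
```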

Table view

### Report view

Besides the table view, the --report switch introduces a pleasant new view that uses just two columns, spanning the right cell across several rows in order to fit more information and improve readability.

Report view

### Other changes

The procedure that tries to discover an alternative URL now dives back and forth through the directory list, in order to explore the whole tree of folders wherever access is allowed by the server itself. A side effect is a general improvement in repo downgrades as well.

The output in upgrade mode is now verbose and shows a table similar to the checking one, giving details about the changed URLs in the details column.

Server timeout errors are now handled through the --timeout switch, which allows tweaking how long to wait before a late answer from the server is considered an error.

### Final thoughts and what’s next

This plugin is practically complete, achieving all the goals needed for its main purpose. No other relevant changes are scheduled, so I have started thinking about other projects to work on in my spare time.

Among them, there is one I am particularly interested in: bringing the power of the openSUSE software search page to the command line.

However, there are some problems:

• The website doesn’t implement any web API, so a lot of scraping work will be needed;
• Some packages go missing when switching from the global search (selecting All distributions) to a specific distribution;
• Packages from Packman are not included.

I have already got some ideas to solve them and did lay

There has been quite a lot of buzz in the news about the first stable release of Plasma in 2019, version 5.15.0, released on 12 February 2019. It came to openSUSE Tumbleweed a few days later and a few days after that, I started updating my various systems running Tumbleweed. I am not going to cover all the changes and improvements, there is plenty of that available to read. Instead, this is my experience with the upgrade process on the first three Tumbleweed machines.

My primary machine isn’t generally first to get the latest updates, because I am using it nearly all the time, so I begin the updates on other machines; incidentally, all of them are Dell. The first machine on which I performed the update is a Dell Latitude E6440. There isn’t a whole lot of software on this one as its primary focus is educational activities. There aren’t any community repositories on this machine, so the update required no intervention at all. The next machine, a Dell Inspiron 20 3048, does do a lot for me but doesn’t have too many community-maintained repositories. It too went without incident. Lastly, my primary machine, also a Dell Latitude E6440 but with more memory, more storage and a dedicated AMD GPU.

This machine has quite a bit of software on it. I do try things out, and I don’t always remove the applications or community-maintained repositories afterwards. I took this as an opportunity to start trimming out some additional repositories; thankfully, zypper makes that process easy. My primary machine was trimmed down to 36 repositories. Then I performed the update.

sudo zypper dup

Zypper ran through, did its thing, and asked me about a couple of Python packages and one package I had installed that I already knew was “broken” by a missing dependency. After Zypper calculated everything out, I agreed to the update. Just as every other Tumbleweed update goes, this one proceeded without incident.

All three machines had only one small issue: they didn’t want to leave Plasma to reboot. Specifically, selecting “Reboot” or “Halt” and even “Logout” did not actually perform those actions. Instead, I ran in a terminal:

sudo systemctl reboot

There may be a better way of doing a reboot; if you are aware of one, please let me know. A few moments later, the machine started up without incident, and what I may be most excited about is that everything still just works.

I did receive one pleasant surprise: my Bluetooth keyboard, for the first time, communicated that it was low on power instead of just going unresponsive. I was able to see a “10% Warning” pop-up notification. I thought that was pretty slick. I have been enjoying the status and warnings with wireless Logitech devices for years, but this was a first for Bluetooth. Very well done.

## Final Thoughts

Nothing is ever perfect but my experience with using openSUSE Tumbleweed has been

## Friday22 February, 2019

I suspected the Linux Foundation went to the dark side when they started strange deals with Microsoft. But I'm pretty sure they have gone to the dark side now. https://venturebeat.com/2019/02/21/linux-foundation-elisa/ If Linux can be certified for safety-critical stuff, it means your certification requirements are _way_ too low. People are using microkernels for critical stuff for a reason...

Dear Tumbleweed users and hackers,

Week 8 is what we like to see in terms of the number of snapshots: we managed to get out 4 snapshots (0214, 0215, 0217 and 0219), containing updates like:

• MozillaFirefox 65.0
• Flatpak 1.2.0
• KDE Applications 18.12.2
• KDE Frameworks 5.55.0
• KDE Plasma 5.15.0
• Linux Kernel 4.20.7
• PackageKit should now transparently switch from ‘up’ to ‘dup’ on Tumbleweed
• suse-module-tools: implemented fs blacklisting logic

The stagings are currently busy forging/testing these changes:

• OpenSSL 1.1.1: nodejs fails to build
• binutils 2.32
• Removal of Ruby 2.5 from the distribution
• PostgreSQL 11 made it back on the list
• Linux kernel 4.20.10 and beyond