Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Thursday
05 September, 2019


face

When I take my laptop and I go into a mobile mode, I’m often missing a second or third screen. Frequently, my need isn’t having full motion video or anything of that sort, it’s just the ability to have text displayed in some form, be it PDF or web page, beside my main screen. Most of the time, that is how I use my multi-screen layout. One screen is my main workspace while the others display reference information.

I came upon this long lost solution on the BDLL discourse from Eric Adams.

https://discourse.bigdaddylinux.com/t/use-your-tablet-as-a-monitor-with-virtscreen/104

The key difference in my implementation versus his, with both of us using KDE Plasma: his solution is probably more elegant and could likely take better advantage of my AMD GPU, but my solution is quick and dirty and gets the job done.

Host Device

Since this package is not available in the openSUSE repositories, I downloaded the AppImage here:

https://github.com/kbumsik/VirtScreen

There are further instructions on that page but I am going to only highlight how I used it on openSUSE Tumbleweed with the Plasma Desktop Environment. Looking at the system requirements, I had to install x11vnc:

sudo zypper install x11vnc

Since I used the AppImage, I had to make it executable. To do that in terminal, navigate to the location of the AppImage and run this:

chmod a+x VirtScreen.AppImage

Alternatively, if you are using Plasma with the Dolphin file manager, navigate to the location of the AppImage, right-click, select Properties (or Alt+Enter when highlighted). Select the Permissions tab and select the Is executable button.

Upon launching it, I set the resolution of my tablet, which is my HP Touchpad that I set up with F-Droid. I reduced the Height to account for the navigation buttons that seem to get stuck in the ON position.

I selected the Enable Virtual Screen option.

Next, I needed to Open Display Settings to arrange the screens.

Unfortunately, there was an error that caused the display settings to not open. I went into the preferences to see what the other options were. Since I know I didn’t want Gnome, I went with ARandR.

Since it wasn’t installed, I went to openSUSE Software and searched for it.

https://software.opensuse.org/package/arandr

After installing ARandR, VirtScreen still could not launch ARandR. Thankfully, I was able to launch ARandR using Krunner (menu works too) and made the adjustment to the screen location.
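
For reference, ARandR is just a graphical front-end for xrandr, so the same arrangement can also be done from the terminal. A hedged sketch (the output names below are assumptions and vary by driver, so list them first):

# List the connected and virtual outputs to find their actual names
xrandr --listmonitors
# Example only: place a hypothetical VIRTUAL1 output to the right of the laptop panel
xrandr --output VIRTUAL1 --mode 1024x768 --right-of eDP-1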

The next step was to activate the VNC server within VirtScreen by setting the password and opening up the appropriate port in the firewall. Since the openSUSE default is Firewalld at the time of writing, you can either do so with the GUI, which is pretty straightforward, or use the terminal.

To get the active firewall zone:

sudo firewall-cmd --get-default-zone

Assuming you are only using the default zone, Public (adjust based on

sudo firewall-cmd --zone=public --permanent --add-port=5000-5003/tcp
sudo
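
The post is cut off here by the aggregator. Presumably the remaining step reloads the firewall so the permanent rule takes effect; a hedged sketch of what that would look like:

sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports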


Wednesday
04 September, 2019


face

Recently, a number of quite complex configurations came up while syslog-ng users were asking for advice. Some of these configurations were even pushing the limits of syslog-ng (regarding the maximum number of configuration objects). As it turned out, these configurations could be significantly simplified using the in-list() filter, one of syslog-ng’s lesser known features.

First, a bit of history. The idea of the in-list() filter came to me while I was listening to Xavier Mertens at a Libre Software Meeting conference talk in France. In his talk, he described how to check log messages for suspicious IP addresses. He used free IP address lists from the Internet (spammer IP addresses, malware command and control IP addresses, etc.) and, using a batch process, he kept checking if any of those were present in the log messages on a nightly basis.

It occurred to me that all of the above could be done in real-time. Namely, several different parsers capable of extracting IP addresses and other important information from log messages as they arrive are already available in syslog-ng. All that was missing was a tool that could compare the extracted value with a list of values coming from a file. This tool was implemented quickly as a ‘spare time project’ by one of my colleagues. This is how the in-list() filter was born.

As you can see, the original use case came from IT security. Still, the first real world use I know about came from an operations team that wanted to forward logs based on application name lists. This later became the example configuration in the documentation: https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.22/administration-guide/inlist Previously, you had to maintain a long list of filters in your syslog-ng configuration and connect them using a Boolean OR. Using the in-list() filter, the list is moved into an easy-to-maintain external file, where each value is listed on a new line. Note that the values in this file are case sensitive, so Bla, bLa, and blA are all different.
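
As a rough illustration (the file path, filter name and surrounding source/destination names are placeholders, not taken from the article), such a setup could look like this:

filter f_applist {
    # one application name per line in the external file
    in-list("/etc/syslog-ng/conf.d/programs.list", value("PROGRAM"));
};

log {
    source(s_local);
    filter(f_applist);
    destination(d_selected);
};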

We had reports that some syslog-ng users were generating syslog-ng.conf using a script. In these cases, the configuration became overly long and eventually reached the maximum possible number of objects. In comparison with using a (possibly very long) list of filters, using the in-list() filter has many advantages:

  • logic and data are separated

  • less error-prone, as there is no need to edit the configuration file each time

  • easier to maintain, as you edit a list of values instead of a configuration file

  • better performance

  • the number of values in the file used by the in-list() filter is only limited by your available RAM

There is no exact limit for when you should change from a list of filters to the in-list() filter. I would suggest using in-list() as soon as your configuration becomes difficult to read.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email


Monday
02 September, 2019


face

Regolith is a very interesting distribution based on Ubuntu that uses the i3 window manager. In this case, you get all the benefits of the Ubuntu distribution with the unique i3 interface with predefined shortcut keys. The creator of this fine distribution, Ken Gilmer, has put a lot of time and effort into really making this a fine demonstration of i3.

This is my first i3 experience and overall it has been quite enjoyable. For those that are less familiar with the difference between a Window Manager and a Desktop Environment… I really can’t say; to me, it is a desktop environment. I’m sure there is some nuance that distinguishes a “desktop environment” from a “window manager,” but that debate and discussion is outside of the scope of this blathering. For my purposes, anything that allows me to interact with my computer in a holistic fashion is a Desktop Environment. So what is holistic in this context?

This is my impression of using Regolith as a deeply entrenched, content openSUSE Tumbleweed user that thinks using anything other than Plasma keeps my fingers hovering just over the bail-out button. Bottom Line Up Front: Regolith was a challenging but educationally enjoyable experience. My trip through Regolith sparked my imagination as to some specific applications and uses for this user environment. As cool as the Regolith (i3) interface is, it is not enough to push me off the openSUSE Tumbleweed Plasma mountain. This is my biased impression after running Regolith as my interface to my computer.

Installation

Since this is Ubuntu based, the installation is really quite trivial. The team at Canonical have done a fantastic job of giving us a low barrier of entry into the Linux world. When Regolith boots, out of the gate, you are asked to select your language. The GRUB boot menu pops up, where the second option will put you immediately into the “Installation Process.” Thumbs up there! Anytime I get that option right from the beginning, I am just pleased that I don’t have to hunt around for a possibly hidden installation icon to do the one function I came here to do.

Choosing this option, it looks like Regolith boots up a basic desktop and you are immediately greeted with the installation application. To start out, you are welcomed and asked for your language preference… again… perhaps just a verification that you do indeed speak the language you previously specified. Then you select your keyboard layout.

Next you are asked to select whether or not you would like to install updates and 3rd party software. The installation type I have chosen for this is to erase the entire disk, as I am running this in a virtual machine.

Before committing to the drive modifications, you are given a sanity check and that makes this the point of no return, in a manner of speaking. After that, you are required to select your location.

The last step is going to


Saturday
31 August, 2019


face

This was a lot more work than I initially anticipated, but I have decided to start a “podcast,” though the term “podcast” seems too pretentious for me in the same way that “blog” does, so these are nothing more than audio blatherings of what I have been noodling around with.

Or Click to listen to the Podcast Here

In Tumbleweed News

Standout updates in the snapshots released in the last two weeks have been pretty plentiful. As part of the fun in running openSUSE Tumbleweed, you get a regular stream of well tested software updates.

Some of the most recent changes include updates for the Mesa 3D Graphics Library with version 19.1.3, which mostly provided fixes for drivers and backends. The Mesa-ACO drivers are now in staging, so they will be available soon in Tumbleweed.

KDE’s Frameworks and Plasma have been updated. There have been multiple fixes for KTextEditor, KWayland, KIO and Baloo. Plasma 5.16.4 provides bug fixes and Airplane mode improvements.

The Kernel has been updated to 5.2.10, and VLC to version 3.0.8, which improves adaptive streaming and fixes stuttering at low framerates. CVEs were addressed in apache2, where a malicious client could perform a Denial of Service attack.

The HP Linux Imaging and Printing package, hplip, is now at version 3.19.6, which adds support for new printers. MariaDB 10.3.17 enjoys five new CVE fixes.

Pending rating for snapshot 20190824 is trending at a moderately stable score of 87, 20190828 is trending at 86. Tumbleweed Snapshot ratings can be viewed at the Tumbleweed snapshot reviewer.

Xfce 4.14 has arrived in openSUSE

After 4 years in the making and a few more days of baking in the openSUSE Build Service, Xfce has been run through the openSUSE gauntlet of openQA, the automated quality assurance system, and has been built, ready for Tumbleweed and backported to Leap as well. I tested it on version 15.1 and it has the same pizzazz and vigor you’d see on Tumbleweed.

The installation on Leap is about 443 packages when selecting the X11:Xfce repository. Keep in mind, this is not the official repository but what is considered “Experimental,” for what it’s worth.
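
For what it’s worth, adding that repository on Leap 15.1 would presumably look something like this (the URL follows the usual OBS repository naming pattern, so double-check it before use):

sudo zypper ar -f https://download.opensuse.org/repositories/X11:/Xfce/openSUSE_Leap_15.1/ X11:Xfce
sudo zypper dup --from X11:Xfce --allow-vendor-change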

Some of the changes that I find particularly noteworthy are that all the core components are now using GTK3. You can potentially enjoy a flicker- and screen-tearing-free experience thanks to full VSync support. If you have a High DPI monitor, your life with that hardware will be much improved and there have been


Friday
30 August, 2019


face

Dear Tumbleweed users and hackers,

The last two weeks have been average weeks when it comes to the number of snapshots and updates. We have released a total of 6 snapshots. From a user point of view, I think this is actually a pretty good pace. The 6 snapshots were 0815, 0820, 0822, 0823, 0824 and 0828.

The snapshots brought you those updates:

  • KDE Applications 19.08.0
  • KDE Frameworks 5.61.0
  • openSUSE-welcome has been newly added
  • pkg-config was dropped in favor of pkgconf
  • libvirt 5.6.0
  • XFCE 4.14.0
  • Linux kernel 5.2.9 & 5.2.10
  • poppler 0.79
  • vlc 3.0.8
  • CMake 3.15.1
  • gawk 5.0.1
  • Mesa 19.1.5
  • GCC 9.2.1
  • util-linux 2.34

Some things that I reported as work in progress two weeks ago are still pending. At the moment, the staging areas are filled with:

  • glibc 2.30
  • Swig 4 (libsolv fix just arrived today)
  • Inclusion of python-tornado 5 & 6 (waiting for salt to declare proper dependencies in the package)
  • Linux kernel 5.2.11
  • filesystem changes: /etc/cron.* will no longer be generally available. BuildRequire cron in your packages if you install files there (or better yet, migrate to systemd timers)

face

The summer is almost gone but, looking back, it has been pretty productive from the YaST perspective. We have fixed a lot of bugs, introduced quite interesting features to the storage layer and the network module refactoring continues to progress (more or less) as planned.

So it is time for another sprint report. During the last two weeks, we have been basically busy squashing bugs and trying to get the network module as feature-complete as possible. But, after all, we have also had some time to improve our infrastructure and organize for the future.

YaST2 Network Refactoring Status

Although we have been working hard, we have not said a word about the yast2-network refactoring progress since the end of July, when we merged part of the changes into yast2-network 4.2.9 and pushed it to Tumbleweed. That version included quite a lot of internal changes related to the user interface and a few bits of the new data model, especially regarding routing and DNS handling.

However, things have changed a lot since then, so we would like to give you an overview of the current situation. Probably the most remarkable achievement is that the development version is able to read and write the configuration using the new data model. OK, it is not perfect and does not cover all the use cases, but we are heading in the right direction.

In the screencast below you can see it in action, reading and writing the configuration of an interface. The demo includes handling aliases too, which is done way better than in the currently released versions.

YaST2 Network New Data Model in Action

Moreover, we have brought back support for many types of devices (VLAN, InfiniBand, qeth, TAP, TUN, etc.), improved the WiFi set-up workflow and reimplemented the support for renaming devices.

Now, during the current sprint, we are focused on taking this new implementation to a usable state so we can release the current work as soon as possible and get some feedback from you.

Finally, if you like numbers, we can give you a few. Since our last update, we have merged 34 pull requests and have increased the unit test coverage from 44% in openSUSE Leap 15.0/SUSE Linux Enterprise SP1 to around 64%. The new version is composed of 31,702 (physical) lines of code scattered through 231 files (around 137 lines per file) vs 22,542 in 70 files of the old one (more than 300 lines per file). And these numbers will get better as we continue to replace the old code 🙂

Missing Packages in Leap

It turned out that some YaST packages were not updated in Leap 15.1. The problem is that, normally, the YaST packages are submitted to the SLE15 product and they are automatically mirrored to the Leap 15 distribution via the build service bots. So we do not need to specially handle the package updates for Leap.

However, there are a few packages which are not included in the SUSE Linux Enterprise product line, but are


Thursday
29 August, 2019


face

There have been three openSUSE Tumbleweed snapshots released this week.

The snapshots brought new versions of VLC, Apache, poppler and an update of the Linux Kernel.

Snapshot 20190824 delivered a fix to the swirl option, which had produced an unexpected result, with the update to ImageMagick version 7.0.8.61. Improved adaptive streaming and a fix for stuttering for low framerate videos became available in VLC 3.0.8; 13 issues, including 5 buffer overflows, were fixed, and 11 Common Vulnerabilities and Exposures were assigned and addressed in the media player version. More than a handful of CVEs were addressed with the apache2 2.4.41 update. One of the CVEs addressed was that of a malicious client that could perform a Denial of Service attack by flooding a connection with requests and basically never reading responses on the TCP connection. The new version also improves the balancer-manager protection against XSS/XSRF attacks from trusted users. The x86 emulation library fixed a compiler warning in the 2.4 version and the X11 RandR utility updated the geometry text file configure.ac for the GitLab migration with the xrandr 1.5.1 version. The snapshot is trending at a rating of 86, according to the Tumbleweed snapshot reviewer.

The HP Linux Imaging and Printing package hplip 3.19.6 added support for several new color and enterprise printers; it was released in snapshot 20190823. The Linux Kernel was updated to version 5.2.9 and offered more than a handful of commits for the Direct Rendering Manager for AMD hardware, as well as fixes for some memory leak bugs related to the Advanced Linux Sound Architecture. The utility library for rendering PDFs, poppler, also fixed some memory allocation in the PostScriptFunction with version 0.79.0; the version also fixed regressions on TextSelectionPainter. Minor updates were also made in the snapshot for xfce4-settings 4.14.1, and yast2-fonts 4.2.1, yast2-instserver 4.2.3 and yast2-support 4.2.2 all had changes related to a newer Ruby version. The snapshot is trending at a rating of 84, according to the Tumbleweed snapshot reviewer.

The first snapshot of the week, 20190822, updated five packages. MariaDB’s 10.3.17 package had the most changes in the snapshot and merged relevant storage engine changes from MySQL 5.7.27 as well as five CVE fixes. Small bug fixes and fuzzer fixes were made to libetonyek 0.1.9. GNOME’s photo manager shotwell 0.30.7 fixed compatibility with the Vala 0.46 programming language. The other two package updates were libsrtp2 2.2.0 and rubygem-sassc 2.1.0. The snapshot recorded a rating of 78, according to the Tumbleweed snapshot reviewer.


Wednesday
28 August, 2019


face

I am not a “Distro Hopper” but I like to try out other distributions of Linux or operating systems, for that matter. I don’t have much interest in wiping out my main system to find out I prefer openSUSE over something else. The alternative is virtual machines. I have found that QEMU/KVM seems to work better with openSUSE Tumbleweed than Virtualbox. I have previously described this issue here.

The issue I had today was that when starting a virtual machine guest on my system, I received an error without any real hint as to the solution of the problem. A bunch of details that, frankly, didn’t make a whole lot of sense, so I searched for the title of this error:

Error starting domain: Requested operation is not valid: network ‘default’ is not active

I found a reference that fixed the issue and so I made myself a little reference as another gift to my future self. For, you know, when I break something again.
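
For completeness, the usual fix for this particular error is to start (and optionally autostart) the libvirt default network; a hedged sketch, not necessarily the exact steps from the linked reference:

sudo virsh net-start default
sudo virsh net-autostart default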

Libvirt / QEMU / KVM Reference

Reference

Virtual Machine Manager with QEMU/KVM on openSUSE Tumbleweed


face

Not too much noise has been made about it, but fairly recently SailfishOS for the Sony Xperia XA2, XA2 Ultra and XA2 Plus (finally) came out of beta stage after the initial release last autumn. I went and got myself an XA2 Plus and have been using it for a week now and am very pleased with it. The Android runtime for the XA2 models is at version 8.x (compared to 4.x for previous SailfishOS devices), meaning a lot more Android apps will run on it.

So if you’re looking for a proper GNU/Linux phone and/or an alternative to the Google/Apple duopoly now is your chance to run SailfishOS on very decent and affordable midrange hardware. Below is a video of the XA2 Plus running SailfishOS (not mine).

You can buy the image here (EUR 49,90) and installation instructions are here.

I unlocked and flashed the device on openSUSE Leap 15.0 using the android-tools package from this home repo https://download.opensuse.org/repositories/home:/embar-:/Lietukas/openSUSE_Leap_15.0/

What is SailfishOS?

SailfishOS is a proper GNU/Linux OS for mobile devices, developed in Finland. It’s the successor of Nokia’s MeeGo, so to speak. It has a cool, smooth and efficient swipe-based interface. Behind the scenes it’s using Qt, Wayland, RPM, BlueZ, systemd, PulseAudio etc. as well as the openSUSE technologies libzypp and Open Build Service.


Tuesday
27 August, 2019


face

Intro

If you think this is too much text and sounds far too complicated, or you are not interested in background information: install openSUSE Kubic and go directly to the “Deploy Kubernetes” section. There is not really an easier way to deploy Kubernetes!

Why should I want to use kubic-control?

We have kubeadm on openSUSE Kubic to manage Kubernetes, so why do we need yet another management tool? There is a nice blog post giving an answer to this: Automated High Availability in kubeadm v1.15: Batteries Included But Swappable, which explains, besides Kubernetes multi-master, also the scope of kubeadm very well: “kubeadm is focused on performing the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way. kubeadm’s scope is limited to the local machine’s filesystem and the Kubernetes API, and it is intended to be a composable building block for higher-level tools.”

The following tasks are out of scope for kubeadm:

  • Infrastructure provisioning
  • Third-party networking
  • Non-critical add-ons, e.g. monitoring, logging and visualization
  • Specific cloud provider integrations

kubic-control is such a higher-level toolkit. It configures the host OS (in the future, it will even be able to install new bare-metal nodes with the help of yomi), sets up all necessary services, uses kubeadm to deploy Kubernetes, and installs and configures all necessary add-ons, like the pod network and the update and reboot services for the OS and the cluster. It will also help you avoid mistakes like not calling kubeadm with the right arguments if you want to use flannel for the network.

What is kubic-control?

kubic-control consists of three binaries:

  • kubicd, a daemon which communicates via gRPC with clients. It sets up Kubernetes on openSUSE Kubic, including the pod network, kured, transactional-update, …
  • kubicctl, a cli interface
  • haproxycfg, a cli interface to adjust haproxy.cfg for use as a load balancer for the Kubernetes API

The communication is encrypted and the kubicctl command can run on any machine. The user authenticates with their certificate, and RBAC is used to determine if the user is allowed to call a given function. kubicd uses kubeadm and kubectl to deploy and manage the cluster, so the admin can also modify the cluster with these commands at any time; there is no hidden state database except for the information necessary for a Kubernetes multi-master/HA setup.
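
As a purely illustrative sketch (the subcommand names are assumptions recalled from the kubic-control documentation and should be verified with kubicctl --help), a typical workflow might be:

# initialize the first master through kubicd
kubicctl init
# join an already salt-managed machine as a worker node
kubicctl node add worker1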

Requirements

Mainly generic requirements from Kubernetes itself:

  • All the nodes in the cluster must be on the same network and be able to communicate directly with each other.
  • All nodes in the cluster must be assigned static IP addresses. Using dynamically assigned IPs will break cluster functionality if the IP address changes.
  • The Kubernetes master node(s) must have valid Fully-Qualified Domain Names (FQDNs), which can be resolved both by all other nodes and from other networks which need to access the cluster.
  • Since Kubernetes mainly works with certificates and tokens, the time on all Nodes needs to be always in sync. Else communication inside the cluster will break.

As salt is used for the communication, the Admin Node


face

In the last couple of weeks we continued tackling the typography issues in the new UI. Continued what we started… We looked for the following issues: Looking for different font-sizes and reducing the number of different font-sizes per page to at most 3. Looking for color contrasts which could be bad for people with visual issues. Reducing the usage of small classes for buttons and other components if not necessary. Reworking the hierarchy of the...


Sunday
25 August, 2019


face

This is a little outside of my normal blatherings format, but after stumbling upon a video from Red Robbo’s YouTube channel, I wanted to investigate his claim that maybe, just maybe, the security mitigations that I have chosen are a bit excessive for my use case. Recently, openSUSE has added a feature to make this easily user adjustable. Since they made it easy, obviously someone far smarter than I am has decided that some of the mitigations may be excessive and not worth the performance loss for all use cases. I wrote about the mitigations some time ago and how it is fun to see all that is being implemented. Maybe it’s time to dial it back.

This is the video that made me pause and think about the choices I’ve made.

Red Robbo made the statement, “how many people are actually impacted by this, not potentially impacted but actually…”

Fair statement: what is my actual risk, not imaginary but actual risk? So that got me thinking. My setup has been to keep the mitigations on “Auto”. That seems fair to me: let the system decide how many mitigations I need to have in place. Then this video came out and it got me thinking…

“How many mitigations do I really need to have to protect my system?”
“What are the threats against my main machine, a laptop, that does not run any services?”
“How much of a performance improvement would I have if I switched the mitigations off?”

According to SUSE, by leaving the mitigations to Auto, “All CPU side channel mitigations are enabled as they are detected based on the CPU type. The auto-detection handles both unaffected older CPUs and unaffected newly released CPUs and transparently disables mitigations. This options leave SMT enabled.”

It was time to explore this further. Do some, self-discovery, as it were.

In reading all the CVEs on the subject, they are worded as either, “Local attacker”, “In theory”, “…a possible approach”, “could be made to leak”.

I couldn’t help but think, golly, this is all… speculative… isn’t it. I now wonder what the actual threat is. I appreciate how the fixes were very much preemptive before any attacks were made but it almost seems like building my house so that it is meteor proof, just in case of meteor strike.

What I’ve done

So I did as Red Robbo suggested, though not on all my machines, just on some of them: I shut the mitigations off. I am not on anyone’s target list. I don’t run any kind of service that has tons of people in the system, and it doesn’t often face the scary internet directly, as it is going through a firewall that filters most of the scary traffic away. Making the change was really quite easy and underscores the beauty of YaST. To get to the right module, I go into YaST and select the Boot Loader module under System.

Within the
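
The post is truncated here. As a hedged aside, independent of the YaST Boot Loader module, the kernel itself reports the current mitigation state under sysfs, which is a quick way to verify the change after a reboot:

# show which mitigations are active, or whether the CPU is reported as vulnerable
grep . /sys/devices/system/cpu/vulnerabilities/*
# confirm the parameter actually made it onto the kernel command line
cat /proc/cmdline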


Saturday
24 August, 2019


face

In full disclosure, Plasma is my desktop environment of choice; it is very easy to customize and to make my own with very little effort. As of late, there isn’t a whole lot of customizing I do, it’s all pretty minor. A couple of tweaks to the visuals, make it dark, change some sound effects to make it more Star Trek: The Next Generation, add a couple of Plasmoids and set up KDE Connect. Then I am ready to go.

Since KDE 3 and later Plasma, each release adds and refines existing features, and it seems as though they are doing so in a sustainable fashion. New releases of Plasma are always met with excitement and anticipation. I can count on new features and refinements and an overall better experience. I didn’t look anywhere else, but then Xfce wandered into my world and, although slow to change, has become that kind of desktop too. Historically, Xfce has been [for me] just there, nothing particularly exciting. It has held the spot of a necessary, minimal viable desktop… but not anymore.

Previous Xfce Experiences

Using Xfce was like stepping back in time to an era of awkward looking computer innocence, where icons were mismatched and widgets were a kind of grey blockiness with harsh contrasting lines. Such a great time… While KDE Plasma and Gnome moved on, working in new visuals and staying “modern,” Xfce did its own thing… or nothing… I don’t really know, but it, in my eyes, became the dated desktop environment. It was always rock solid but wasn’t much to look at. To be fair, there were some examples of real decent looking expressions of Xfce but I unfairly dismissed it.

New Experiences with Xfce

I started to do a little distro and desktop hopping, not to replace my preferred setup, openSUSE Tumbleweed with Plasma, but to see what else is out there and to play with some other examples of desktop design and experience. One such example that I really enjoyed was MX Linux.

It is a clean and pleasant experience that doesn’t scream 2002. The configuration options are plentiful and easy to understand. Not to mention the Dark theme looks simply fantastic. Then there is Salient OS, which has a slick and modern look. It doesn’t look like Plasma, but it looks like the present and doesn’t make you think of the traditional Xfce environment.

Then came Endeavour OS where, for just a moment, I thought I was using Plasma. It is truly a slick Xfce environment with some great choices for appearance.

Although 4.12 was released in 2015 and some speculated that the project was dead, new life was breathed into the project and just recently (Aug 2019), version 4.14 was released.

Xfce’s latest release didn’t take away features or trim out functionality. It only added new features and refined the whole desktop. Most notably, a complete (I think) move to GTK3


Friday
23 August, 2019


face

Ahoy! openSUSE Xfce team is pleased to announce that the long awaited Xfce 4.14 has been released for Tumbleweed.

After a long development cycle (4 years!), all of the core components and applications have been ported to GTK 3.

Among the main new features and improvements, the xfwm4 window manager has finally gained support for VSync, HiDPI, hardware GLX and various compositor improvements.

You can check out the neat new features in the official Xfce 4.14 tour and the official release announcement.

openSUSE Changes

For openSUSE, we continued to polish the default experience by adding new packages that complete the desktop and make it more approachable to new users.

We:

– Switched to xfce4-screensaver, the new Xfce screenlocker, from xscreensaver

– Added xfce4-panel-profiles, a tool to back up and restore your panel layout configuration as well as layout presets

– Added mugshot, a tool to easily input personal information and a user avatar. It is integrated into the Whisker Menu

– Added lightdm-gtk-greeter-settings, a tool to easily configure LightDM

– Added gnome-disk-utility, a disk management tool that allows you to partition disks and mount ISO files
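
As a hedged side note, on openSUSE the whole desktop can typically be pulled in via the Xfce pattern:

sudo zypper in -t pattern xfce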

New GTK Theme

In the process of updating to Xfce 4.14, we decided that we wanted to have our very own GTK theme. Thus, Greybird Geeko was born.

Based on the popular Greybird Xfce theme, Greybird Geeko is an official spin with an openSUSE look & feel and other improvements, such as a dark variant of the theme. 

A special shout out to Carson Black who carried out the work and maintains this theme! For a quick overview, please check out the screenshots.

A big “thank you” to everyone who got involved in this release! 

More information about Xfce on openSUSE is available at https://en.opensuse.org/Portal:Xfce.


face

I watched American Factory on Netflix at Stupid-o’clock this morning https://www.youtube.com/watch?v=m36QeKOJ2Fc Some of the workers just don’t learn, do they? One was complaining that they used to earn $29 an hour with GM while currently earning $14 an hour with Fuyao. Maybe that’s why GM shut the factory in the first place? My favourite was … Continue reading "American Factory"


Thursday
22 August, 2019


face

If you are into devops or someone who takes care of the backend of NGINX, you will notice there is a limited set of tools to read / transform NGINX access or error log files into colorized, readable output, compared to Httpd / Apache, which have tonnes of scripts and tools to colorize your logs.

The last time I configured NGINX was 4 years ago, and yesterday I set up a webserver for my client using NGINX instead of Httpd / Apache. Seven minutes after port 80 was opened to the public, I noticed slowness on my client’s VPS machine, so I assumed maybe some massive bots were scanning this webserver for some reason.

I opened the NGINX log with my favourite less -R command line and it felt awful and awkward. Do you know why? Because I have been involved with a lot of programming frameworks and tools that offer me ANSI colorized logs.

If you are just like me, then I suggest you install ccze! Here is some current info from the Fedora repository:

[rnm@robbinespu ~] $ sudo dnf info ccze
Last metadata expiration check: 0:00:13 ago on Thu 22 Aug 2019 11:55:41 AM +08.
Available Packages
Name         : ccze
Version      : 0.2.1
Release      : 22.fc30
Architecture : x86_64
Size         : 81 k
Source       : ccze-0.2.1-22.fc30.src.rpm
Repository   : fedora
Summary      : A robust log colorizer
URL          : http://bonehunter.rulez.org/CCZE.html
License      : GPLv2+
Description  : CCZE is a roboust and modular log colorizer, with plugins for apm,
             : exim, fetchmail, httpd, postfix, procmail, squid, syslog, ulogd,
             : vsftpd, xferlog and more.

Like it says, this is a robust and modular log colorizer and comes with a few plugins. It is available in the Fedora, Debian, Ubuntu, CentOS, openSUSE and other distro repositories.

Just install it:

$ sudo dnf install ccze #for Red Hat/CentOS/Fedora based
$ sudo apt install ccze #for Debian/Ubuntu based
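
And, since this is Planet openSUSE, the equivalent on openSUSE would presumably be:

$ sudo zypper install ccze #for openSUSE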

and to use it, you need to open your file reader and pipe its output into ccze. For example:

$ sudo less -R /var/log/nginx/access.log | ccze -A | less -R 

NGINX access log with ccze

You may check the help for more options on how to manipulate and use ccze:

$ ccze --help
Usage: ccze [OPTION...]
ccze -- cheer up 'yer logs.

  -a, --argument=PLUGIN=ARGS...   Add ARGUMENTS to PLUGIN
  -A, --raw-ansi             Generate raw ANSI output
  -c, --color=KEY=COLOR,...  Set the color of KEY to COLOR
  -C, --convert-date         Convert UNIX timestamps to readable format
  -F, --rcfile=FILE          Read configuration from FILE
  -h, --html                 Generate HTML output
  -l, --list-plugins         List available plugins
  -m, --mode=MODE            Change the output mode
                             (Available modes are curses, ansi and html.)
  -o, --options=OPTIONS...   Toggle some options
                             (such as scroll, wordcolor and lookups,
                             transparent, or cssfile)
  -p, --plugin=PLUGIN        Load PLUGIN
  -r, --remove-facility      remove syslog-ng's facility from start of the
                             lines
  -?, --help                 Give this help list
      --usage                Give a short usage message
  -V, --version              Print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

Report bugs to <algernon@bonehunter.rulez.org>.

Cheers and have fun!


Wednesday
21 August, 2019


face

Endeavour OS is the unofficial successor to Antergos. I’ve never used Antergos, so I cannot make any comparisons between the two. It should also be noted that I think Arch Linux, in general, is more work than it is worth, so this won’t exactly be a shining review. Feel free to bail here if you don’t like the direction of my initial prejudice.

I am reviewing Endeavour OS as a rather biased openSUSE Linux user that is firmly entrenched in all things openSUSE. I am going at this from the perspective that my computer is my companion, my coworker or assistant in getting my digital work done and some entertainment sprinkled in there as well.

Bottom Line Up Front: If you want to run main-line Arch, Endeavour OS is absolutely the way to get going with it. They take the “Easy Plus One” approach to Arch by allowing you to install what I would consider a minimal but very usable base and learn to use “genuine Arch” with all the triumphs and pitfalls. If you want to go Arch, I can most certainly endorse this as the route to do so. However, even after playing here for two weeks, I find Arch to be more trouble than it is worth but a great educational experience.

Installation

Installing Arch using the “Arch Method” from the Wiki is pretty obtuse. Following it step by step is not clear and leaves too many aspects ambiguous. It should NOT be a “beginners guide” at all. Thankfully, the Endeavour OS installer bypasses the nonsense so you can get going with Arch.

The media will boot quickly and you are given a shiny desktop with a window open. There are two tabs; the first tab has two selections: one for access to offline information and the second for information on the Endeavour OS website. The second tab will allow you to create partitions and to install Endeavour OS to the disk.

Should you choose to make modifications to the existing file system, you can do so from here using the GParted tool.

Since I set this up to be on a virtual machine, I intended on using the entire disk, so no partitioning was necessary. Selecting Install EndeavourOS to disk initiates the installer. It will start out requesting your language, then your location.

Next is the keyboard layout and your partitioning preference. Since this is a simple setup, I selected to erase the disk to meet my testing requirements.

Lastly, the user, computer hostname and passwords will be entered. The last step is the summary and a final sanity check. Not a single step was difficult in this process. It was all very straightforward.

The installation proceeds rather quickly and some rather enjoyable propaganda is presented, one piece questioning your disposition towards the terminal.

Once the installation is complete, I restarted the system to boot into the newly installed Arch Linux based operating system.

First run and Impressions

Something that is most noteworthy was the


Monday
19 August, 2019


Richard Brown: Changing of the Guard

08:00 UTC

face

Dear Community,

After six years on the openSUSE Board and five as its Chairperson, I have decided to step down as Chair of the openSUSE Board effective today, August 19.

This has been a very difficult decision for me to make, with reasons that are diverse, interlinked, and personal. Some of the key factors that led me to make this step include the time required to do the job properly, and the length of time I’ve served. Five years is more than twice as long as any of my predecessors. The time required to do the role properly has increased and I now find it impossible to balance the demands of the role with the requirements of my primary role as a developer in SUSE, and with what I wish to achieve outside of work and community. As difficult as it is to step back from something I’ve enjoyed doing for so long, I am looking forward to achieving a better balance between work, community, and life in general.

Serving as member and chair of the openSUSE Board has been an absolute pleasure and highly rewarding. Meeting and communicating with members of the project as well as championing the cause of openSUSE has been a joyous part of my life that I know I will miss going forward.

openSUSE won’t get rid of me entirely. While I do intend to step back from any governance topics, I will still be working at SUSE in the Future Technology Team. Following SUSE’s Open Source policy, we do a lot in openSUSE. I am especially looking forward to being able to focus on Kubic & MicroOS much more than I have been lately.

As I’m sure it’s likely to be a question, I wish to make it crystal clear that my decision has nothing to do with the Board’s ongoing efforts to form an independent openSUSE Foundation.

The Board’s decision to form a Foundation had my complete backing as Chairperson, and will continue to have it as a regular openSUSE contributor. I have absolute confidence in the openSUSE Board; indeed, I don’t think I would be able to make this decision at this time if I wasn’t certain that I was leaving openSUSE in good hands.

On that note, SUSE has appointed Gerald Pfeifer as my replacement as Chair. Gerald is SUSE’s EMEA-based CTO, with a long history as a Tumbleweed user, an active openSUSE Member, and upstream contributor/maintainer in projects like GCC and Wine.

Gerald has been a regular source of advice & support during my tenure as Chairperson. In particular, I will always remember my first visit to FOSDEM as openSUSE Chair. Turning up more smartly dressed than usual, I was surprised to find Gerald, a senior Director at SUSE, diving in to help at the incredibly busy openSUSE booth, and doing so dressed in quite possibly the oldest and most well-loved openSUSE T-shirt I’ve ever seen. When booth visitors came


Sunday
18 August, 2019


face

The lucky winner of the openSUSE Jeopardy and the Geeko
AppArmor Crash Course slides, 2019 edition.
Last weekend, I was at FrOSCon - a great Open Source conference in Sankt Augustin, Germany. We (Sarah, Marcel and I) ran the openSUSE booth, answered lots of questions about openSUSE and gave the visitors some goodies - serious and funny (hi OBS team!) stickers, openSUSE hats, backpacks and magazines featuring openSUSE Leap. We also had a big plush geeko, but instead of doing a boring raffle, we played openSUSE Jeopardy where the candidates had to ask the right questions about Linux and openSUSE for the answers I provided.

To avoid getting bored ;-) I did a sub-booth featuring my other two hobbies - AppArmor and PostfixAdmin. As expected, I didn't get too many questions about them, but it was a nice addition and side job while running the openSUSE booth ;-)

I also gave an updated version of my "AppArmor Crash Course" talk. You can find the slides on the right, and the video recording (in German) on media.ccc.de.


Saturday
17 August, 2019


Klaas Freitag: ownCloud and CryFS

20:30 UTC

face

It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if that one is not running in a controlled environment, or if one just wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don’t agree because it combines two very complex topics into one code base and makes the code difficult to maintain. The risk is high to end up with a kind of code base which nobody is able to maintain properly any more. So let’s better avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you can not use the encrypted files in the web interface because it can not decrypt the files easily. To me, that is not overly important because I want to sync files between different clients, which probably is the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and also in principle works well with file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced if one bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay file system changes the crypto files in the cipher dir. These are synced by the ownCloud client.
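
A minimal sketch of that layout, assuming the ownCloud client is configured to sync ~/ownCloud/cipher (all directory names here are just examples):

# mount the decrypted view; the ciphertext lives inside the synced directory
cryfs ~/ownCloud/cipher ~/Private
# work in ~/Private; the ownCloud client only ever sees ~/ownCloud/cipher
# unmount the FUSE filesystem when done
fusermount -u ~/Private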

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.

The reason for that is that CryFS is chunking all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial because file names and sizes are not reconstructable in the cipher dir, but it hits one of the weak sides of the ownCloud sync. ownCloud is traditionally a bit slow with many small files spread over many directories. That shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory makes the ownCloud client upload for 6:30 minutes.

Adding another four files with a total size of a bit over 1MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case like changing an existing office text document locally is not that bad. CryFS splits an 8.2 kB LibreOffice text doc into three 16 kB files in three directories here. When one word gets inserted, CryFS needs to create three new dirs in


face

I’m one of those people you meet in life that ‘things’ happen to. I suppose I should be grateful; I can at least say my life is never boring? Every Saturday Charlie and I, Charlie is a dog, by the way, go to put fuel in my car and then we have a little ride … Continue reading "I’m staying in today!"


Friday
16 August, 2019


face

Dear Tumbleweed users and hackers,

Week 2019/33 ‘only’ saw three snapshots being published (3 more were given to openQA but discarded).  The published snapshots were 0809, 0810 and 0814 containing these changes:

  • NetworkManager 1.18.2
  • FFmpeg 4.2
  • LibreOffice 6.3 (RC4)
  • Linux kernel 5.2.7 & 5.2.8
  • Libinput 1.14

And those updates are underway for the next few snapshots:

  • Addition of ‘openSUSE-welcome’
  • KDE Applications 19.08.0
  • KDE Frameworks 5.61.0
  • Replace ’pkg-config’ with a more modern implementation, ’pkgconf’. The change is supposed to be transparent, once things are all worked out.
  • GLibc 2.30
  • cmake 3.15.x (breaks nfs-ganesha)
  • Swig 4 (still waiting for a libsolv fix 🙁 )
  • Inclusion of python-tornado 5 & 6
  • gawk 5.0: Careful: we’d seen some build scripts failing already
  • util-linux 2.34: no longer depends on libuuid. Some packages assumed libuuid-devel being present to build. Those will fail.

Thursday
15 August, 2019


face

Kata

Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.

We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.

It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.

What is Kata

As already mentioned, Kata is a container runtime focusing on security and on ease of integration with the existing containers ecosystem. If you are wondering what’s a container runtime, this blog post by Sascha will give you a clear introduction about the topic.

Kata should be used when running container images whose source is not fully trusted, or when allowing other users to run their own containers on your platform.

Traditionally, containers share the same physical and operating system (OS) resources with host processes, and specific kernel features such as namespaces are used to provide an isolation layer between host and container processes. By contrast, Kata containers run inside lightweight virtual machines, adding an extra isolation and security layer that minimizes the host attack surface and mitigates the consequences of a container breakout. Despite this extra layer, Kata achieves impressive runtime performance thanks to KVM hardware virtualization, and when configured to use a minimalist virtual machine manager (VMM) like Firecracker, a high density of microVMs can be packed on a single host.

If you want to know more about Kata features and performances:

  • katacontainers.io is a great starting point.
  • For something more SUSE oriented, Flavio gave an interesting talk about Kata at SUSECON 2019.
  • Kata folks hang out on katacontainers.slack.com, and will be happy to answer any questions.

Why is it important for Kubic and openSUSE

SUSE has been an early and relevant open source contributor to containers projects, believing that this technology is the future way of deploying and running software.

The most relevant example is the openSUSE Kubic project, which is a certified Kubernetes distribution and a set of container-related technologies built by the openSUSE community.

We have also been working for some time on well known container projects, like runC, libpod and CRI-O, and for about a year now we have also been collaborating with Kata.

Kata complements other, more popular ways to run containers, so it makes sense for us to work on improving it and to ensure it plugs smoothly into our products.

How to use

While Kata may be used as a standalone piece of software, its intended use is to serve as a runtime when integrated into a container engine like Podman or CRI-O.

This section shows a quick and easy way to spin up a Kata container using Podman on openSUSE Tumbleweed.

First, install the Kata packages:

$ sudo zypper in katacontainers

Make sure your system is providing the needed set of hardware virtualization features required by Kata:

$ sudo kata-runtime 
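
The post is truncated here by the aggregator. As a hedged sketch (the runtime path and container image are assumptions), launching a container under Kata with Podman could then look like:

$ sudo podman run --rm -it --runtime /usr/bin/kata-runtime opensuse/tumbleweed uname -r

Running uname -r inside the container is a quick way to see that it is using the lightweight VM’s guest kernel rather than the host kernel.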

Wednesday
14 August, 2019


face

Given that the main development workflow for most kernel maintainers is with email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle a lot of the “grungy” work in dealing with IMAP clients, email parsing, and other fun things when it comes to the free-flow text parsing that emails require.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck, it’s just that mutt sucks less than everything else (that’s literally their motto).

I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository
  • reviewing potential stable kernel patches and applying them to a source repository

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow of working with the stable tree is much different from the new-patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. The first is for everything that is sent directly to me (either with To: or Cc:), as well as a number of mailing lists that I read in full because I am a maintainer of those subsystems (like USB or stable). The second inbox consists of other mailing lists that I do not read every message of, but review as needed and can reference when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, so that I have a local copy to search when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a local copy of, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using mbsync, which


face

July and August are very sunny months in Europe… and chameleons like sun. That’s why most YaST developers run away from their keyboards during this period to enjoy vacations. Of course, that has an impact on the development speed of YaST and, as a consequence, on the length of the YaST Team blog posts.

But don’t worry much, we still have enough information to keep you entertained for a few minutes if you want to dive with us into our summer activities, which include:

  • Enhancing the development documentation
  • Extending AutoYaST capabilities regarding Bcache
  • Lots of small fixes and improvements

AutoYaST and Bcache – Broader Powers

Bcache technology made its debut in YaST several sprints ago. You can use the Expert Partitioner to create your Bcache devices and improve the performance of your slow disks. We even published a dedicated blog post with all details about it.

Apart from the Expert Partitioner, AutoYaST was also extended to support Bcache devices. And this time, we are pleased to announce that … we have fixed our first Bcache bug!

Actually, there were two different bugs on the AutoYaST side. First, the auto-installation failed when you tried to create a Bcache device without a caching set. Second, it was not possible to create a Bcache with an LVM Logical Volume as the backing device. Both bugs are gone, and now AutoYaST supports those scenarios perfectly.

Configuring Bcache and LVM with AutoYaST

But Bcache is quite a young technology and it is not free of bugs. In fact, it fails when the backing device is an LVM Logical Volume and you try to set the cache mode. We have already reported a bug to the Bcache crew and (as you can see in the bug report) a patch is already being tested.
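
For reference, the cache mode of an existing Bcache device is exposed through sysfs; a minimal sketch, assuming the device is bcache0, looks like this:

# show the available cache modes (the active one is shown in brackets)
$ cat /sys/block/bcache0/bcache/cache_mode
# switch to writeback mode
$ echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode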

Enhancing Our Development Documentation

This sprint we also touched our development documentation; specifically, we documented our process for creating the maintenance branches for released products. The new branching documentation describes not only how to actually create the branches but also how to adapt all the infrastructure around them (like Jenkins or Travis), which requires special knowledge.

We will see how helpful the documentation is the next time somebody has to go through the branching process for a release. 😉

Working for a better world YaST

We do our best to write code free of bugs… but some bugs are smarter than us and they manage to survive and reproduce. Fortunately we used this sprint to do some hunting.


face

Version 7 of the Elastic stack was released a few months ago, and brought several breaking changes that affect syslog-ng. In my previous blog post, I gave details about how it affects sending GeoIP information to Elasticsearch. From this blog post you can learn about the Kibana side, which has also changed considerably compared to previous releases. Configuration files for syslog-ng are included, but not explained in depth, as that was already done in previous posts.

Note: I use a Turris Omnia as the log source. Recent software updates made the Elasticsearch destination of syslog-ng available on the Turris Omnia. On the other hand, you might be following my blog without having access to this wonderful ARM Linux-based device. In the configuration examples below, the Turris Omnia does not send logs directly to Elasticsearch, but to another syslog-ng instance (running on the same host as Elasticsearch) instead. This way you can easily replace the Turris Omnia with a different device, or with loggen, in your tests.

Before you begin

First of all, you need some iptables log messages. In my case, I used the logs from my Turris Omnia router. If you do not have iptables logs at hand, several sample logs are available on the Internet. For example, you can take Anton Chuvakin’s Bundle 2 ( http://log-sharing.dreamhosters.com/ ) and use loggen to feed it to syslog-ng.

Then, you also need a recent version of syslog-ng. The elasticsearch-http() destination was introduced in syslog-ng OSE version 3.21.1 (and PE version 7.0.14).

Last but not least, you will also need Elasticsearch and Kibana installed. I used version 7.3.0 of the Elastic Stack, but any version after 7.0.0 should be fine. Using version 7.1.0 (or later) has the added benefit of having basic security built in for free: neither payment nor installing 3rd-party extensions is necessary.
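
For reference, a minimal elasticsearch-http() destination looks roughly like the sketch below; the URL, index name and template are placeholders that you should adapt to your environment (the complete configurations were covered in the previous posts mentioned above):

# sketch of an elasticsearch-http() destination for Elasticsearch 7
destination d_elasticsearch {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("syslog-ng")
        type("")
        template("$(format-json --scope rfc5424 --scope nv-pairs)")
    );
};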

Network test

It is easy to run into obstacles when collecting logs over the network. Routing, firewalls, SELinux can all prevent you from receiving (and sometimes even sending) log messages. This is the reason why it is worth doing an extra step and test collecting logs over the network. Before configuring Elasticsearch and Kibana, make sure that syslog-ng can actually receive those log messages.

Below is a very simple configuration to receive legacy syslog messages over TCP on port 514 and store them in a file under /var/log. Append it to your syslog-ng.conf, or place it in its own file under the /etc/syslog-ng/conf.d/ directory if your system is configured to use it.

# network source
source s_net {
    tcp(ip("0.0.0.0") port("514"));
};

# debug output to file
destination d_file {
     file("/var/log/this_is_a_test");
};

# combining all of above in a log statement
log {
    source(s_net);
    destination(d_file);
};
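
To test this listener without a real device, you can replay a sample log file with loggen, as mentioned earlier. A rough sketch, assuming a sample file called iptables-sample.log (option names are taken from recent syslog-ng releases, so double-check them against your version):

# send the sample messages unmodified over TCP at 100 messages per second
$ loggen --inet --stream --dont-parse --read-file iptables-sample.log --rate 100 127.0.0.1 514
# watch them arrive
$ tail -f /var/log/this_is_a_test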

Once syslog-ng is reloaded, it listens for log messages on port 514 and stores any incoming message to a file called /var/log/this_is_a_test. Below is a simple configuration file for the Turris Omnia, sending iptables


Monday
12 August, 2019


face

We tackled typography issues after receiving feedback from multiple users.

The Tackled Issues

Besides defining the font-stack, we revisited the homepage, requests, projects and packages looking for the following issues and fixing them:

  • Looking for different font-sizes and reducing the number of different font-sizes per page to at most 3.
  • Looking for color contrasts which could be bad for people with visual issues.
  • Reducing the usage of small classes for buttons and other components if...


Sunday
11 August, 2019


face
Distribution | Forum | Wiki | Community | Membership | Bug Reporting | Mailing List | Chat
MX Linux | Yes | Technical Only | No | No | Yes | No | No
Manjaro | Yes | Yes | No | No | Forum Only | Yes | Yes
Mint | Yes | No | Yes | No | Upstream or Github | No | IRC
elementary | Stack Exchange | No | No | No | Yes | No | Slack
Ubuntu | Yes | Yes | Yes | Yes | Yes | Yes | IRC
Debian | Yes | Yes | Yes | Yes | Yes | Yes | IRC
Fedora | Yes | Yes | Yes | Yes | Yes | Yes | IRC
Solus | Yes | No | Yes | No | Yes | No | IRC
openSUSE | Yes | Yes | Yes | Yes | Yes | Yes | IRC
Zorin | Yes | No | No | No | Forum Only | No | No
deepin | Yes | Yes | No | No | Yes | Yes | No
KDE neon | Yes | Yes | Yes | No | Yes | Yes | IRC
CentOS | Yes | Yes | Yes | No | Yes | Yes | IRC
ReactOS* | Yes | No | Yes | No | Yes | Yes | Webchat
Arch | Yes | Yes | Yes | Yes | Yes | Yes | Yes
ArcoLinux | Yes | No | No | No | No | No | Discord
Parrot | Yes | Debian Wiki | No | No | Forum Only | No | IRC/Telegram
Kali | Yes | No | Yes | No | Yes | No | IRC
PCLinuxOS | Yes | No | No | No | Forum Only | No | IRC
Lite | Yes | No | Yes | Yes | Yes | No | No

*All are Linux distributions except ReactOS

Column descriptions:

  • Distribution: Name of the distro
  • Forum: Is there a support message board?
  • Wiki: Is there a user-editable wiki?
  • Community: Are there any links where I can directly contribute to the project?
  • Membership: Can I become a voting member of the community?
  • Bug Reporting: Is there a way to report bugs that I find?
  • Mailing list: Is there an active mailing list for support, announcements, etc?
  • Chat: Is there a way to talk to other people in the community directly?

What is this list?

These are the top 20 active distributions according to distrowatch.org over the past 12 months.

Things that I learned:

Only well-funded, corporate-sponsored Linux distributions (Fedora, Ubuntu, openSUSE) have all categories checked. That doesn’t mean that anyone is getting paid, but I believe it does mean that employees are probably the chief contributors, and therefore more people are putting in resources to help.

Some distributions are “Pat’s distribution”: Pat’s group owns it, and Pat doesn’t want a steering committee or anyone else having a say in how the distro works, though contributions in the form of bug reports may be accepted.

A few distributions “outsource” resources to other distributions. elementary lets Stack Exchange provide their forum. Parrot Linux refers users to the Debian wiki. Mint suggests that you file bug reports with the upstream provider unless it is a Mint-created application.

There are a few Linux distributions that leave me scratching my head. How is this in the top 20 distros on distrowatch? There’s nothing here and the forum, if there is one, is nearly empty. Who uses this?

What do you want from an open source project?

Do you want to donate your time, make friends, and really help make a Linux distribution grow? Look at Fedora, Ubuntu, openSUSE, or Arch. These communities have ways to help you
