

openSUSE Community Publishes Annual Survey Results

The openSUSE community has published results from the annual community survey.

Participation increased by more than 100 from last year, with 1,320 respondents this time around. While participation increased, the results indicate that a larger share of respondents were located in Europe. Since other regions were less represented, readers should keep this fact in mind while reading the results.

Information was gathered about the project, its distributions, the demographics and how the community is contributing to the project. A number of questions in this year’s survey provided participants with a wider opportunity to express their opinions and satisfaction with the above topics. A document lists all the comments from the survey.

Some modifications were made from the previous year’s survey, but the largest group of users was again between the ages of 35 and 44. An overwhelming majority of survey participants were from the Northern Hemisphere, and almost 500 were in the UTC+1 time zone before the recent switch to daylight saving time, which puts most of them in Central European Time. Almost 60 percent of surveyees had a university degree, and more than 81 percent of those who answered the question had at least a graduate degree. Surveyees with a job in IT were split about 50/50.

A core takeaway from the survey was that openSUSE distros are well suited for IT professionals and developers. Technologies for professional content creators appear to be growing in usage, according to the results.

More than 40 percent have been using Unix-like operating systems for 10 years, and less than 3 percent have been using one for less than a year. A majority of openSUSE’s desktop users favored Tumbleweed, while those favoring server use opted for Leap. MicroOS use on servers increased by more than 40 percent. Packagers were given much recognition in the survey. An overwhelming percentage of surveyees ranked the official repositories as their favorite source for getting software; these were followed by rpm downloads, Flatpak, AppImages and snap, respectively.

The survey results showed that most would recommend openSUSE distributions to advanced users and were least likely to recommend the distributions for new users. The overall sense was that users were satisfied with the distributions and found them stable and reliable; the distributions and their ease of use were held in high regard, especially for the core server uses of web hosting, databases, containers and virtualization.

A majority of people preferred openSUSE’s forums, reddit and Twitter for technical communication and knowledge sharing about openSUSE. The other platforms were used for more social interaction with the community.

For more details about the annual survey, please visit the results on the openSUSE Wiki.


The government subsidizes religious organizations that practice open religious segregation.

https://www.wykop.pl/link/6612291/rzad-dotuje-organizacje-religijne-ktore-stosuja-jawna-segregacje-religijna/

The whole affair is about a sizable government grant awarded to a university that does not admit non-Catholics. I can live with the church fund, given the supposed communist-era wrongs. But a monetary grant to run a degree program that only Catholics can enroll in? And from the environmental protection fund, no less? That is just like funding private roads for Father Rydzyk…


Ingenuity’s One Year of Mars Flights

Next week it will be a year since Ingenuity made the first flight in the atmosphere of another planet, an atmosphere with about 1% of the density of Earth’s.

Something that sounded unlikely became a great success, with 24 flights so far. To celebrate the occasion I revisited an older project of mine and used more appropriate footage shot by my friend VOPO at Fuerteventura, Canary Islands. I rendered some overlays of the Ingenuity model kindly provided by NASA.

Ingenuity with Perseverance

The music was composed on the Dirtywave M8 Tracker and went straight from the device, with no mastering. The time constraints of weeklybeats are brutal. I used a nod to the Close Encounters of the Third Kind main theme, but because the flight is unrealistically dynamic, it leans on a mangled classic jungle break.

Thanks to the C-Base crew for taking good care of us and to the GNOME Foundation for supporting the travel (mostly on the surface).

Supported by the GNOME Foundation


Blender Eevee is Magic

Mentioning my everlasting adoration of the realtime engine in Blender on Twitter kind of exploded (by my standards), so I figured I’d write a blog post.

Boatswain

Georges wrote a great little app to configure Elgato Stream Deck buttons on non-proprietary platforms. I’ve been following Georges’ ventures into streaming. He’s done an amazing job promoting GNOME development and has been helping with OBS a lot, too.

To make Boatswain work great by default, rather than just presenting people with a monstrous shit work mountain to even get going, Georges needed some graphical assets to have the configurator ship with ready made presets.

Boatswain Screenshot

Sadly I’ve already invested in an appliance that covers my video streaming and recording needs, so I wasn’t going to buy the gear to test how things look. Instead I made the device virtual, and rendered an approximation of it in Blender.

For GNOME wallpapers there are cases where I have to resort to waiting on Cycles, but if you compare this 2-minute render (left) to the 0.5 s render (right), it’s really not worth it in many cases.

Cycles left, Eevee right

Personally, switching to AMD GPUs a couple of years back has removed much of the pain of updating the OS, and the reason I was allowed to do that is Eevee.

There was some interest in how the shader is set up, so I’m happily making the file available for dissection.

To do it properly, I’d probably want to improve the actual display shader to rasterize the bitmaps in a more sophisticated manner than just displaying a bitmap with no filtering. But I’d say even this basic setup has served the purpose of checking the viability of a symbol rendered on a lousy display.


openSUSE Tumbleweed – Review of the week 2022/14

Dear Tumbleweed users and hackers,

Another week has gone by, and despite me claiming we won’t be using openQA anymore (hey, it was April 1st; you should know not to trust anything you read on that day), we are of course very much relying on it. Tumbleweed couldn’t possibly be as stable as it is without the help of openQA and the fabulous team developing and maintaining it. Since last Friday we have thrown a full set of 7 snapshots at openQA and received a ‘go’ back for all of them, so we pushed out 7 snapshots (0331, 0401, 0402, 0403, 0404, 0405, and 0406).

The major changes included in those snapshots were:

  • Linux kernel 5.17.1
  • KDE Plasma 5.24.4
  • SQLite 3.38.2
  • Mesa 22.0.1
  • Pipewire 0.3.49
  • openldap 2.5.9 (upgraded from 2.4.59)
  • dracut 056
  • gcc12: it is providing the base libraries like libgcc_s1 and libstdc++6, but it is not yet the default compiler. That will come a bit later
  • procps 4.0.0: note: a few issues have been found that could be attributed to this, like salt not being able to parse the resulting output when distributing sysctl values
  • autoconf 2.71: a bunch of packages fail to build now. A common thing I’d seen is gettext translation catalogs wrongly being installed to /usr/locale (instead of /usr/share/locale). Most packages seem to be old and rather under-maintained (most, not all!)
  • Full RelRO (-z now) has been enabled by default for all builds. Partial RelRO was enabled around the timeframe of SUSE Linux 10.1, so it is about time to move one level up

Currently, we’re testing these changes in the staging areas:

  • LLVM 14
  • Podman 4.0.3
  • Rust 1.60
  • Pytest 7
  • GCC 12 as default compiler

Tumbleweed to Get New Default GCC

A new default GNU Compiler Collection for openSUSE Tumbleweed is set to follow one of the snapshots that rolled out this week.

Snapshot 20220405 prepares the default compiler switch to GCC 12. GCC 12 now provides the base libraries such as libgcc, but it is not yet the default compiler; that will follow at a later date.

The most recent snapshot, 20220406, updated five packages. One of those was autoconf 2.71. Configuration scripts from the latest autoconf improved compatibility with the Clang compiler, and compatibility was restored with automake’s rules for regenerating a configuration. The Linux SCSI target framework package tgt updated to version 1.0.82 and added support for listening on a random port. Other packages to update in the snapshot were xf86-video-dummy 0.4.0 and yast2-slp-server 4.5.0.

The 20220405 snapshot prepares GCC 12 to become the default compiler for the rolling release at a later date, and multiple packages were updated in the snapshot. New packages were inherited from GCC 11 with the GCC 12.0.1 update, and the compiler provides the conflicts to glibc crosses, since only one GCC version for the target can be installed at a time; the same was done for libgccjit. Translations were updated in the gedit 42.0 update. The new dracut version no longer uses network-wicked as the default network handler. The update of text-shaping engine harfbuzz 4.2.0 fixed the handling of contextual lookups. A word-selection error in Arabic text was fixed in libreoffice 7.3.2.2, and four crashes under various circumstances and inputs were fixed. Other notable updates in the snapshot were libvirt 8.2.0, aws-cli 1.22.87 and the file synchronizer unison 2.52.0.

The 20220404 snapshot gave Xfce users a file manager update to thunar; the 4.16.11 version fixed a few view-reloading issues, prevented a crash on malformed bookmarks and updated translations. The xclock 1.1.0 update made the rendering of the clock’s hands smoother. The printing package cups-filters 1.28.12 had some resolution and image-size fixes. Other packages updated in the snapshot were xwayland 22.1.1, ceph 16.2.7, yast2-installation 4.4.51 and more.

Support for the Zstandard compression algorithm was added in the 20220403 snapshot with the kdump 1.0.2 update; kdump also removed a few patches for the crash dumps feature of the Linux kernel. New features were added in the update from gnome-logs 3.36.0 to 42.0; some of these include opening journal files directly and several keyboard shortcuts to help with the overlay. There were also window-sizing improvements in the gnome-logs upgrade. The update of libsoup 3.0.6 had miscellaneous HTTP/2 fixes, meson build improvements and fixed build issues with Visual Studio. A few other packages were updated in the snapshot.

The second snapshot of the month, 20220402, had several package updates. The minor update of ImageMagick 7.1.0.28 fixed a couple of buffer overflows, and 3D graphics library Mesa 22.0.1 fixed some maintainer scripts and panfrost drivers. Some documentation was added in the firewalld 1.1.1 update about container host integration, and a build fix was made for the use of dbus inside a container. A Common Vulnerabilities and Exposures entry was addressed in the openvpn 2.5.6 update; the vulnerability could allow a possible authentication bypass in an external authentication plug-in. Bluetooth Advanced Audio Distribution Profile streaming was improved and stuttering was reduced on some devices with the PipeWire 0.3.49 update. Several other packages were updated in the snapshot.

Konqi fans got an update to Plasma 5.24.4 in the 20220402 snapshot; the update provided fixes mostly for KWin and Plasma Workspace. The KWin update fixed window flickering when a clip intersected with the blur region, and Plasma Workspace fixed some weird behavior with the lockscreen. Other packages updated in the snapshot were ncurses 6.3.20220319, expat 2.4.8, sqlite3 3.38.2 and more.


Using the regexp-parser of syslog-ng

For many years, you could use the match() filter of syslog-ng to parse log messages with regular expressions. However, the primary function of match() is filtering. Recent syslog-ng versions now have a dedicated regular expression parser, the regexp-parser(). So, you should use match() only if your primary use case is filtering. Otherwise, use the regexp-parser for parsing, as it is a lot more flexible.
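As a minimal sketch, a regexp-parser() can turn named capture groups into name-value pairs. The source, destination, pattern and field names below are made up for illustration; the option names follow recent syslog-ng releases:

```
# parse "user=NAME" out of the message into ${.reg.user}
parser p_regexp {
  regexp-parser(
    prefix(".reg.")
    patterns("user=(?<user>[^ ]+)")
  );
};

log {
  source(s_local);
  parser(p_regexp);
  destination(d_local);
};
```

With prefix(".reg."), a message containing "user=joe" would yield a ${.reg.user} name-value pair that later filters, templates or destinations can use.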

You can read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/using-the-regexp-parser-of-syslog-ng

syslog-ng logo


Automating my home network with Salt

I'm a lousy sysadmin.

For years, my strategy for managing my machines in my home network has been one of maximum neglect:

  • Avoid distribution upgrades or reinstalls as much as possible, because stuff breaks or loses its configuration.

  • Keep my $HOME intact across upgrades, to preserve my personal configuration files as much as possible, even if it accumulates vast amounts of cruft.

  • Cry when I install a long-running service on my home server, like SMB shares or a music server, because I know it will break when I have to reinstall or upgrade.

About two years ago, I wrote some scripts to automate at least the package installation step, and to set particularly critical configuration files like firewall rules. These scripts helped me lose part of my fear of updates/reinstalls; after running them, I only needed to do a little manual work to get things back in working order. The scripts also made me start to think actively on what I am comfortable with in terms of distro defaults, and what I really need to change after a default installation.

Salt

In my mind there exists this whole universe of scary power tools for large-scale sysadmin work. Of course you would want some automation if you have a server farm to manage. Of course nobody would do this by hand if you had 3000 workstations somewhere. But for my puny home network with one server and two computers? Surely those tools are overkill?

Thankfully that is not so!

My colleague Richard Brown has been talking about the Salt Project for a few years. It is similar to tools for provisioning and configuration management like Ansible or Puppet.

What I have liked about Salt so far is that the documentation is very good, and it has let me translate my little setup into its configuration language while learning some good practices along the way.

I started with the Salt walkthrough, which is pretty nice.

TL;DR: the salt-master is the central box that keeps and distributes configuration to other machines, and those machines are called salt-minions. You write some mostly-declarative YAML in the salt-master, and propagate that configuration to the minions. Salt knows how to "create a user" or "install a package" or "restart a service when a config file changes" without you having to use distro-specific commands.

My home setup and how I want it to be

pambazo - Has a RAID and serves media and stores backups. This is my home server.

tlacoyo - A desktop box, my main workstation.

torta - My laptop, which has seen very little use during the pandemic; in principle it should have an identical setup to my desktop box.

I open the MDNS firewall ports on those three machines so I can use somehost.local to access them directly, without having to set up DNS. Maybe I should learn how to do the latter.

All the machines need my basic configuration files (Emacs, shell prompt), and a few must-have programs (Emacs, Midnight Commander, git, podman).

My workstations need my gitlab/github SSH keys, my Suse VPN keys, some basic infrastructure for development, I need to be able to reinstall and reconstruct them quickly.

All the machines should get the same configuration for the two printers we have at home (a laser that can do two-sided printing, and an inkjet for photos).

My home server of course needs all the configuration for the various services it runs.

I also have a few short-lived virtual machines to test distro images. In those I only need my must-have packages.

Setting up the salt master

My home server works as the "salt master", which is the machine that holds the configuration that should be distributed to other boxes. I am just using the stock salt-master package from openSUSE. The only things I changed in its default configuration were the paths where it looks for configuration, so I can use a git checkout in my home directory instead of /etc/salt. This goes in /etc/salt/master:

# configuration for minions
file_roots:
  base:
    - /home/federico/src/salt-states

# sensitive data to be distributed to minions
pillar_roots:
  base:
    - /home/federico/src/salt-pillar

Setting up minions

It is easy enough to install the salt-minion package and set up its configuration to talk to the salt-master, but I wanted a way to bootstrap that. Salt-bootstrap is exactly that. I can run this on a newly-installed machine:

curl -o bootstrap-salt.sh -L https://bootstrap.saltproject.io
sudo sh bootstrap-salt.sh -w -A 192.168.1.10 -i my-hostname stable

The first line downloads the bootstrap-salt.sh script.

The second line:

  • -w - use the distro's packages for salt-minion, not the upstream ones.

  • -A 192.168.1.10 - the IP address of my salt-master. At this point the machine that is to become a minion doesn't have MDNS ports open yet, so it can't find pambazo.local directly and it needs its IP address.

  • -i my-hostname - Name to give to the minion. Salt lets you have the minion's name different from the hostname, but I want them to be the same. That is, I want my tlacoyo.local machine to be a minion called tlacoyo, etc.

  • stable - Use a stable release of salt-minion, not a development one.

When the script runs, it creates a keypair and asks the salt-master to register its public key. Then, on the salt-master I run this:

salt-key -a my-hostname

This accepts the minion's public key, and it is ready to be configured.

The very basics: set the hostname, open up MDNS in the firewall

I want my hostname to be the same as the minion name. Make it so!

'set hostname to be the same as the minion name':
  network.system:
    - hostname: {{ grains['id'] }}
    - apply_hostname: True
    - retain_settings: True

Salt uses Jinja templates to preprocess its configuration files. One of the variables it makes available is grains, which contains information inherent to each minion: its CPU architecture, amount of memory, OS distribution, and its minion id. Here I am using {{ grains['id'] }} to look up the minion id, and then set it as the hostname.

To set up the firewall for desktop machines, I used YaST and then copied the resulting configuration to Salt:

 1  /etc/firewalld/firewalld.conf:
 2    file.managed:
 3      - source: salt://opensuse/desktop-firewalld.conf
 4      - mode: 600
 5      - user: root
 6      - group: root
 7
 8  /etc/firewalld/zones/desktop.xml:
 9    file.managed:
10      - source: salt://opensuse/desktop-firewall-zone.xml
11      - mode: 644
12      - user: root
13      - group: root
14
15  firewalld:
16    service.running:
17      - enable: True
18      - watch:
19        - file: /etc/firewalld/firewalld.conf
20        - file: /etc/firewalld/zones/desktop.xml

file.managed is how Salt lets you copy files to destination machines. In lines 1 to 6, salt://opensuse/desktop-firewalld.conf gets copied to /etc/firewalld/firewalld.conf. The salt:// prefix indicates a path under your location for salt-states; this is the git checkout with Salt's configuration that I mentioned above.

Lines 15 to 20 tell Salt to enable the firewalld service, and to restart it when either of two files change.

Indispensable packages

I cannot live without these:

'indispensable packages':
  pkg.installed:
    - pkgs:
      - emacs
      - git
      - mc
      - ripgrep

pkg.installed takes an array of package names. Here I hard-code the names which those packages have in openSUSE. Salt lets you do all sorts of magic with Jinja and the salt-pillar mechanism to have distro-specific package names, if you have a heterogeneous environment. All my machines are openSUSE, so I don't need to do that.
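For illustration, a hedged sketch of what such a grains-based lookup could look like in a heterogeneous setup (the Apache httpd package naming is the classic example; the state id is made up):

```yaml
{# pick the distro-specific package name via grains #}
{% if grains['os_family'] == 'RedHat' %}
{% set web_server = 'httpd' %}
{% else %}
{% set web_server = 'apache2' %}
{% endif %}

'web server':
  pkg.installed:
    - name: {{ web_server }}
```

Since the Jinja runs on a per-minion basis, each minion gets the package name appropriate for its own distribution.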

My username, and personal configuration files

This creates my user:

federico:
  user.present:
    - fullname: Federico Mena Quintero
    - home: /home/federico
    - shell: /bin/bash
    - usergroup: False
    - groups:
      - users

This just copies a few configuration files:

{% set managed_files = [
  [ 'bash_profile',  '.bash_profile',         644 ],
  [ 'bash_logout',   '.bash_logout',          644 ],
  [ 'bashrc',        '.bashrc',               644 ],
  [ 'starship.toml', '.config/starship.toml', 644 ],
  [ 'gitconfig',     '.gitconfig',            644 ],
] %}

{% for file_in_salt, file_in_homedir, mode in managed_files %}

/home/federico/{{ file_in_homedir }}:
  file.managed:
    - source: salt://opensuse/federico-config-files/{{ file_in_salt }}
    - user: federico
    - group: users
    - mode: {{ mode }}

{% endfor %}

I use a Jinja array to define the list of files and their destinations, and a for loop to reduce the amount of typing.

Install some flatpaks

Install the flatpaks I need:

'Add flathub repository':
  cmd.run:
    - name: flatpak remote-add --if-not-exists --user flathub https://flathub.org/repo/flathub.flatpakrepo
    - runas: federico

{% set flatpaks = [
   'com.github.tchx84.Flatseal',
   'com.microsoft.Teams',
   'net.ankiweb.Anki',
   'org.gnome.Games',
   'org.gnome.Solanum',
   'org.gnome.World.Secrets',
   'org.freac.freac',
   'org.zotero.Zotero',
] %}

install-flatpaks:
  cmd.run:
    - name: flatpak install --user --or-update --assumeyes {{ flatpaks | join(' ') }}
    - runas: federico
    - require:
      - 'Add flathub repository'

Set up one of their configuration files:

/home/federico/.var/app/org.freac.freac/config/.freac/freac.xml:
  file.managed:
    - source: salt://opensuse/federico-config-files/freac.xml
    - user: federico
    - group: users
    - mode: 644
    - makedirs: True

This last file is the configuration for fre:ac, a CD audio ripper. Flatpaks store their configuration files under ~/.var/app/flatpak-name. I configured the app once by hand in its GUI and then copied its configuration file to my salt-states.

Etcetera

The above is not all my setup; obviously things like the home server have a bunch of extra packages and configuration files. However, the patterns above are practically all I need to set up everything else.

How it works in practice

I have a git checkout of my salt-states. When I change them and I want to distribute the new configuration to my machines, I push to the git repository on my home server, and then just run a script that does this:

#!/bin/sh
set -e
cd /home/federico/src/salt-states
git pull
sudo salt '*' state.apply

The salt '*' state.apply causes all machines to get an updated configuration. Salt tells you what changed on each and what stayed the same. The slowest part seems to be updating zypper's package repositories; apart from that, Salt is fast enough for me.

Playing with this for the first time

After setting up the bare salt-master on my home server, I created a virtual machine and immediately registered it as a salt-minion and created a snapshot for it. I wanted to go back to that "just installed, nothing set up" state easily to test my salt-states as if for a new setup. Once I was confident that it worked, I set up salt-minions on my desktop and laptop in exactly the same way. This was very useful!

A process of self-actualization

I have been gradually moving my accumulated configuration cruft from my historical $HOME to Salt. This made me realize that I still had obsolete dotfiles lying around like ~/.red-carpet and ~/.realplayerrc. That software is long gone. My dotfiles are much cleaner now!

I was also able to remove my unused scripts in ~/bin (I still had the one I used to connect to Ximian's SSH tunnel, and the one I used to connect to my university's PPP modem pool), realize which ones I really need, move them to ~/.local/bin and make them managed under Salt.

To clean up that cruft across all machines, I have something like this:

/home/federico/.dotfile-that-is-no-longer-used:
  file.absent

That deletes the file.

In the end I did reach my original goal: I can reinstall a machine, and then get it to a working state with a single command.

This has made me more confident to install cool toys in my home server... like a music server, which brings me endless joy. Look at all this!

One thing that would be nice in Flatpak

Salt lets you know what changed when you apply a configuration, or it can do a dry run where it tells you what will change without actually modifying anything. For example, Salt knows how to query the package database for each distro and tell you if a package needs to be updated, or if an existing config file is different from the one that will be propagated.

It would be nice if the flatpak command-line tool would return this information. In the examples above, I use flatpak remote-add --if-not-exists and flatpak install --or-update to achieve idempotency, but Salt is not able to know what Flatpak actually did; it just runs the commands and returns success or failure.

Some pending things

Some programs keep state along with configuration in the same user-controlled files, and that state gets lost if each run of Salt just overwrites those files:

  • fre:ac, a CD audio ripper, has one of those "tip of the day" windows at startup. On each run, it updates the "number of the last shown tip" somewhere in its configuration for the user. However, since that configuration is in the same file that holds the paths for ripped music and the encoder parameters I want to use, the "tip of the day" gets reset to the beginning every time that Salt rewrites the config file.

  • When you load a theme in Emacs, say, with custom-enabled-theme in custom.el, it stores a checksum of the theme you picked after first confirming that you indeed want to load the theme's code — to prevent malicious themes or something. However, that checksum is stored in another variable in custom.el, so if Salt overwrites that file, Emacs will ask me again if I want to allow that theme.

In terms of Salt, maybe I need to use a finer-grained method than copying whole configuration files. Salt allows changing individual lines in config files, instead of overwriting the whole file. Copy the file if it doesn't exist; just update the relevant lines later?
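One way to do the "just update the relevant lines" part is Salt's file.replace state, which manages a single line matched by a regular expression instead of the whole file. The path and setting below are hypothetical:

```yaml
'keep only the output directory under Salt control':
  file.replace:
    - name: /home/federico/.config/someapp/someapp.conf
    - pattern: '^output_dir=.*$'
    - repl: 'output_dir=/home/federico/Music'
    - append_if_not_found: True
```

Anything else in the file, like a "tip of the day" counter, would then be left alone on subsequent runs.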

I need to distill my dconf for GNOME programs installed on the system, as opposed to flatpaks, to be able to restore it later.

I'm sure there's detritus under ~/.config that should be managed by Salt... maybe I need to do another round of self-actualization and clean that up.

We have a couple of Windows machines kicking around at home, because schoolwork. Salt works on Windows, too, and I'd love to set them up for automatic backups to the home server.


The cult of Amiga and SGI, or why workstations matter

I’m considered to be a server guy. I had access to some really awesome server machines. Still, when computers come up in discussions, we almost exclusively talk about workstations. Even if servers are an important part of my life, that’s “just” work. I loved the SGI workstations I had access to during my university years. Many of my friends still occasionally boot their 30-year-old Amiga boxes.

The cult of Amiga

One could say that the Amiga was popular in the eighties and early nineties. They were revolutionary desktop computers, ahead of their time in many ways, with way better multimedia capabilities than the x86 PCs of that age and a more responsive operating system as well. By the mid-nineties they had lost their technical advantages and the company went bankrupt. However, thirty years later people still boot these computers, and even now new software is still being developed. These boxes rarely change owners, and when they do, they cost a small fortune.

Once upon a time I worked for Genesi. I was a Linux guy and I worked on quality assurance of Linux on their PowerPC boxes. It took a while for me to realize that at heart they were working on keeping the Amiga dream alive. I tried MorphOS, but compared to openSUSE on the Pegasos, I found it really limited. However, many years later, Linux on 32-bit big-endian POWER is dead. No mainstream Linux distribution officially supports it. MorphOS, on the other hand, is still actively developed; it had a new release a few months ago. And whenever I mention that I worked for Genesi, people praise me for how close I was working to the Amiga dream.

The cult of SGI

I started my IT life with x86: an XT, then a 286, a 486, and so on. I used some really powerful RS/6000 machines remotely at Dartmouth College. I learned the basics of scripting on them, how to exit from vi, and a few more things. But it was just a bit of curiosity, not any kind of attachment. I also had access to Macs there, but I never really liked them, as MacOS felt kind of dumb to me, and has ever since.

Fast forward a few years. Soon after I started university I became part of the student team at the faculty IT department. When a couple of SGI workstations arrived there, I got user access, and soon admin access as well. This is where I first used Netscape Navigator, ran Java applications, and enjoyed running a GUI on a UNIX machine.

Unfortunately, just like the Amiga, SGI was also killed by x86. It took a bit longer; they even tried to embrace x86 and refocus from workstations to servers, but they are also gone now. Their legacy is still with us: I use the XFS file system, originally developed by SGI. The classic SGI desktop is now available on Linux as well: https://docs.maxxinteractive.com/

SGI logo

The x86 takeover

Even if x86 PCs were limited, boring and slow compared to SGI, HP, IBM, DEC or SUN workstations, they had one advantage: price. Cost was a big factor, and with Beowulf clusters people realized that commodity hardware could deliver more performance for less money than large SMP boxes. Like many trends, it grew out of HPC. In the end, non-x86 workstation hardware disappeared, and only SUN and IBM kept producing servers using non-x86 architectures. While at the turn of the century most Linux distributions were available for several architectures, just a decade later most Linux distros were back to supporting only x86, or x86 as the only primary architecture. Many alternative architectures disappeared completely. And while most open source software used to be quite portable, with the focus on x86 much of this easy portability disappeared.

The reason is quite simple: people are passionate about their workstations. Be it at home, like the Amiga, or at the workplace, like the SGI for me, it is almost the same. However, it must be something they have direct access to, not something in a remote server room. As an intern I helped install the fastest server in Hungary, an IBM POWER box larger than a fridge. But to me, installing Linux on a spare IBM RS/6000 43P was a much more interesting experience.

Easily available remote resources for developers are of course useful, but in themselves they do not have as much impact as a workstation on the developer’s desk: work vs. passion. For many years the majority of developers could only use x86-based workstations, and even though the situation is improving, we are still suffering from the results.

The comeback

The comeback of non-x86 computers was largely due to cheap Arm boards and systems becoming available. My first Arm system was a SheevaPlug, then an Arm laptop from Genesi, before the Raspberry Pi arrived. Just as x86 dominance was largely due to price, this hegemony was blasted by cheap Arm hardware.

Standardization of Arm systems started a long time ago. However, standardized systems are unfortunately quite expensive. So, most developers use the cheaper non-standard boards. See my earlier blog: https://peter.czanik.hu/posts/arm-workstation-why-softirion-overdive-still-relevant/

POWER is also making a comeback. In part it is due to the efforts of the OpenPOWER Foundation, providing developers with remote resources. However, the POWER9 workstations by Raptor Computing also have an important role in this. While remote resources are truly useful (just think about my blog from last week about the openSUSE Build Service), developers prefer workstations. I was often told in discussions at various conferences that the passion with which developers resolve problems on their POWER9 workstations has had more impact on the advancement of open source software on POWER than any of the remote resources available. Workstations by Raptor Computing are a lot cheaper than IBM servers, but still out of reach for many. Luckily, there are multiple projects to resolve this problem and provide POWER hardware at more accessible prices: the Libre-SOC project and the Power Pi project by the OpenPOWER Foundation. Hopefully they succeed within a reasonable time frame.


Leap Micro Beta Available for Testers

People browsing through openSUSE’s websites may spot something new on get.opensuse.org.

Leap Micro, currently showing the 5.2 beta version, is for containerized and virtualized workloads. It is immutable, ideal for host-containers, and described as an ultra-reliable, lightweight operating system that experts can use for compute deployments. The community version of Leap Micro is based on SUSE Linux Enterprise Micro and leverages the enterprise-hardened security of its twins, SUSE Linux Enterprise and openSUSE Leap, merging this into a modern, immutable, developer-friendly OS platform.

Leap Micro has several use cases for edge, embedded/IoT deployments and more. Leap Micro is well suited for decentralized computing environments, microservices, distributed computing projects and more. The release will help developers and IT professionals to build and scale systems for uses in aerospace, telecommunications, automotive, defense, healthcare, robotics, blockchain and more. Leap Micro provides automated administration and patching.

Leap Micro has similarities to MicroOS, but Leap Micro does not offer a graphical user interface or desktop version. It is also based on SUSE Linux Enterprise and Leap rather than on a variant of Tumbleweed, on which MicroOS bases its releases.

Leap release manager Luboš Kocman announced the availability of the images on the factory mailing list.

Users wanting to test out Leap Micro should look at the openSUSE wiki page. Those using a preconfigured image with Raspberry Pi or Intel bases should view the combustion and ignition documentation. Users will need to label the volume name ignition on the USB drive. This can be done via a disk utility (format partition) or sudo e2label /dev/sdY ignition. If the config.ign file contains the passwordHash O9h4s2UUtAtok, the password will be password. Leap Micro has the current openSUSE default, which is PermitRootLogin = without-password, so users need to supply a pubkey via combustion; this is a known issue and will be fixed.
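As a sketch, a minimal config.ign setting the root password hash mentioned above might look like this; the structure follows the Ignition configuration format, and the version field is an assumption that depends on the Ignition release shipped in the image:

```json
{
  "ignition": { "version": "3.2.0" },
  "passwd": {
    "users": [
      { "name": "root", "passwordHash": "O9h4s2UUtAtok" }
    ]
  }
}
```

This file goes on the volume labeled ignition, and Ignition applies it on first boot.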

Users should know that zypper is not used with Leap Micro; transactional-update is used instead.