

Paying technical debt in our accessibility infrastructure

This is somewhat of an extended abstract for the talk I want to give at GUADEC.

Currently, very few people work on GNOME's accessibility infrastructure, which is basically what the free desktop ecosystem uses regardless of GUI toolkit. After Oracle acquired Sun Microsystems in 2010, paid work on accessibility virtually disappeared. There is little volunteer work on it, and the accessibility stack has been accumulating technical debt ever since.

What is legacy code? What sort of technical debt is there in our accessibility stack?

  • It has few or no tests.

  • It does not have a reproducible environment for building and testing.

  • It has not kept up with the rest of the system's evolution.

  • Few people know how it works.

It's a vicious circle, and I'd like to help break the loop.

Quick reminder: What does the accessibility infrastructure do?

An able-bodied person with good eyesight uses a desktop computer by looking at the screen, and interacting with things via the keyboard and mouse. GUI toolkits rely very heavily on this assumption, and hardware does, too — think of all the work expended in real-time flicker-free graphics, GPUs, frame-by-frame profilers, etc.

People who can't see well, or at all, or who can't use a regular keyboard in the way applications require them to (only got one hand? try pressing a modifier and a key at the opposite ends of the keyboard!), or who can't use a mouse effectively (shaky hands? arthritis pain? can't double-click? can't do fine motor control to left-click vs. right-click?), they need different technologies.

Or an adapter that translates the assumptions of "regular" applications into an interaction model they can use.

The accessibility stack for each platform, including GNOME, is that kind of adapter.

In subsequent blog posts I'll describe our accessibility infrastructure in more detail.

Times change

I've been re-familiarizing myself with the accessibility code. The last time I looked at it was in the early 2000s, when Sun contracted with Ximian to fix "easy" things like adding missing accessible roles to widgets, or associating labels with their target widgets. Back then everything assumed X11, and gross hacks were used to snoop events from GTK and forward them to the accessibility infrastructure. The accessibility code still used CORBA for inter-process communication!

Nowadays, things are different. When GNOME dropped CORBA, the accessibility code was ported in emergency mode to DBus. GTK3 and then GTK4 happened, and Wayland too. The accessibility infrastructure didn't quite keep up, and now we have a particularly noticeable missing link between the toolkit and the accessibility stack: GTK removed the event snooping code that the accessibility code used to forward all events to itself, and so for example not all features of Orca (the screen reader) fully work with GTK4 apps.

Also, in Wayland application windows don't know their absolute position in the screen. The compositor may place them anywhere or transform their textures in arbitrary ways. In addition, Wayland wants to move away from the insecure X11 model where any rogue application can set itself up as an accessibility technology (AT) and sniff all the events in all applications.

Both our toolkit infrastructure and the world's security needs have changed.

What I have been doing

I've been re-familiarizing myself with how the accessibility stack works, and I hope to detail it in subsequent blog posts.

For now, I've done the following:

  • Added continuous integration to at-spi2-core. Three of the basic modules in the accessibility infrastructure — at-spi2-core, at-spi2-atk, pyatspi2 — didn't have any CI configured for them. Atk has CI; Orca doesn't, but I haven't explored the Orca code yet.

  • I intend to merge at-spi2-core/atk/at-spi2-atk/pyatspi2 into a single repository, since they are very tightly coupled with each other and it will be easier to do end-to-end tests if they are in the same repository. Currently those tests are spread out between at-spi2-atk and pyatspi2.

  • Made the README in at-spi2-core friendlier; turned it into Markdown and updated it for the current state of things.

  • Did a bit of refactoring in at-spi2-core after setting up static analysis in its CI.

  • Did a bit of exploratory refactoring there, but found out that I have no easy way to test it. Hence the desire to make it possible to test all the accessibility code together.

  • Fixed a crash in GTK when one uses gnome-text-editor with the screen reader on. (merge request awaiting review)

  • Currently trying to debug this missing annotation in gnome-shell.
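The repository merge mentioned above can be done while preserving each module's history. Here is a sketch using throwaway repositories; "core" and "atk" are stand-in names, not the actual merge commands that will be used:

```shell
# Merge one git repository into another, keeping both histories.
# Identity is set via env vars so the empty demo commits succeed anywhere.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"
git init -q core && git -C core commit -q --allow-empty -m "core history"
git init -q atk  && git -C atk  commit -q --allow-empty -m "atk history"
cd core
git fetch -q ../atk
# The two repositories share no common ancestor, so git needs this flag:
git merge -q --allow-unrelated-histories -m "Merge atk into core" FETCH_HEAD
git log --oneline    # shows commits from both former repositories
```

A real merge would also move each module's files into a subdirectory first to avoid path collisions, but the `--allow-unrelated-histories` step is the key trick.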

A little call for help

Is anyone familiar with testing things in GitLab CI that need to be launched by dbus-broker? I think this may require either running systemd in the CI container (a cumbersome proposition), or using a virtual machine with systemd instead of a container. Anyway — if you care about Fedora or dbus-broker, please help.


openSUSE Tumbleweed – Review of the week 2022/17

Dear Tumbleweed users and hackers,

Week 17 was filled with snapshots – 7 of them to be precise. We have published snapshots 0421…0427 (with the next ones almost ready for QA).

The main changes included were:

  • TeXLive 2022
  • Postfix 3.6.6
  • Linux kernel 5.17.4
  • jdk-11.0.15+10 (April 2022 CPU)
  • systemd: systemd-boot efi binary is now signed
  • Mesa 22.0.2
  • KDE Gear 22.04.0

Staging projects are currently busy building and testing these things:

  • ffmpeg 5: use ffmpeg 5 as default over ffmpeg 4; results are mixed so far
  • GNOME 42.1
  • cURL 7.83.0
  • Linux kernel 5.17.5
  • some fdupes changes: aid with reproducible builds (order dups by name)
  • GCC 12 as default compiler


GNOME Dynamic Triple Buffering patch on openSUSE

I’ve always wanted “60 fps” silky smooth behavior from both phones and computers — or at least ever since the development of the iconic Nokia N9 and the projects I was working on at the time — and I learned to be sensitive to that. GNOME has struggled a bit on that front for several years on HiDPI displays, and it also regressed somewhat earlier, at least on openSUSE, for reasons I was never able to pinpoint. A combination of power management options helped, as have later kernel and GNOME fixes, so I have been mostly OK but still not happy.

This changed last year, when Daniel van Vugt started offering his awesome work on triple buffering in GNOME, which ensures the right amount of power budget is used for a smooth UI. I backported and adapted the patch to GNOME 40 on Tumbleweed first, then later to GNOME 41. There were a couple of bugs left, and Wayland support worked worse at the time, but I was still so happy with the performance that I couldn’t imagine using GNOME without the patch set anymore.

Now with GNOME 42 the patch set is mature, and I’m using it on Wayland too without obvious issues. It’s great on my laptop’s internal 4K display and on my WQHD (2560x1440) desktop screen. Notably, the patch set is enabled by default in the new Ubuntu 22.04 LTS release, after being in development for 1.5 years. I really hope it will be merged upstream soon, but meanwhile we have the next best option: a patch set that applies directly on top of GNOME 42.

To install it from my OBS repository (note: I will not use that repository for other packages despite the generic name):

zypper ar https://download.opensuse.org/repositories/home:/tjyrinki_suse:/branches:/openSUSE:/Factory/openSUSE_Tumbleweed/home:tjyrinki_suse:branches:openSUSE:Factory.repo
zypper up

The repository automatically updates to the latest changes whenever there’s a new Mutter in Tumbleweed. I will also likely continue to update the repository either with the patch set from upstream, or if rebasing becomes more difficult I can also check the patch set changes applied on top of Ubuntu’s GNOME 42.

For smoother future!


Hardware for a syslog-ng server

What hardware should you use for a syslog-ng server? It is a frequent question with no definite answer. It depends on many factors: the number and type of sources, the number of logs, the way logs are processed, and so on. My experience is that for the majority of users even a Raspberry Pi would be enough. But of course, not for everyone.

You can read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/hardware-for-a-syslog-ng-server



Write kubectl plugins using WebAssembly and WASI

A long time has passed since the last time I wrote something on this blog! 😅 I haven’t been idle during this time, quite the opposite… I kept myself busy experimenting with WebAssembly and Kubernetes.

You have probably already heard about WebAssembly, but chances are high that it was in the context of web application development. There is, however, an emerging trend of using WebAssembly outside of the browser.

WebAssembly has many interesting properties that make it great for writing plugin systems or even distributing small computational units (think of FaaS).

WebAssembly is what is being used to power Kubewarden, a project I created almost two years ago at SUSE Rancher, with the help of Rafa and other awesome folks. This is where the majority of my “blogging energies” have been focused.

Now, let’s go back to the main focus of today’s blog entry: write kubectl plugins using WebAssembly.

The current state of things

As you all know, kubectl can be easily extended by writing external plugins.

These plugins are executables named kubectl-<name of the plugin> that, once put in your $PATH, can be invoked via kubectl <name of the plugin>. This is the same mechanism used to write git plugins.
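As a sketch of the mechanism, a trivial plugin is just an executable on $PATH; the plugin name "hello" below is made up for illustration:

```shell
# Create a minimal kubectl plugin; the name "hello" is hypothetical.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$HOME/bin/kubectl-hello"
export PATH="$HOME/bin:$PATH"

# kubectl discovers it by name, so `kubectl hello` would run it;
# invoking the executable directly produces the same output:
kubectl-hello
```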

These plugins can be managed via a tool called Krew.

The kubectl tool is available for multiple operating systems and architectures, which means these plugins must be available for many platforms.

Can WebAssembly help here?

I think writing kubectl plugins using WebAssembly has the following advantages:

  • Portability: you don’t have to build your WebAssembly-powered plugin for all the possible operating systems and architectures the end users might want.
  • Security: each WebAssembly module is executed inside a dedicated sandbox. Modules cannot see other modules or processes running on the host, and they don’t have access to host resources (filesystem, devices, network). Think of them as containers.
  • Size: the majority of kubectl plugins are written in Go, which produces big binaries. The average size of a kubectl plugin is around 9 MB. WebAssembly, on the other hand, can produce plugins that are half that size.

Last but not least, this sounds like a fun experiment!

Introducing krew-wasm

The idea of writing kubectl plugins with WebAssembly originated during a brainstorming session with Rafa about our upcoming talk for WasmDay EU 2022. The idea kinda “infected” me; I had to hack on it ASAP!!! This is how the krew-wasm project was created.

krew-wasm takes inspiration from Krew, but it does not aim to replace it. Quite the opposite: it’s a complementary tool that can be used alongside Krew.

The sole purpose of krew-wasm is to manage and execute kubectl plugins written using WebAssembly and WASI.

krew-wasm plugins are WebAssembly modules that are distributed using container registries, the same infrastructure used to host container images.

krew-wasm can download kubectl WebAssembly plugins from a container registry and make them discoverable to kubectl. This is achieved by creating a symbolic link for each managed plugin. This symbolic link is named kubectl-<name of the plugin> but, instead of pointing to the WebAssembly module, it points to the krew-wasm executable.
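The dispatch trick can be sketched with a plain shell script standing in for the krew-wasm binary (all paths and the plugin name below are made up):

```shell
# A stand-in for the krew-wasm binary: it looks at the name it was
# invoked under ($0) to figure out which plugin the user asked for.
cat > /tmp/fake-krew-wasm <<'EOF'
#!/bin/sh
plugin="${0##*/kubectl-}"   # strip the path and the "kubectl-" prefix
echo "would load the WebAssembly module for plugin: $plugin"
EOF
chmod +x /tmp/fake-krew-wasm

# One symbolic link per managed plugin, all pointing at the same executable:
ln -sf /tmp/fake-krew-wasm /tmp/kubectl-decoder
/tmp/kubectl-decoder
```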

Once invoked, krew-wasm determines its usage mode which could either be a “direct invocation” (when the user invokes the krew-wasm binary to manage plugins) or it could be a “wrapper invocation” done via kubectl.

When invoked in “wrapper mode”, krew-wasm takes care of loading the WebAssembly plugin and invoking it. krew-wasm works as a WebAssembly host, and takes care of setting up the WASI environment used by the plugin.

I’ll leave the technical details out of this post, but if you want you can find more on the GitHub project page.

Some examples

The POC would not be complete without some plugins to run. Guess what: you can find one right here!

The kubectl decoder plugin dumps Kubernetes Secret objects to the standard output, decoding all the data that is base64-encoded. On top of that, when an x509 certificate is found inside of the Secret, a detailed output is shown rather than the not-so-helpful PEM-encoded representation of the certificate.
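Done by hand, the decoding step the plugin automates looks roughly like this; the Secret name and key below are hypothetical, and the kubectl call is shown as a comment so the base64 step stands on its own:

```shell
# Fetch one field of a Secret and decode it ("my-secret"/"password" are
# made-up names for illustration):
#   kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d
# Secret data is plain base64, so the decoding itself is just:
printf 'c3VwZXItc2VjcmV0' | base64 -d   # prints: super-secret
```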

If you want to experiment with this idea, you can write your plugins using Rust and this SDK.

Summary

This has been a nice experiment. It proves that the combination of WebAssembly and WASI can be used to produce working kubectl plugins.

What’s more interesting is the fact these technologies could be used to extend other Cloud Native projects. Did someone say helm? 😜

There are, however, some limitations, mostly caused by the freshness of WASI. These are documented here. Still, I’m confident things will improve over the coming months. After all, the WebAssembly ecosystem is moving at a fast pace!


21unity: serving open source software in a cloud based on OpenPOWER

The first time I heard about 21unity was when I read the announcement: 21unity Joins OpenPOWER Foundation. I immediately became interested in the company, as it combines two things I am interested in: POWER and open source. Among other things, 21unity has its own cloud based on the POWER platform and provides Nextcloud as a service. I tried to refresh my German knowledge and read their website, but the more I read the more interesting it got and the more questions I had. I have seen from the reactions on Twitter that many people were happy to learn about a new company working with POWER. So, instead of a few quick questions in private, I asked for an interview. Chris Branston of 21unity answered my questions.

21unity Collaboration Cloud

Introduction:

Hi, my name is Chris Branston. I’m 35, living in Nürnberg, Germany, and am the Head of Marketing and Communications for 21unity GmbH. I’m a tech and open source enthusiast, have spent many years working in the IT/open source industry, and have always enjoyed meeting and collaborating with people from around the globe in the many different communities creating awesome and open solutions.

Chris Branston

How was 21unity born?

I only joined 21unity in March 2022, but one of the favorite quotes our CTO likes to mention is from Steve Jobs: “Here's to the crazy ones (…) the ones who see things differently (…) You can quote them, disagree with them, glorify or vilify them, but the only thing you can't do is ignore them. Because they change things.” “Because the people who are crazy enough to think they can change the world, are the ones who do.“

I love the way 21unity tries to find new solutions to everyday problems without all the nonsense and blah-blah we are used to from big corporations, with a sole focus on making IT accessible, easy to use and maybe even fun for our users.

As lateral thinkers in technology, we leave the path of conventional thinking behind to find new and efficient solutions. Solutions that form the foundation for a trusting and long-term cooperation. Our claim is to stand by our customers as an honest partner.

We've learned our trade from scratch, and our team consists of technicians, developers and programmers who were there at the beginning: those who grew up with the Apple II and Lisa and worked with systems from SGI, Cray, Sun and IBM. But generally, we are just as knowledgeable in the Linux or Unix environment as we are in handling Apple devices (Mac) or Windows PCs.

Our mission is to take really complex technology and deliver it to our customers (and with that the end users) in an easy-to-use, stress-free way. While it is vital to use state-of-the-art technology, it’s even more important to create products that do one simple thing – WORK.

Those are the reasons why 21unity was founded and from where we take our everyday motivation to do what we do.

How did you get started with POWER? What was the main reason?

We decided that the only way to deliver consistent services, with full control of what happens where, how data is handled and how secure these systems are, is to build our own data center, which is not connected to any of the big three's networks. This idea might sound a bit crazy at first, but building from scratch has one big benefit: there are no messy integrations, upgrades or other legacy issues, and you get to build exactly what you need and want.

So when we started thinking about architecture and designing the system, we started from the other side: the end-user perspective. We wanted to build a cloud system that is fast, easy to use and, foremost, reliable and secure. Hoping to become very successful and gain many new users in a short period of time, we knew it would be important to use hardware that is scalable and modular and that gives the maximum number of users per core, or, to put it a little more colloquially, gives us more bang for our buck.

The thing about x86 and traditional VMs is that while they are also scalable and manageable, there is only so much speed you can get out of them. In 2022 we have so many new ways of deploying and running workloads on hardware that it wouldn’t have made any sense to deploy in a traditional, or rather monolithic, way.

So when it came down to choosing an architecture, we naturally decided on OpenPOWER, because the POWER architecture has proven itself time and time again in big, mission-critical deployments. OpenPOWER, in our scenario, helps us tackle typical challenges like scalability, high availability and stability of our infrastructure. Since we believe in the benefits of open solutions and the communities building them, we joined the OpenPOWER Foundation, because we want the benefits of a proven technology combined with a global community of enthusiasts and experts. And internally at 21unity, we were always more on the RISC than the CISC side of things ;)

Do you use IBM servers, or from Raptor Computing? Or do you build your own?

We use POWER9 processors combined with hardware that we build ourselves. We will also deliver this hardware to our customers in the future if they don’t want to or can’t use our cloud services and applications. While IBM servers are usually the go-to for these cases, as mentioned in the first question, we like to redefine the common standards and find our own ways of using state-of-the-art technologies to build the best possible services.

Do you have POWER also on the desktop?

No, for the moment we are only working on the server side of things, but our CTO told me that a POWER workstation is definitely on the horizon.

You provide Nextcloud, a well-known open source application, as a service. Is there any other open source software in your portfolio?

Since Nextcloud is a complete collaboration platform that includes many features and functionalities, we are using a multitude of open source projects and technologies. We are also in contact with different projects, like the OpenPOWER Foundation, to connect, collaborate and modify some of the out-of-the-box experiences, to create our own custom-built platform and features oriented around our customers’ feedback and needs.

One of the main open source projects I can mention and that we heavily use is Kubernetes.

Your website and blogs are only available in German. Does it mean that you are active only in Germany or German speaking countries? Do you plan international expansion?

As the marketing expert I really love that question and have already been asked about it a couple of times. We had internal discussions about this and decided to start locally and then grow to an international level at pace. I’ve seen many companies trying to reach the whole world instead of taking things step by step and delivering quality over quantity and reach. But I will definitely start to seed international (English) content to our pages and products, as this is an industry standard and I don’t want to leave anyone out of cool information and technology.


openSUSE Tumbleweed – Review of the weeks 2022/15 & 16

Dear Tumbleweed users and hackers,

As I skipped the review last week due to the prolonged Easter weekend, this week’s review spans two weeks. In this period we have released 11 snapshots (0407…0415, 0419, and 0420). The Easter weekend was a bit slower, as a lot of people were absent. This even showed in the number of incoming submit requests.

The main changes that happened in those two weeks included:

  • Mozilla Firefox 99.0 & 99.0.1
  • Samba 4.16.0
  • Freetype 2.12.0
  • KDE Frameworks 5.93.0
  • Postfix 3.6.5
  • Linux kernel 5.17.2 & 5.17.3
  • Cups 2.4.1
  • LLVM 14
  • openldap 2.6.1
  • Podman 4.0.3

This week, the number of submissions is higher already again and the staging projects are currently filled with:

  • KDE Gear 22.04.0
  • Rust 1.60
  • Pytest 7
  • Linux kernel 5.17.4
  • Poppler 22.04.0
  • GCC12 as default compiler


LLVM, PipeWire, git update in Tumbleweed

There have been three openSUSE Tumbleweed snapshots released since last Thursday.

If the 20220420 snapshot passes openQA, it might be released before this article publishes and push the number of snapshots released to four.

A little less than 10 packages were updated in the 20220419 snapshot. Most of the updates in the snapshot came in the 5.17.3 Linux kernel update. A few KVM fixes were made for x86; there was also one for arm64 that makes sure an event filter isn’t changed. There were also about 30 Direct Rendering Manager changes in the kernel update. Wine applications using the JACK backend should no longer crash with the pipewire 0.3.50 update. The audio and video package update also had a change that ensures Advanced Linux Sound Architecture will now only allocate a buffer size big enough to hold four times the quantum limit instead of as large as possible. The update of the libnl3 3.6.0 package added Generic Routing Encapsulation and Virtual Tunnel Interface support for IPv6, and both yast2-trans and libstorage-ng 4.5.4 updated Slavic translations.

Snapshot 20220415 updated ImageMagick to version 7.1.0.29. A few reversions were made in the update, according to the changelog, and a fix was made to account for gray images imported as RGBA. The first minor release for Mozilla Firefox 99 was made with the 99.0.1 update. The browser update fixed a selection issue with drag and drop in the Download panel. There was also a fix for an issue preventing the Zoom gallery mode from working. An update of git 2.35.3 fixed a Common Vulnerabilities and Exposures entry; CVE-2022-24765 could have allowed git to execute commands defined by other users from unexpected worktrees, according to the changelog. Other packages updated in the snapshot were vim 8.2.4745, Ruby 3.1.2, Xen 4.16.1, whois 5.5.13 and more.

With the 20220414 snapshot from last Thursday, the procps package was reverted from version 4.0.0 to 3.3.17. This major version reversion picked up several patches and took care of some CVEs. A major version update of LLVM 14.0.0 arrived in the snapshot. This version brought in a bunch of new tools, dropped some patches and opted to split up Clang libraries, which was inspired by GNU Compiler Collection packaging. A major update of libunistring 1.0 was made in the snapshot, which provided Unicode 14.0.0 support and a license change. There was a git version update of kdump 1.0.2, which gave a filesystem remount for fadump regarding read and write. Also updated in the snapshot was a newer version of ncurses and dracut. There were several other packages updated in the snapshot.


Windows made easy: Windows Subsystem for Linux

How can you make Windows easy? Install the Windows Subsystem for Linux, or WSL for short. Well, probably this is not true for everyone. However, as a Linux user, I definitely love WSL. When not using a browser or text editor, I spend my time on the command line. With WSL, you can have the familiar Linux command-line environment from openSUSE also under Windows.

Why Windows?

Die-hard Linux users might ask: why do I use Windows? There is an open source alternative for almost all Windows software. Well, that’s mostly true. However, there are multiple problems. I love state-of-the-art hardware. For years, nothing under Linux could open the RAW files from my brand new camera, so I had to use Windows to process those files. My other hobby is playing on the synthesizer. Not that I ever learned music, but I still enjoy most of the noise I make (strictly using headphones only…). Linux audio was problematic even before PulseAudio was introduced, but now it’s even more difficult and its sound quality is even worse. And while there are some software synthesizers available under Linux, there are a lot more available under Windows, without spending days and weeks getting them to work.

As a bonus, using Windows also helps to separate my work and private life. Linux is my work OS, Windows is my play OS for photography and music. And I do not have access to anything work-related from my Windows box.

Why WSL?

I recall one of my managers telling me when he saw how I work: you do not need a GUI, you do everything in a terminal window. Well, it’s not completely true, but I even start LibreOffice from a terminal and not from the menu. On Windows, it’s slightly different: I start all applications from the search window. For working with files, I use the terminal. PowerShell is powerful, just as its name implies. However, I already have shell scripts to manage photo archives and got used to Bash anyway. Using WSL, I do not have to learn PowerShell but can keep using my familiar tools: joe for text editing, Midnight Commander for file management, including sftp access to remote Linux hosts. There might be native alternatives available on Windows, but that would require research, testing software, and integrating a new environment.

When installing openSUSE Leap in WSL I can keep the exact same scripts and workflows on my Windows box as I already have on Linux. I can spend my time on photos and music instead of building up and maintaining a new environment on Windows.


You can find some of my photos on-line at GuruShots: https://gurushots.com/pczanik/photos

There are no recordings of my music, not even in private :-)