Tue, Nov 14th, 2023
More info with -ll in sudo 1.9.15
Version 1.9.15 of sudo gives more detailed information when using the -ll option. When given a command, it also prints the rule that allows it. Without a command parameter, it lists the rules affecting a given user. It also prints which file contains each rule, making debugging easier.
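As an illustration (the user and command below are made up, not taken from the release notes), consider a sudoers drop-in like:

```
# /etc/sudoers.d/deploy (hypothetical example)
alice ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx
```

With sudo 1.9.15, running sudo -ll /usr/bin/systemctl restart nginx as alice should report the matching rule together with the file it came from, while a plain sudo -ll lists all rules that affect the invoking user.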
You can read more about it at https://www.sudo.ws/posts/2023/11/more-info-with-ll-in-sudo-1.9.15/
The syslog-ng Insider 2023-11: Splunk; configuration; journald;
The November syslog-ng newsletter is now on-line:
- Sending logs to Splunk using syslog-ng
- Developing a syslog-ng configuration
- Systemd-journald vs. syslog-ng
Sun, Nov 12th, 2023
Is developing word processing software hard?
Hello LibreOffice Planet!
This is my first blog post on the topic of LibreOffice. Let me quickly explain my link to LibreOffice. I work for a data protection authority in the EU and help navigate the digital transformation of our office with about 100 staff members. While many of our partner organisations adopt Microsoft 365, our office decided to pilot Nextcloud with Collabora Office Online.
In the future, I want to blog (in my personal capacity) about my thoughts related to the use of alternative word processing software in the public sector in general and in our specific case.
As there are no dedicated resources for training, preparation of templates etc. during the pilot of LibreOffice, the experience so far covers a large spectrum of user satisfaction. Generally, our staff have spent years of their lives using Microsoft Office and expect any other software to work the same way. If it does not, they send an email to me (best case) or switch back to Microsoft Office.
During the week, I again discussed some smaller LibreOffice bugs. Then, this weekend, I showed some FOSS Blender animated short videos to family members. It seems that Blender is more successful in its domain than LibreOffice. Is that possible? Or are animated short videos just more captivating due to their special effects? 😅
I find it very inspiring to see what talented artists can do with Blender. For my part, I once installed Blender and uninstalled it again. Back then it was not easy to use for people unfamiliar with video animation software. Blender competes with proprietary software such as Maya or Cinema 4D. The latter costs about 60 USD per month on the annual subscription plan. Not exactly cheap.
Then, I read in the fediverse about people working with LibreOffice:
I just tried to use #LibreOffice #Draw to draw some arrows and boxes onto JPEG images for emphasizing stuff.
The UX is really bad for somebody not working with Draw all the time.
Whatever I do, instead of drawing onto the image, the image gets selected instead.
Could not find any layer-sidebar.
Could not scale text without starting the “Character …” menu, modifying font size blindly + confirming > just to see its effect and then start all over.
Dear #FOSS, we really should do better.
— Author Karl Voit (12 November 2023 at 14:51)
In the past, I have worked on online voting systems. Despite years of effort, they are not very good yet. xkcd dedicated a comic to voting software.
Elections seem simple—aren’t they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. And they have practical constraints: time is of the essence, and voting systems need to be affordable and maintainable, and usable by voters, election officials, and pollworkers.
What is the unique challenge of developing word processing software? Happy to hear back from you in the blog comments or on the companion fediverse post!
Sat, Nov 11th, 2023
AppStream 1.0 released!
Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0!
Some nostalgic memories
I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students’ lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later).
I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others.
At the time I was writing a software deployment tool called Listaller – this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream.
Back then I saw AppStream as a necessary side-project for my actual project, and didn’t even consider myself the maintainer of it for quite a while (I hadn’t been at the meeting, after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing with an even more modern design that would show up later, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be, and how ubiquitous it would become.
The modern Linux desktop uses AppStream everywhere now: it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard’s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know of yet.
What is new in 1.0?
The most important thing that’s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed, and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn’t mean that the API is completely new and nothing looks like before, though: where possible, the previous API design was kept, and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I have already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+.
For the XML specification, some older XML compatibility that had no or very few users has been removed as well. This affects, for example, release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: if your XML validated with no warnings on the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release.
Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant; you cannot make it generate data for versions below that (this greatly reduces the maintenance cost of the project).
For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows future extensibility, and also allows setting a machine-readable id attribute on the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the per-app developer information in future, this is also now possible. Do not worry though: the developer_name tag is also still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.
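A minimal sketch of the new markup (the component and developer IDs below are made up; see the developer tag specification for the authoritative form):

```xml
<component type="desktop-application">
  <id>org.example.FooViewer</id>
  <!-- New in 1.0: structured developer info with a machine-readable id -->
  <developer id="org.example">
    <name translate="no">Example Project</name>
  </developer>
</component>
```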
Scale factor for screenshots
Screenshot images can now have a scale attribute to indicate an (integer) scaling factor to apply. This feature was a breaking change, and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become more widely deployed though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.
It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
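Sketched together, the two new screenshot attributes might look like this (URLs and attribute values are illustrative; the exact permitted environment identifiers are listed in the screenshots tag specification):

```xml
<screenshots>
  <screenshot type="default" environment="gnome">
    <!-- scale="2" marks a HiDPI image rendered at 2x -->
    <image type="source" scale="2" width="1920" height="1080">https://example.org/shot-gnome.png</image>
  </screenshot>
  <screenshot environment="plasma">
    <image type="source">https://example.org/shot-plasma.png</image>
  </screenshot>
</screenshots>
```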
This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file with citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.
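A sketch of how this could look (the DOI value and CFF URL are placeholders, and the type names follow my reading of the references tag specification):

```xml
<references>
  <reference type="doi">10.1000/xyz123</reference>
  <reference type="citation_cff">https://example.org/CITATION.cff</reference>
</references>
```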
Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.
Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this!
Better compatibility checks
For a long time I thought that the AppStream library should just be a thin layer above the XML, and that software centers should implement a lot of the actual logic themselves. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use.
The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out!
With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.
So much more!
The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.
I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints.
So, what’s in it for the future? Contrary to what I thought, AppStream does not really seem to be “done” and feature complete at any point; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature.
Onwards to 1.0.1!
Fri, Nov 10th, 2023
openSUSE Tumbleweed – Review of the week 2023/45
Dear Tumbleweed users and hackers,
This week was all in the name of Hack Week, which means some people diverged their efforts a bit away from Factory/Tumbleweed and others finally found the time they craved and could submit what they had on their wish-list. The Release Team was trying to keep up with the submissions while also having their own time dedicated to Hack Week. Seeing that we published 6 snapshots (1102, 1103, 1105, 1106, 1107, and 1108), I dare say this worked out pretty well.
The six delivered snapshots contained these changes:
- GNOME-Shell/Mutter 45.1
- libxml 2.11.5
- Pipewire 0.3.84
- systemd: disabled utmp support
- Libvirt 9.9.0
- xfconf 4.18.3
- fwupd 1.9.7
- VLC 3.0.20
- BlueZ 5.70
- LLVM 17.0.4
- SQLite 3.44.0
Snapshot 1109 is already in QA, 1110 is building, and stagings are getting ready. All in all, you can expect these changes to happen soon-ish:
- Linux kernel 6.6 (Snapshot 1109+)
- gAWK 5.3.0
- Binutils 2.41 (Snapshot 1110+)
- KDE Gear 23.08.3
- Mozilla Firefox 119.0.1
- RPM 4.19: Some preparation work has happened to ensure packages still build (e.g. %patch with unnumbered patches is no longer supported)
- dbus-broker as the default dbus session manager
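Regarding the RPM 4.19 preparation work mentioned above, the %patch change means spec files must name the patch explicitly; a minimal sketch (the patch file name is invented):

```
Patch0:         fix-build.patch

%prep
%setup -q
# RPM 4.19: bare "%patch" with no number is no longer supported;
# give the patch number explicitly (or use %autosetup / %autopatch)
%patch -P 0
```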
My containerized websites
Coincidentally, I have had multiple needs for serving websites from behind containers instead of directly from the host. After some practicing, which can also be described as ”hacking without carefully documenting what I did”, I decided not to describe my setup explicitly, but rather how I worked towards my goal of serving my websites (3 on the same server) from behind containers. This will enable me to consider using things like PHP or other notorious pieces of software, thanks to a level of sandboxing.
Here in this rant paragraph I note that I'm not overly a fan of pulling random images from the web, even if it is not quite as bad as the curl-to-sudo ”install method”. Containers are at least sandboxed, but coming from a Linux distribution background I somewhat dislike all software vendors acting as OS image providers. I will need to learn more about where the images come from, what the trust model is, and what happens if e.g. the Official Nginx (tm) image, the Official Certbot (tm) image or Docker Hub itself gets compromised. I do mostly trust Debian, SUSE, Ubuntu etc. infrastructure, so it will take some time until I construct my trust towards alternatives. One option could of course be to use my pre-trusted vendors via some alternative image registry, and do a bit more work with Ansible, for example, to utilize them instead of trusting Nginx or Certbot via Docker Hub.
My setup looks like it is ending up like this, involving docker-compose, nginx as a proxy, several web host containers, and a Certbot container that is used to renew certificates via a cronjob:
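Not my actual files, but a minimal docker-compose.yml sketch of that shape (service names, paths and images are placeholders) could look like:

```yaml
services:
  proxy:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./proxy-conf:/etc/nginx/conf.d:ro   # vhost configs for the sites
      - ./letsencrypt:/etc/letsencrypt:ro   # certificates, read-only
      - ./acme:/var/www/acme:ro             # ACME http-01 challenge files
    restart: unless-stopped
  site1:
    image: nginx
    volumes:
      - ./site1-content:/usr/share/nginx/html:ro
    restart: unless-stopped
  certbot:
    image: certbot/certbot
    volumes:
      - ./letsencrypt:/etc/letsencrypt      # writable for renewals
      - ./acme:/var/www/acme                # writable for challenges
```

The certbot service has no long-running process; the idea is that a host cronjob invokes it on demand to renew certificates.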
But before I ended up with that I said hello to Search Engine. One result showed the general idea of sharing directories. Another reminded me how to serve the acme-challenge in nginx, since I have mainly been an Apache user. There were several guides about Apache and Let’s Encrypt too, without a proxy. I probably read this when figuring out the actual nginx proxy setup. At some point I needed to check that nginx is ”nginx” while certbot is ”certbot/certbot”. I was happy to find certbot command-line option ideas, although I learned the hard way to use --dry-run as well, especially after forgetting the restart option in docker-compose.yml. I did not need any kind of bootstrap phase since I already had existing Let’s Encrypt certificates.
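For the acme-challenge part, a hypothetical nginx vhost fragment (domain and paths invented) could be:

```
server {
    listen 80;
    server_name example.org;

    # Serve Let's Encrypt http-01 challenges from the directory
    # shared with the Certbot container
    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    # Everything else gets redirected to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}
```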
Finally, in practice what I have done so far is a proof that it works and can be used as a drop-in replacement for my on-host Apache. I have served all of the websites from inside a container (pointed at the same content as the on-host setup), and the Certbot container works to renew the certificates, replacing the acme-tiny I have used before. Currently, however, the setup is not complete for serving all 3 sites simultaneously, and the cronjobs are not set up (or tested to be functional). I can’t have my on-host Apache and the containerized Nginx proxy competing for ports 80 and 443 at the same time, so I can really enable the setup only when everything is fully complete. It’s nice, however, how easy switching between the two already is!
Finally², I can’t say it would be 100% complete even then! At some point in the past I largely, but not completely, ”ansibled” my host when I sadly moved from a really old quad-core ARM server to an x86_64 server. I like different architectures, but the ARM servers were decommissioned by Scaleway. Anyway, regarding Ansible, this kind of containerization obviously increases the ”Ansible debt” further by a fair amount, and getting that fully done would be a different story…
But it has been some nice hacking!
LLVM, VLC updates in Tumbleweed
The arrival of kernel-firmware 20231107 in snapshot 20231108 enhances support for various hardware, including Intel Bluetooth firmware updates for several devices like AX201, AX203, AX210, AX211, and more. An update of sqlite3 3.44.0 brings the addition of aggregate functions like concat_ws(), which aligns compatibility with other SQL projects. Some error-handling table statements were refined, contributing to a more immediate error-identification process. Data compression package brotli updates to 1.1.0; its build process now incorporates optimal flags and introduces a new command-line option, --dictionary. Another package to update was libgusb 0.4.8, which introduces a new device error code for busy that enhances its capability to handle device errors more effectively.
Snapshot 20231107 updates the Linux Bluetooth protocol stack bluez to version 5.70. The update fixes problems related to Generic Attribute Profile (GATT) confirmations, introduces support for the Media Independent Command Protocol profile and the Mesh Independent Control Service, and adds a patch to address a regression when pairing with game controllers. An update of gupnp 1.6.6 improves resilience in handling unusual formatting during introspection, along with the addition of a new Application Programming Interface for creating actions in ServiceProxy. It also addresses memory leaks. The update to llvm17 17.0.4 ensures compatibility with the 17.0.0 version while maintaining API and Application Binary Interface consistency. The package did include libomp*-devel for pivotal GPU offloading functionality. A few other packages were updated in the snapshot.
An update of yast2-trans in snapshot 20231106, involving translations via Weblate, includes updates for the Slovak, Czech, Dutch, Catalan, Japanese, and Indonesian languages, among others. New POT files for text domains like storage, installation, and update have been introduced. An update of jasper 4.1.0 introduces support for building various JasPer application programs for the WebAssembly target with WASI support. The update also resolves an integer overflow bug. A few other packages were updated, including openSUSE’s tool to manage systemd-boot, sdbootutil. This git+ update focuses on enhancing the sdboot support in snapper hooks, installing commands with specific snapshots, and refining the behavior of sdbootutil within RPM scriptlets.
The 20231105 snapshot updates the super-thin layer on the D-Bus interface, fwupd. The 1.9.7 version includes additions such as child device requirements in metadata and new hardware support for Logitech Rally System devices. The update adds new security attributes for BIOS capsule updates, and expands support for parsing Extended Display Identification Data and Concise Software Identifier payload sections. An update of VLC 3.0.20 mainly focuses on bug fixes and improvements. There were some notable changes like addressing crashes associated with some older versions of AMD drivers and fixing event propagation problems when double-clicking with the mouse wheel. The 2.42.1 version update of git addresses issues related to exit-code handling in git diff and refines the behavior of rebase -i in scenarios where the command is interrupted by conflicting changes. A few other packages like openldap2 2.6.6 and redis 7.2.3 were updated, along with several RubyGems packages, which saw rubygem-pairing_heap update from version 1.0.0 to version 3.0.1.
Thu, Nov 9th, 2023
Hack Week is the time SUSE employees experiment, innovate & learn interruption-free for a whole week! Across teams or alone, but always without limits.
Hack Week 23 was from November 6th to November 10th, and my project was to give some love to the GNOME Project.
Before the start of Hack Week I asked in the GNOME devs Matrix channel what projects needed some help, and they gave me some ideas. In the end I decided to work on GNOME Calendar, more specifically on improving the test suite and fixing issues related to timezones, DST, etc.
GNOME Calendar is a Gtk4 application, written in C, that heavily uses the evolution-data-server library. It's a desktop calendar application with a modern user interface that can handle local and remote calendars. It's integrated into the GNOME desktop.
The current gnome-calendar project has some unit tests, using the GLib testing framework. But right now there are just a few, so the main goal is to increase the number of tests as much as possible, to detect new problems and regressions.
Testing a desktop application is not an easy thing to do. Unit tests can check basic operations, structures and methods, but user interaction, and how it is represented, is hard to test. So the best approach is to try to replicate user interactions and check the outputs.
A more sophisticated approach could be to start to use the accessibility stack in tests, so it's possible to verify the UI widgets output without the need of rendering the app.
With gnome-calendar there's another point of complexity for tests, because it relies on evolution-data-server running; the app communicates with it using D-Bus, so to be able to do more complex tests we should mock evolution-data-server and create fake data for testing.
By the end of the week I had created four merge requests, three of which have already been merged, and I'll continue working on this project in the following weeks/months.
I'm happy with the work that I was able to do during this Hack Week. I've learned a bit about testing with GLib in C, and a lot about the evolution-data-server, timezones and calendar problems.
It's just a desktop calendar application, how hard could it be? Have you ever dealt with dates, times and different regions and time zones? It's a nightmare. There are a lot of edge cases working with dates that can cause problems: operations with dates in different time zones, changes in dates for daylight saving… If I have an event created for October 29th 2023 at 2:30, will it happen twice?
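That October 29th case can be demonstrated in a few lines of Python (not gnome-calendar code, just a sketch of the same ambiguity using the standard library; the timezone is an arbitrary EU example):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

madrid = ZoneInfo("Europe/Madrid")

# On 2023-10-29 clocks in the EU fell back from 03:00 to 02:00, so
# 02:30 local time happened twice; `fold` selects which occurrence.
first = datetime(2023, 10, 29, 2, 30, tzinfo=madrid)           # CEST, UTC+2
second = datetime(2023, 10, 29, 2, 30, fold=1, tzinfo=madrid)  # CET, UTC+1

print(first.utcoffset())   # 2:00:00
print(second.utcoffset())  # 1:00:00
```

Any calendar code that treats the two occurrences as one wall-clock instant will pick the wrong one in some code path, which is exactly the kind of edge case the tests need to cover.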
A lot of problems can happen, and there are a lot of bugs reported for gnome-calendar related to these kinds of issues, so working on this is not simple. It requires a lot of edge-case testing, and that's the plan: to cover most of them with automated tests, because any small change could lead to a time zone bug that won't be noticed until someone has an appointment at a very specific time.
And this week was very productive thanks to the people working on gnome-calendar. Georges Stavracas reviewed my MRs very quickly, which made it possible to merge them during the week, and Jeff Fortin did great work with the issues in GitLab, pointing me to the most relevant bugs.
After a week of going deep into the gnome-calendar source code it would be a pity to just forget about it, so I'll try to keep the momentum and continue working on this project. Of course, I'll have just a few hours per week, but any contribution is better than nothing. And maybe for next summer I can propose a Google Summer of Code project to get an intern working on this full time.
Community to Explore Engineering Depths with AMA Session
The open-source community is in for a treat next week as an Ask Me Anything (AMA) session on the openSUSE Project’s Jitsi instance will provide unique and insightful information about how Quality Engineering teams work.
The AMA is scheduled for November 16 at 19:00 UTC and will take place online at meet.opensuse.org/meeting.
Santiago Zarate, who is a Product Owner of SUSE’s Quality Engineering Core team, will run the session organized during Hack Week.
The AMA session aims to provide a closer look at how developers in the Research and Development (R&D) sector of SUSE work. The primary goal is to foster collaboration and understanding between Global Services and Infrastructure (GSI) and Engineering, which will help to unravel the intricacies of development workflows and shed some light on Quality Engineering practices.
The objective of the session is clear: give the audience a window into the inner workings of SUSE Engineering, offering people an opportunity to ask questions and gain insights into the processes behind the scenes.
Attendees are encouraged to submit their questions beforehand at https://etherpad.opensuse.org/p/ama to ensure a vibrant and engaging discussion during the live AMA.
While the AMA offers a limited window into SUSE Engineering for open-source attendees, it serves as a valuable platform to enhance the collective understanding of the organization’s processes.
The AMA session is not just an opportunity to ask questions, but also a chance to foster collaboration, bridge gaps, and gain a deeper understanding of the intricate processes that drive engineering efforts. Don’t miss out on this unique chance to connect with like-minded software engineers.
Mark your calendars for November 16 at 19:00 UTC, and join the conversation on meet.opensuse.org/meeting. The community’s regular weekly meeting is scheduled to follow the AMA at 20:00 UTC, but the time for the community meeting can be used to extend the AMA.
Mon, Nov 6th, 2023
Results of Use Case Survey Published
The survey provides a breakdown of the use cases of Linux among respondents based on their primary use of IT.
The survey had general questions along with sections for Work/Business and Home/Hobby use; respondents who selected both were given the opportunity to take the entire 30-question survey.
Questions on various IT technologies like cloud computing, containerization, configuration management, desktop computing, server infrastructure, serverless computing, virtualization, edge computing, IoT applications, machine learning, blockchain, gaming and a category of others were part of the survey.
People were able to rate their use of these technologies as well as provide comments for selected questions. The raw text from the full report was compiled to provide a summary of the ratings and use cases.
People reviewing the report are more than welcome to enhance the summary on the wiki.