
openSUSE News

Planet News Roundup

This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.

The featured highlights below, listed on the community’s blog feed aggregator, are from Nov. 29 to Dec. 6.

Blog posts this week highlight a broad mix of community work, from KDE Plasma enhancements and mobile updates to local-AI home projects, syslog-ng Kafka testing, mutation testing in librsvg, news from the Fediverse and the FSF, SUSE’s Mobile Hackday, and even defense contractors trying to block right to repair.

Here is a summary and links for each post:

Plasma Mobile 6.5 Released

The KDE Blog announces the release of Plasma Mobile 6.5 with several refinements to KDE’s mobile experience. This version adds native Waydroid setup support, making Android app integration easier for users. It also improves the lock screen, notifications, quick settings, and introduces early support for a new virtual keyboard.

Better Hardware Support — This Week in Plasma

The KDE Blog highlights several user-interface refinements and bug fixes for hardware like drawing tablets, printers, and monitors in this week’s update. Changes in Plasma 6.6 let you Alt-click or double-click desktop items to view their properties. Other improvements include clearer driver-missing warnings for tablets, a cancel button in snapshot selection overlays, and various fixes for screen-locking, multi-monitor setups, and system-tray behaviour.

openSUSE Tumbleweed – Review of the Week 49, 2025

Victorhck and DimStar reported that the week’s snapshot for Tumbleweed brings updates including GDM 49.2, libproxy 0.5.12, and XOrg Server 21.1.21. A problematic upgrade to systemd 258 caused significant issues in automated testing; the snapshot was set aside, and the update was reverted to the shipped version pending a deeper dive into the diagnostics. Upcoming snapshots may include major package updates like Mozilla Firefox 145.0.2, kernel 6.18.0, Mesa 25.3.1, SQLite 3.51.1, and others.

Ragging My Brain – a quick thought dive

A personal blog post by Timo reflects on a detailed experiment with Retrieval-Augmented Generation (RAG) using local LLMs (qwen3-coder, qwen3-embedding) via Ollama. The author built a RAG system on a personal desktop, added features with LLM assistance, created a web interface, and reflects on the challenges of prompting and code quality in AI-assisted development.

LOUVRE: The Art of Cybersecurity — Compilando Podcast

The KDE Blog shares a new short episode (64) of Compilando Podcast titled “LOUVRE: The Art of Cybersecurity,” inviting reflection on digital security as more than technique. The episode draws an analogy between cybersecurity and art: protecting systems requires not only skill, but also sensitivity, awareness, and a culture of care.

Redmine RAG system

This blog post from Zoltán describes another Retrieval-Augmented Generation personal project in which the author extracted all issues from a Redmine instance (including comments) and anonymized the data to build a local ChromaDB-backed RAG system for semantic search and Q&A. The pipeline includes robust error handling, checkpointing, and retry logic to reliably download tens of thousands of issues, then optionally transforms them into embeddings via Ollama for high-quality semantic search. The result is a fully offline, privacy-conscious RAG stack that lets users query historic Redmine data in natural language.

Fediverse and Non-Proprietary Social Networks #Vamonosjuntas

The KDE Blog highlights a talk by David Marzal about the importance of joining the Fediverse, which is a community of decentralized, non-proprietary social networks. The post argues that by using federated networks, people can escape toxic, profit-driven social media and help build a freer, more collaborative internet.

I want to have a hot shower – From Tesseract Troubles to Local VLM

Zoltán describes how he turned a simple 300-liter hot-water heater into a smart, sensor-driven system using a cheap USB webcam, a tiny computer (Raspberry Pi), and a relay switch, so he can monitor and control water heating more efficiently. After struggling with unreliable optical-character recognition (OCR) via Tesseract, he switched to a local vision–language model (VLM) setup via Ollama for reading the boiler’s display, which improved reliability without sending data to the cloud.

Last Two Weeks in KDE Apps – Performance improvements in Krita, Trust and Safety in NeoChat, and file actions in Photos

The KDE Blog reports on recent enhancements across several KDE apps: performance optimizations in Krita, trust and safety improvements in NeoChat, and improved file-handling actions in KDE Photos. The update also highlights ongoing fundraising efforts for the KDE project.

Free Software Foundation News Roundup – December 2025

Victorhck aggregates selected stories from the December 2025 issue of the Free Software Foundation (FSF) newsletter, which marks 40 years since the foundation’s creation. It highlights calls from the Free Software Foundation Europe (FSFE) urging that Germany’s “Germany Stack” public-software initiative be built entirely with free software for genuine digital sovereignty. The roundup also draws attention to global issues, including contractors trying to shoot down US military “right to repair” rules and a major worldwide outage caused by Cloudflare.

Congress esLibre 2026 in Melide

The KDE Blog announces that esLibre 2026 will take place in Melide (Spain) on April 17–18. The event will feature talks, workshops, community-led sessions and exhibitions, hosted at the Centro Sociocultural Mingos de Pita and the adjoining Multiuso building. Attendance is free, but you need to register to participate.

syslog-ng: How to Test Kafka Source by Building the Package Yourself

Peter ‘CzP’ Czanik walks through testing the upcoming syslog-ng Kafka source by cloning the repo, applying PR #5564 as a patch, building RPM/DEB packages with DBLD, installing them, and configuring syslog-ng to consume from Kafka.

Intel NPU Driver Now Available in openSUSE Versions

The openSUSE Innovator initiative has packaged the driver for Intel’s built-in Neural Processing Unit (NPU), which enables users to run inference, generative-AI, and other neural-network workloads without needing a dedicated GPU. The driver supports recent Intel CPUs (like the Core Ultra family) and allows efficient, low-power AI workloads directly on the processor die.

Mutation testing for librsvg

A post on Federico’s blog describes how he applied mutation testing to librsvg, a popular SVG-rendering library, using cargo-mutants. He reports that after running thousands of code mutations, hundreds of them (889) were “missed” by the existing test suite. The write-up explains how to run cargo-mutants on a Rust workspace and encourages improving tests (or using mutation testing in CI) to catch such blind spots.

The death of an iPod

Victorhck reflects on the end of their use of an iPod and describes how the device has finally “died.” The article evokes memories about owning and using the iPod, and contemplates how technological progress and streaming services have changed the way we interact with music.

Tumbleweed Monthly Update – November 2025

The openSUSE project reports that November brought a steady cadence of updates to openSUSE Tumbleweed. Key highlights include updates to Plasma 6.5.3 and KDE Gear 25.08.3 for improved desktop stability; GNOME 49.2 for smoother session handling; as well as kernel 6.17.9, Mesa 25.3.0 and PipeWire 1.5.83 to enhance graphics, audio, and hardware support.

App improvements in Plasma 6.5

The KDE Blog describes notable enhancements in many of the apps bundled with KDE Plasma 6.5. The update improves the performance and responsiveness of the software manager Discover: it now launches faster, handles Flatpak and HTTPS URLs, and can show hardware drivers available for installation.

Geekos Japan Blog Post – ribbon/3582

This blog post on the Geeko.jp site discusses Leap 15.6 and changing settings to receive certain cursor control sequences.

4th Linux Mobile Hackday at SUSE Prague

SUSE reports on the fourth edition of Linux Mobile Hackday held in Prague on November 29. Attendees experimented with running Linux on various phones, worked on kernel/device-tree support, reviewed patches, and explored packaging tools for mobile Linux distributions.

openSUSE Tumbleweed – Review of Weeks 47 & 48, 2025

Victorhck covers more than two weeks of Tumbleweed; key updates include Mesa 25.2.7/25.3.0, Linux kernel 6.17.8/6.17.9 (with efidrm and vesadrm enabled), KDE Plasma 6.5.3, Mozilla Firefox 145.0, GNOME 49.2, PipeWire 1.5.83, GStreamer 1.26.8, fwupd 2.0.17, and many core libraries like cURL, Freetype and Samba.

My Plasma Desktop – November 2025 ViernesDeEscritorio

The KDE Blog showcases the author’s personal desktop setup using Plasma in November 2025, as part of their ViernesDeEscritorio (“Friday Desktop”) series. The post highlights their choice of wallpaper, icon theme, plasmoids and overall layout.

View more blogs or learn to publish your own on planet.opensuse.org.

Greg Kroah-Hartman

Linux CVEs, more than you ever wanted to know

It’s been almost two full years since Linux became a CNA (CVE Numbering Authority), which means that we (i.e. the kernel.org community) are now responsible for issuing all CVEs for the Linux kernel. During this time, we’ve become one of the largest issuers of CVEs by quantity, going from nothing to number 3 in 2024 to number 1 in 2025. Naturally, this has raised some questions about how we are doing all of this work, and how people can keep track of it.


Dithering

One of the new additions to the GNOME 49 wallpaper set is Dithered Sun by Tobias. It uses dithering not as a technical workaround for color banding, but as an artistic device.

Halftone app

Tobias initially planned to use Halftone — a great example of a GNOME app with a focused scope and a pleasantly streamlined experience. However, I suggested that a custom dithering method and finer control over color depth would help execute the idea better. A long time ago, Hans Peter Jensen responded to my request for arbitrary color-depth dithering in GIMP by writing a custom GEGL op.

Now, since the younger generation may be understandably intimidated by GIMP’s somewhat… vintage interface, I promised to write a short guide on how to process your images to get a nice ordered dither pattern without going overboard on reducing colors. And with only a bit of time passing since the amazing GUADEC in Brescia, I’m finally delivering on that promise. Better late than later.

GEGL dithering op

I’ve historically used the GEGL dithering operation to work around potential color banding on lower-quality displays. In Tobias’ wallpaper, though, the dithering is a core element of the artwork itself. While it can cause issues when scaling (filtering can introduce moiré patterns), there’s a real beauty to the structured patterns of Bayer dithering.

You will find the GEGL op in the Colors > Dither menu. The filter’s parameters don’t allow you to set the number of colors directly, only the per-channel color depth (in bits). For full-color dithers I tend to use 12-bit. I personally like the Bayer ordered dither, though there are plenty of algorithms to choose from, and depending on your artwork, another might suit you better. I usually save my preferred settings as a preset for easier recall next time (find Presets at the top of the dialog).
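
If you are curious what an ordered dither actually computes, here is a stdlib-only Python sketch of the 1-bit case with a 4×4 Bayer matrix (my own toy illustration, not GEGL’s implementation, which generalizes this to an arbitrary per-channel bit depth):

```python
# A stdlib-only sketch of 1-bit ordered (Bayer) dithering.

BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bayer_dither_1bit(pixels):
    """Map a 2D grid of 0-255 grayscale values to 0/255 by comparing
    each pixel against the tiled Bayer threshold matrix."""
    out = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, value in enumerate(row):
            # Scale the matrix entry (0..15) to a 0..255 threshold.
            threshold = (BAYER4[y % 4][x % 4] + 0.5) * 16
            out_row.append(255 if value >= threshold else 0)
        out.append(out_row)
    return out

# A flat midtone becomes a regular half-on pattern instead of
# collapsing to solid black or white.
dithered = bayer_dither_1bit([[128] * 8 for _ in range(8)])
```

The structured look of the wallpaper comes exactly from this tiling: nearby pixels of the same input value flip on or off in a fixed, repeating pattern.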

Happy dithering!


Tumbleweed – Review of the week 2025/49

Dear Tumbleweed users and hackers,

This week, we have seen only a single snapshot (1127) published, which landed just before the start of HackWeek.

Since then, a few issues have piled up, blocking further releases. First, the update to ICU 78 caused the libqt5-qtwebengine build to fail. Because this package is present on nearly 100% of systems (specifically those installed before the new opensuse-welcome), we could not consider releasing it in that state.

We also encountered significant friction with systemd 258, which triggered several complex issues in openQA. While we have set that update aside for deep diagnostics, we decided to revert to the already shipped version for the immediate future. This specific move should unblock Tumbleweed and get the snapshots rolling again.

Naturally, HackWeek 25 has also played a role here. With many contributors focusing on side projects, resources for regular maintenance have been kept to a minimum.

Snapshot 1127 contained these changes:

  • GDM 49.2
  • libproxy 0.5.12
  • XOrg server 21.1.21

The future will bring you these changes sooner or later:

  • Mozilla Firefox 145.0.2
  • Bash 5.3.8
  • Linux kernel 6.18.0
  • PostgreSQL 18.1
  • SQLite 3.51.1
  • Mesa 25.3.1
  • Ruby 4.0 (early preview staged to find out what all breaks)

As part of HackWeek, Santiago has tirelessly improved the test coverage for “Tumbleweed installed using Agama.” We plan to expand this further and minimize the gap between the legacy installer and Agama, with a priority on NET installations.

Timo's openSUSE Posts

RAGging my brain

The Idea

My original idea was to study how to apply RAG (Retrieval-Augmented Generation) to an LLM by feeding it something potentially useful.

I chose qwen3-coder as my language model, and qwen3-embedding as my embedding model, both easily available via Ollama.

My current work laptop is not powerful enough for any of this work, so I used my personal desktop computer for the testing, with AMD’s ROCm containers.

In Practice

The LLM vibes brought me to learn a couple of things. First of all, after searching for a couple of alternatives, I found a simple MIT-licensed RAG demo to start experimenting with. I changed it to use my local, already downloaded models and cut off my container’s network access. Since I’m pulling all kinds of dependencies from the Internet, I like both to use a container in the first place and to keep the container off the network, to make sure the setup is both safe and provably functional without network access.

I realized that I could also experiment with the local Ollama qwen3-coder for the coding part of the RAG at the same time, and used its help to implement a couple of new features: I stored the accumulated vector database in a file, added chat functionality instead of a single query, support for “compressing” the history to fit within the context window, and even some tests. I found this experience initially quite good; since I didn’t have previous experience on this topic, the examples really helped me. The problem is mostly the amount of generated code that needs to be reviewed for whether it makes any sense, but all in all I couldn’t have achieved the same in the same time, because reading documentation and trial and error would have taken longer. Arguably the technical debt offsets some of the time savings, but for this kind of experimental study, doing something someone else has already done before, it worked well for getting quick results.
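
The retrieval step at the heart of such a RAG setup can be sketched in plain Python. The toy vectors below stand in for real embeddings from a model like qwen3-embedding; this is my illustration of the flow, not the demo's actual code:

```python
# Minimal sketch of RAG retrieval: rank stored chunks by cosine
# similarity to the query embedding, return the top-k texts.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """store: list of (text, vector). Return the k most similar texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _vec in ranked[:k]]

# Toy vectors standing in for real embeddings.
store = [
    ("backup script", [1.0, 0.0, 0.1]),
    ("grep notes",    [0.0, 1.0, 0.0]),
    ("disk cleanup",  [0.9, 0.1, 0.0]),
]
top = retrieve([1.0, 0.0, 0.0], store, k=2)
```

The retrieved texts would then be pasted into the LLM's prompt as context before generating an answer.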

My 400 kB of scripts led to a vector database 440 MB in size 🤔 It’ll be interesting to see what happens if I put in, say, 10x the data: will the database keep growing in proportion?
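
A rough back-of-envelope (with hypothetical chunk-size and embedding-dimension numbers, since the post doesn't state them) suggests the raw vectors alone can't explain 440 MB:

```python
# Back-of-envelope for vector store size. The chunk size and embedding
# dimension below are assumptions for illustration, not measured values.
chunk_bytes = 500            # assumed average chunk size
dim = 4096                   # assumed embedding dimension
float_bytes = 4              # float32

source_bytes = 400 * 1024    # ~400 kB of scripts
n_chunks = source_bytes // chunk_bytes
raw_vector_bytes = n_chunks * dim * float_bytes
# Roughly 13 MB of raw vectors here, far below 440 MB, so most of the
# on-disk size would be index structures, metadata, and stored text.
```

Under these assumptions the vector part should grow roughly linearly with the input, on top of whatever fixed overhead the store carries.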

How’s the actual usage? Well, it does correctly retrieve knowledge from my own scripts, but OTOH I’m not sure I can find (old, forgotten) things any faster than with grep. I’ll try to integrate it into my workflow, though, so that at some point it becomes another query feature alongside grep. For that, I need to accumulate more data to feed it.

I also added a web interface so that within home network I can query it from a browser.

[Screenshot: web UI example]

I Need Moooore AI

I was also looking at moving to ChromaDB for the vector storage. I thought I’d try OpenCode at the same time, still paired with qwen3-coder. Oh what an agentic mess it became! It started configuring my already-configured git with its own user name, fetching all kinds of stuff, and committing bloated things that were clearly broken. But, it did all that locally, which is kind of neat.

Some of the end result can be seen at my testing git repo.

At the End

It has been an interesting experiment with the ups and downs of LLM usage. I learned things fast at first, but then I grew to hate the need to practice the “art” of prompting and getting to the “maybe the next prompt will fix it” phase of development.

Zoltán Balogh

Redmine RAG system

The Goal

The goal was to extract all issues from a Redmine instance, anonymize the data, and build a local RAG system for semantic search and Q&A.

Just like the previous experiment with Bugzilla, it started with data extraction. I planned to make a simple bulk download via the Redmine API. Then came the first problem: Redmine’s API doesn’t return journal entries (comments) when using the project-level endpoint, even with the include=journals parameter. I tried different approaches, but nothing worked. The solution, after all, was to change strategy and fetch each issue individually via /issues/{id}.json. This was much slower but guaranteed complete data, including all comments.
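
The retry-and-checkpoint loop the summary mentions can be sketched like this (a stdlib-only illustration with an injected fetcher, not the project's actual code; a real run would perform the HTTP request for /issues/{id}.json with include=journals):

```python
import time

# Sketch of a per-issue download loop with retries and checkpointing.
# `fetch` is injected so the logic is testable offline.

def download_issues(ids, fetch, checkpoint, retries=3, delay=0.0):
    """Fetch each issue individually, skipping ids already present in
    `checkpoint` (a dict id -> issue) and retrying transient failures."""
    for issue_id in ids:
        if issue_id in checkpoint:
            continue  # already downloaded in an earlier run
        for attempt in range(retries):
            try:
                checkpoint[issue_id] = fetch(issue_id)
                break
            except IOError:
                if attempt == retries - 1:
                    raise  # give up after the last attempt
                time.sleep(delay)  # back off before retrying
    return checkpoint

# Fake fetcher that fails once for issue 2, then succeeds.
calls = {"failures": 0}
def flaky_fetch(issue_id):
    if issue_id == 2 and calls["failures"] == 0:
        calls["failures"] += 1
        raise IOError("transient error")
    return {"id": issue_id, "journals": []}

done = download_issues([1, 2, 3], flaky_fetch, checkpoint={})
```

Persisting `checkpoint` to disk between runs is what makes downloading tens of thousands of issues survivable: a crash or rate-limit only costs the issues not yet saved.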

Zoltán Balogh

I want to have a hot shower

It all started last summer

When my family moved to a new place. In our previous home we had a district heating service that included unlimited hot water for a fixed price. That was awesome - not very environmentally friendly and actually not very cheap, but we never ran out of hot water.

The boiler

In our new home we are independent from the city’s hot water services. This is good because we pay exactly for the energy we use. It means that we have a 300-liter hot water heater that we turn on to make as much hot water as we need. In most households such a boiler has a thermostat set to a specific temperature, and the heater keeps all 300 liters of water as hot as the setting. I do not like this because, regardless of how good the insulation on the water tank is, it loses heat over time. I needed a smarter system.
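
A control loop for that kind of smarter system might look like this hysteresis sketch (my own hypothetical numbers and function, not the actual setup; per the summary, the real system reads the temperature from a camera pointed at the boiler display and switches a relay from a Raspberry Pi):

```python
# Hypothetical on/off control with hysteresis, so the relay doesn't
# chatter around the target temperature. All numbers are assumptions.

def relay_state(temp_c, heating, target=55.0, band=5.0):
    """Return the new relay state (True = heating) given the current
    temperature reading and the previous state."""
    if temp_c < target - band:
        return True          # too cold: start heating
    if temp_c >= target:
        return False         # hot enough: stop
    return heating           # inside the band: keep previous state

# Simulate a few readings.
states = []
heating = False
for reading in [48, 52, 49, 56, 53]:
    heating = relay_state(reading, heating)
    states.append(heating)
```

The dead band between 50 °C and 55 °C is the point: without it, sensor noise near the target would toggle the relay on and off constantly.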


How to test the syslog-ng Kafka source by building the package yourself?

A long-awaited feature for syslog-ng, the Kafka source, is almost ready. The development is still in progress, but you can already try it, and it is worth the effort. How? By using the very same tooling the syslog-ng testing and release process relies on.

From this blog post you can learn how to download and patch the syslog-ng git sources and build packages for popular RPM and DEB Linux distributions. Once you have installable packages comes the fun part: getting the Kafka source working.

Read the rest at https://www.syslog-ng.com/community/b/blog/posts/how-to-test-the-syslog-ng-kafka-source-by-building-the-package-yourself


openSUSE News

Intel NPU Driver Now Available in openSUSE Versions

As the founder of the openSUSE Innovator initiative, and a member of the openSUSE and Intel Innovator communities, I maintain my ongoing commitment to bringing cutting-edge technologies to the different flavors of the openSUSE Linux distribution.

With the arrival of NPU (Neural Processing Unit) devices, it has become possible to run LLMs and generative AI applications without relying on a dedicated GPU. In light of this advancement, we began the work of packaging the Intel NPU Driver into RPM for the openSUSE ecosystem. As a result of this collective effort, the openSUSE for Innovators initiative is proud to announce the official availability of this package to the community.

The Intel NPU is an inference accelerator integrated into Intel CPUs starting with the Intel® Core™ Ultra family (formerly known as Meteor Lake). It enables efficient and low-power execution of artificial neural network workloads directly on the processor.

The NPU is integrated into the processor die, designed to perform matrix operations of neural networks with high energy efficiency. Its architecture complements CPU and GPU, enabling intelligent offloading for ONNX models, computer vision pipelines, quantized models, and hybrid model operations with simultaneous use of CPU+GPU+NPU.

Participating in this project is a source of great satisfaction, as we know it will have a significant impact on future applications. As members of an open-source community, we have a responsibility to democratize emerging technologies and help reduce the digital divide, and this delivery is another important step in that direction.

For more information, go to software.opensuse.org!

Federico Mena-Quintero

Mutation testing for librsvg

I was reading a blog post about the testing strategy for the Wild linker, when I came upon a link to cargo-mutants, a mutation testing tool for Rust. The tool promised to be easy to set up, so I gave it a try. I'm happy to find that it totally delivers!

Briefly: mutation testing catches cases where bugs are deliberately inserted in the source code, but the test suite fails to catch them: after making the incorrect changes, all the tests still pass. This indicates a gap in the test suite.

Previously I had only seen mentions of "mutation testing" in passing, as something exotic to be done when testing compilers. I don't recall seeing it as a general tool; maybe I have not been looking closely enough.
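The idea can be shown with a toy Python example (my illustration, not how cargo-mutants itself works):

```python
# A "mutant" is the original function with one small change. Mutation
# testing asks: does any existing test fail when the mutant runs?

def bisect(a, b):
    return (a + b) / 2

def mutant_bisect(a, b):
    return (a - b) / 2   # mutation: `+` replaced with `-`

def weak_test(f):
    return f(0, 0) == 0      # a test the mutation cannot distinguish

def strong_test(f):
    return f(2, 4) == 3      # a test that pins down the real behavior

# The weak test passes for both versions, so the mutant is "missed";
# the strong test fails for the mutant, so the mutant is "caught".
missed = weak_test(bisect) and weak_test(mutant_bisect)
caught = strong_test(bisect) and not strong_test(mutant_bisect)
```

A "missed" mutant means your suite would not notice that bug if someone introduced it for real; tools like cargo-mutants automate generating and testing thousands of such mutants.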

Setup and running

Setting up cargo-mutants is easy enough: you can cargo install cargo-mutants and run it with cargo mutants.

For librsvg this ran for a few hours, but I discovered a couple of things related to the way the librsvg repository is structured. The repo is a cargo workspace with multiple crates: the librsvg implementation and public Rust API, the rsvg-convert binary, and some utilities like rsvg-bench.

  1. By default cargo-mutants only seemed to pick up the tests for rsvg-convert. I think it may have done this because it is the only binary in the workspace that has a test suite (e.g. rsvg-bench does not have a test suite).

  2. I had to run cargo mutants --package librsvg to tell it to consider the test suite for the librsvg crate, which is the main library. I think I could have used cargo mutants --workspace to make it run all the things; maybe I'll try that next time.

Initial results

My initial run on rsvg-convert produced useful results: cargo-mutants found 32 mutations in the rsvg-convert source code that ought to have caused failures, but the test suite didn't catch them.

[Screenshot: the running output of cargo-mutants on the librsvg crate]

The second run, on the librsvg crate, took about 10 hours. It is fascinating to watch it run. In the end it found 889 mutations with bugs that the test suite couldn't catch:

5243 mutants tested in 9h 53m 15s: 889 missed, 3663 caught, 674 unviable, 17 timeouts

What does that mean?

  • 5243 mutants tested: how many modifications were tried on the code.

  • 889 missed: The important ones: after a modification was made, the test suite failed to catch this modification.

  • 3663 caught: Good! The test suite caught these!

  • 674 unviable: These modifications didn't compile. Nothing to do.

  • 17 timeouts: Worth investigating; maybe a function can be marked to be skipped for mutation.

Starting to analyze the results

Due to the way cargo-mutants works, the "missed" results come in an arbitrary order, spread among all the source files:

rsvg/src/path_parser.rs:857:9: replace <impl fmt::Display for ParseError>::fmt -> fmt::Result with Ok(Default::default())
rsvg/src/drawing_ctx.rs:732:33: replace > with == in DrawingCtx::check_layer_nesting_depth
rsvg/src/filters/lighting.rs:931:16: replace / with * in Normal::bottom_left
rsvg/src/test_utils/compare_surfaces.rs:24:9: replace <impl fmt::Display for BufferDiff>::fmt -> fmt::Result with Ok(Default::default())
rsvg/src/filters/turbulence.rs:133:22: replace - with / in setup_seed
rsvg/src/document.rs:627:24: replace match guard is_mime_type(x, "image", "svg+xml") with false in ResourceType::from
rsvg/src/length.rs:472:57: replace * with + in CssLength<N, V>::to_points

So, I started by sorting the missed.txt file from the results. This is much better:

rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with false
rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with true
rsvg/src/accept_language.rs:78:9: replace <impl fmt::Display for AcceptLanguageError>::fmt -> fmt::Result with Ok(Default::default())
rsvg/src/angle.rs:40:22: replace < with <= in Angle::bisect
rsvg/src/angle.rs:41:56: replace - with + in Angle::bisect
rsvg/src/angle.rs:49:35: replace + with - in Angle::flip
rsvg/src/angle.rs:57:23: replace < with <= in Angle::normalize

With the sorted results, I can clearly see how cargo-mutants gradually does its modifications on (say) all the arithmetic and logic operators to try to find changes that would not be caught by the test suite.

Look at the first two lines from above, the ones that refer to AcceptLanguage::any_matches:

rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with false
rsvg/src/accept_language.rs:136:9: replace AcceptLanguage::any_matches -> bool with true

Now look at the corresponding lines in the source:

... impl AcceptLanguage {
135     fn any_matches(&self, tag: &LanguageTag) -> bool {
136         self.iter().any(|(self_tag, _weight)| tag.matches(self_tag))
137     }
... }

The two lines from missed.txt mean that if the body of this any_matches() function were replaced with just true or false, instead of its actual work, there would be no failed tests:

135     fn any_matches(&self, tag: &LanguageTag) -> bool {
136         false // or true, either version wouldn't affect the tests
137     }

This is bad! It indicates that the test suite does not check that this function, or the surrounding code, is working correctly. And yet, the test coverage report for those lines shows that they are indeed getting executed by the test suite. What is going on?

I think this is what is happening:

  • The librsvg crate's tests do not have tests for AcceptLanguage::any_matches.
  • The rsvg_convert crate's integration tests do have a test for its --accept-language option, and that is what causes this code to get executed and shown as covered in the coverage report.
  • This run of cargo-mutants was just for the librsvg crate, not for the integrated librsvg plus rsvg_convert.

To get a bit pedantic about the purpose of tests: rsvg-convert assumes that the underlying librsvg library works correctly. The library advertises support in its API for matching based on AcceptLanguage, even though it doesn't test it internally.

On the other hand, rsvg-convert has a test for its own --accept-language option, in the sense of "did we implement this command-line option correctly", not in the sense of "does librsvg implement the AcceptLanguage functionality correctly".

After adding a little unit test for AcceptLanguage::any_matches in the librsvg crate, we can run cargo-mutants just for the accept_language.rs file again:

# cargo mutants --package librsvg --file accept_language.rs
Found 37 mutants to test
ok       Unmutated baseline in 24.9s build + 6.1s test
 INFO Auto-set test timeout to 31s
MISSED   rsvg/src/accept_language.rs:78:9: replace <impl fmt::Display for AcceptLanguageError>::fmt -> fmt::Result with Ok(Default::default()) in 4.8s build + 6.5s test
37 mutants tested in 2m 59s: 1 missed, 26 caught, 10 unviable

Great! As expected, we just have 1 missed mutant on that file now. Let's look into it.

The function in question is now <impl fmt::Display for AcceptLanguageError>::fmt, an error formatter for the AcceptLanguageError type:

impl fmt::Display for AcceptLanguageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::NoElements => write!(f, "no language tags in list"),
            Self::InvalidCharacters => write!(f, "invalid characters in language list"),
            Self::InvalidLanguageTag(e) => write!(f, "invalid language tag: {e}"),
            Self::InvalidWeight => write!(f, "invalid q= weight"),
        }
    }
}

What cargo-mutants means by "replace ... -> fmt::Result with Ok(Default::default())" is that if this function were modified to just be like this:

impl fmt::Display for AcceptLanguageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        Ok(Default::default())
    }
}

then no tests would catch that. Now, this is just a formatter function; the fmt::Result it returns is just whether the underlying call to write!() succeeded. When cargo-mutants discovers that it can change this function to return Ok(Default::default()) it is because fmt::Result is defined as Result<(), fmt::Error>, which implements Default because the unit type () implements Default.

In librsvg, those AcceptLanguageError errors are just surfaced as strings for rsvg-convert, so that if you give it a command-line argument with an invalid value like --accept-language=foo, it will print the appropriate error. However, rsvg-convert does not make any promises as to the content of error messages, so I think it is acceptable to not test this error formatter — just to make sure it handles all the cases, which is already guaranteed by its match statement. Rationale:

  • There already are tests to ensure that the error codes are computed correctly in the parser for AcceptLanguage; those are the AcceptLanguageError's enumeration variants.

  • There is a test in rsvg-convert's test suite to ensure that it detects invalid language tags and reports them.

For cases like this, cargo-mutants allows marking code to be skipped. After marking this fmt implementation with #[mutants::skip], there are no more missed mutants in accept_language.rs.

Yay!

Understanding the tool

You should absolutely read "using results" in the cargo-mutants documentation, which is very well-written. It gives excellent suggestions for how to deal with missed mutants. Again, these indicate potential gaps in your test suite. The documentation discusses how to think about what to do, and I found it very helpful.

Then you should read about genres of mutants. It tells you the kind of modifications that cargo-mutants does to your code. Apart from changing individual operators to try to compute incorrect results, it also does things like replacing whole function bodies to return a different value instead. What if a function returns Default::default() instead of your carefully computed value? What if a boolean function always returns true? What if a function that returns a HashMap always returns an empty hash table, or one full with the product of all keys and values? That is, do your tests actually check your invariants, or your assumptions about the shape of the results of computations? It is really interesting stuff!

Future work for librsvg

The documentation for cargo-mutants suggests how to use it in CI, to ensure that no uncaught mutants are merged into the code. I will probably investigate this once I have fixed all the missed mutants; this will take me a few weeks at least.

Librsvg already has the gitlab incantation to show test coverage for patches in merge requests, so it would be nice to know if the existing tests, or any new added tests, are missing any conditions in the MR. That can be caught with cargo-mutants.

Hackery relevant to my tests, but not to this article

If you are just reading about mutation testing, you can ignore this section. If you are interested in the practicalities of compilation, read on!

The source code for the librsvg crate uses a bit of conditional compilation to select whether to export functions that are used by the integration tests as well as the crate's internal tests. For example, there is some code for diffing two images, and this is used when comparing the pixel output of rendering an SVG to a reference image. For historical reasons, this code ended up in the main library, so that it can run its own internal tests, but then the rest of the integration tests also use this code to diff images. The librsvg crate exports the "diff two images" functions only if it is being compiled for the integration tests, and it doesn't export them for a normal build of the public API.

Somehow, cargo-mutants didn't understand this, and the integration tests did not build since the cargo feature to select that conditionally-compiled code... wasn't active, or something. I tried enabling it by hand with something like cargo mutants --package librsvg -- --features test-utils but that still didn't work.

So, I hacked up a temporary version of the source tree just for mutation testing, which always exports the functions for diffing images, without conditional compilation. In the future it might be possible to split out that code to a separate crate that is only used where needed and never exported. I am not sure how it would be structured, since that code also depends on librsvg's internal representation of pixel images. Maybe we can move the whole thing out to a separate crate? Stop using Cairo image surfaces as the way to represent pixel images? Who knows!