

2018w05-06: package drop list, Tumbleweed snapshots update, leaper no longer requires maintainer review, and more

package list generator gains drop list generation

The new pkglistgen.py, used by Leap 15.0, gained integration of the drop list generator, which adds weakremover(package_name) entries to the openSUSE-release package to clean up packages that were provided in previous Leap releases but are no longer available. The initial batch of changes was fairly significant. The release corresponding to the packages that are no longer provided can be seen as comments in 000product/obsoletepackages.inc. With the drop list integrated, the last piece of the upgrade puzzle should be in place and the openQA upgrade tests should be satisfied.
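
For reference, the weakremover entries take the form of RPM Provides on the release package; a dropped package ends up listed roughly like this (the package name here is invented for illustration):

Provides: weakremover(some-dropped-package)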

Additionally, as part of the drop list integration, a set of functions for determining the projects within a product family was added, which should be useful for abstracting leaper.py to avoid hard-coding potential request project sources.

Tumbleweed snapshots: update and Mesa postmortem usage

Up until now I had been watching over the snapshotting process manually to gauge when to trigger the full snapshot and to ensure things worked properly. Part of the reason was still having to rely on a non-stage.opensuse.org mirror and needing to ensure the mirror was fully in sync before snapshotting. After observing the process I went ahead and added a two-phase update mechanism, as had been my intention. When a new snapshot is detected, the metadata and redirection rules are updated immediately, but the actual snapshotting is delayed by 4 hours. This allows end-users to begin using the latest snapshot right away, since it will redirect to the mirror network even though the files have not yet been snapshotted.

Unfortunately, I discovered that AWS S3 static website hosting has a limit of 50 redirection rules, which therefore limits the number of snapshots to 50 as well. As such, a count limiter was added to the expiration routine and the first snapshot, 20171115, was removed. Going forward, a rolling 50 snapshots will be kept unless proper openSUSE hosting is provided, which could use a different redirection mechanism without the limit.
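
To see why the limit matters, note that each snapshot consumes one of the S3 website RoutingRules; a single rule looks roughly like this sketch (host name and prefixes are illustrative, not the actual configuration):

<RoutingRule>
  <Condition>
    <KeyPrefixEquals>tumbleweed/20180131/</KeyPrefixEquals>
  </Condition>
  <Redirect>
    <HostName>mirror.example.net</HostName>
    <ReplaceKeyPrefixWith>tumbleweed/repo/oss/</ReplaceKeyPrefixWith>
  </Redirect>
</RoutingRule>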

As many of you are likely aware, Mesa 18.0.0-rc3 caused quite a few issues for Tumbleweed users due to on-disk cache corruption. As part of the postmortem, Tumbleweed snapshots were used to compare released versions of Mesa to verify assumptions and behaviors of Mesa, to aid in fully resolving the problem upstream.

leaper no longer requires maintainer review for Factory submissions

Based on feedback from maintainers, the leaper bot will no longer require maintainer review for automated submissions from Factory to Leap. Expect this to be deployed early next week.

request splitter grouping by hashtag

In an interesting use-case of the request splitter's grouping functionality, the CaaSP staging master inquired about allowing submitters to place a hashtag, or similar marker, in request descriptions, which could then be used to group the requests together in a single staging. Given the flexible nature of the request splitter, the capability already exists, so it was added as an example in the documentation. It may also make sense to create a strategy that looks for certain hashtags and automatically groups requests in such a manner.

osc staging select --filter-by 'contains(description, "#Portus")'

repo checker improving turnaround time

One of the goals in making comments on devel packages was to notify maintainers of problems that made it past the staging process and to hasten the arrival of a fix. Obviously, the preference is that problems never make it past staging, but given the nature of the process this cannot be guaranteed. By observing the comments made by the repo checker and the corresponding problems being removed from the overall list of issues, one can see some rather fast turnaround times (adding this to metrics.o.o would be interesting). One such example can be seen with the cinnamon package, which had the fix already accepted into Factory by the time a user reported the problem. Certainly an encouraging result, and many thanks to the maintainer.


A variety of long-standing bugs and feature requests were handled with regard to factory-auto.

  • #673: source-checker: handle add/remove changes files for patch diff check.
    • packages with multiple subpackages and .changes files would cause an error when removing a .changes file
  • #679: check_tags: warn when issue references are removed (plus other fixes)
    • the existing code was backwards among other things
  • #682: check_source: allow self-submission.
    • allows for reverting a package by submitting a previous revision of itself

On a related note, the underlying ReviewBot code was refactored to improve reuse.

  • #684: ReviewBot: refactor leaper comment from log functionality.
    • laid the groundwork for many future improvements to ReviewBot comment handling
  • #685: ReviewBot: use super().check_source_submission() in subclasses.

The request splitter saw a number of bug fixes post-porting and follow-ups to ported behavior.

  • #662: Remove source devel project check carried over from flawed adi logic
  • #674: request_splitter: delete requests should always be considered in a ring.
  • #676: request_splitter: house keeping (and fix for SLE [non-ring] workflow)
  • #681: adi: map wanted_requests to str() before passing to splitter.

A major performance improvement was made to the staging tools along with documentation improvements and a link to the staging dashboard to increase visibility to submitters.

  • #654: stagingapi: Avoid search/package query to determine devel project.
  • #683: stagingapi: update_status_comments() include link to dashboard.
  • #686: osc-staging: add missing documentation and correct/clean existing

The issue diffing tool, for reporting issues fixed in SLE that have not yet been merged into Factory, was enhanced to intelligently pick a user to whom the auto-generated bugs should be assigned.


FOSDEM 2K18

The Free and Open Source Software Developers' European Meeting (FOSDEM) 2018 happened over the weekend. FOSDEM is a fantastic conference where Free Software enthusiasts from all over Europe meet once a year (more than 17000 MAC addresses were registered this time). I really like its clearly unique atmosphere. As usual, 2 amazing days in Brussels, Belgium…

That is the reason why I visit it again and again: it recharges you. When you see all these developers, attend the talks and presentations, have so many conversations with contributors, and pick up the innovative ideas shared there, it motivates you to get a copy of The Linux Programming Interface and the TCP/IP Guide and hack away on your amazing Linux software 🙂
Most of the talks were recorded. That's nice, but by watching them online you just get information; by attending the talks in person and taking part in the discussions, you get much, much more.

This time I was there together with my LiMux colleagues from Landeshauptstadt München. By the way, as you may know, we're going to share as much as we can until Munich migrates to Windows (unfortunately, for bureaucratic reasons, making software free to share is sometimes not as easy as we would like).

As in previous years, FOSDEM was organized into so-called “developer rooms”. At first I planned to visit devrooms such as debugging tools, DNS/DNSSEC, and security + encryption; that was my target when I planned my program. But as I noticed later, I was not the only one who planned to visit the same talks, hacking sessions and open discussions 🙂 That led to a shortage of free places in the devrooms and made my program a bit more dynamic than first planned 🙂 But don't get me wrong – that was absolutely no problem for me. Outside the rooms we also had very interesting conversations. I met ex-colleagues, friends whom I had known only from mailing lists and IRC, and of course a lot of openSUSE contributors.

I would like to thank FOSDEM's staff and everyone who made it happen and helped to organize it (I'm definitely going to send feedback to you guys). Thanks to GitHub for the free coffee. Keep it going 🙂 I also have to say thanks to openSUSE's Travel Support Program, which supported my visit to this amazing event (and not for the first time!). I'm going to visit FOSDEM again next year. My photos can be found here. See you next time 😉

Alberto Garcia

Changing the date on a series of images

One of the problems with camera traps is that as soon as they spend more than 10 seconds without batteries, the clock resets and the date printed on the photo is wrong, or very wrong. In most cases it goes back to the year of manufacture; in others, when the battery failure is slight or momentary, it falls back 20 hours or 30 minutes.
If (as in my case) the date/time at which the image was captured is important, this is a nuisance, but it is easy to fix if you know how far behind the camera's clock was. Obviously, when this fox was photographed it was not 13:11.

The difficulty lies in knowing by how much the camera's clock is off, but thanks to the photos the camera takes of us when we install or remove it, it is relatively easy to calculate this offset with a margin of error of only a few minutes.
In my case, I collected the photographs at 19:00, and at that moment the camera took a photo of me stamped as 02:16 of the same day; that is, the camera was running 16 hours and 45 minutes behind.

Changing EXIF data

To read and write EXIF data in image files, exiftool is fundamental. The -alldates shortcut lets us see all the dates associated with one or more images:

exiftool -alldates IMAG_0012.JPG
Date/Time Original : 2018:01:31 13:05:09
Create Date : 2018:01:31 13:05:09
Modify Date : 2018:01:31 13:05:09

To correct a whole series of photos and move their creation and modification dates forward by 16 hours and 45 minutes, just enter the folder that contains them and run in a terminal:

exiftool -alldates+="0:0:0 16:45:0" *
# to move dates back 1 year, 20 days and three hours:
exiftool -alldates-="1:0:20 3:0:0" *.JPG

The format is YEAR:MONTH:DAY HOUR:MINUTE:SECONDS. Every field must be specified, even the ones you are not going to modify, by setting them to zero as in the example.
In a second, exiftool modifies all the indicated images, keeping a backup of each unmodified file with the suffix “_original” appended.
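
Once the corrected dates have been verified, the “_original” backups can be removed with exiftool itself (the current folder here is just an example):

exiftool -delete_original .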

Now you can rename the images according to their creation date simply by running:

exiftool -FileOrder FileNumber "-FileName<CreateDate" -d AGQ_%Y%m%d_%H%M%S%%-c.%%le

That would generate a list of files like:

AGQ_20180130_002234.jpg
AGQ_20180130_125225.jpg
AGQ_20180127_144258.jpg
AGQ_20180131_024455.jpg
AGQ_20180206_141620.jpg
AGQ_20180205_200035.jpg
AGQ_20180203_124745.jpg

Note: the “-FileOrder FileNumber” option is worth including (though not mandatory) for the following reason: if the photos were taken hours or minutes apart there is no problem with the renaming, but if the photos come from a burst shot on, for example, an SLR camera, where several photos share the same hour:minute:second of creation, exiftool will rename them by appending -1, -2, -3, etc. at the end, and that index may not respect the order in which the burst was taken. By including FileNumber we tell it to respect, and keep, the numeric order of the original names. That way the photos FOTO_0244.NEF, FOTO_0245.NEF and FOTO_0246.NEF become AGQ_20180110_123456-1.nef, AGQ_20180110_123456-2.nef and AGQ_20180110_123456-3.nef, sharing their creation date and order.

Fixing the date printed on the image

(this one earns bonus points :D)
Obviously, in the previous step's example we only modified the EXIF data embedded in the photo file. But many camera traps also overprint onto the image itself the date on which it was created (printed wrongly, because the camera's clock was misconfigured).
Nothing that can't be fixed, though: with a couple of lines and another fabulous tool, ImageMagick, we can sort it out quickly.
The following lines take all the JPGs in a folder, renamed as in the previous step, and overprint the photograph's corrected date onto each image:
mkdir corregidos
# filenames follow the AGQ_YYYYMMDD_HHMMSS pattern, so they contain no spaces
for file in $(find . -type f -name "*jpg"); do
  # recover the date from the filename as MM/DD/YYYY HH:MM:SS
  t=$(echo "$file" | sed -r 's/.\/AGQ_([0-9]{4})([0-9]{2})([0-9]{2})_([0-9]{2})([0-9]{2})([0-9]{2}).jpg/\2\/\3\/\1 \4:\5:\6/')
  # paint a light box over the old timestamp and draw the corrected date on top
  convert "$file" -gravity SouthEast -fill '#dddddd' -draw 'rectangle 1600,1845 2250,1950' -fill black -pointsize 76 -annotate +320+5 "$t" -flatten "./corregidos/$file"
done

With that, 200 photographs are corrected in barely 30 seconds.
Image with both the EXIF date and the printed date corrected

Greg Kroah-Hartman

Linux Kernel Release Model

Note

This post is based on a whitepaper I wrote at the beginning of 2016 to help many different companies understand the Linux kernel release model and to encourage them to take the LTS stable updates more often. I then used it as the basis of a presentation I gave at the Kernel Recipes conference in September 2017, which can be seen here.

With the recent craziness of Meltdown and Spectre, I've seen lots of things written about how Linux is released and how we handle security patches that are totally incorrect, so I figured it is time to dust off the text, update it in a few places, and publish it here for everyone to benefit from.

Federico Mena-Quintero

Writing a command-line program in Rust

As a library writer, it feels a bit strange, but refreshing, to write a program that actually has a main() function.

My experience with Rust so far has been threefold:

  • Porting chunks of C to Rust for librsvg - this is all work on librsvg's internals and no users are exposed to it directly.

  • Working on gnome-class, the procedural macro ("a little compiler") to generate GObject boilerplate from Rust. This feels like working on the edge of the exotic; it is something that runs in the Rust compiler and spits code on behalf of the programmer.

  • A few patches to the gtk-rs ecosystem. Again, work on the internals, or something that feels library-like.

But other than toy programs to test things, I haven't written a stand-alone tool until rsvg-bench. It's quite a thrill to be able to just run the thing instead of waiting for other people to write code to use it!

Parsing command-line arguments

There are quite a few Rust crates ("libraries") to parse command-line arguments. I read about structopt via Robert O'Callahan's blog; structopt lets you define a struct to hold the values of your command-line options, and then you annotate the fields in that struct to indicate how they should be parsed from the command line. It works via Rust's procedural macros. Internally it generates stuff for the clap crate, a well-established mechanism for dealing with command-line options.

And it is quite pleasant! This is basically all I needed to do:

#[derive(StructOpt, Debug)]
#[structopt(name = "rsvg-bench", about = "Benchmarking utility for librsvg.")]
struct Opt {
    #[structopt(short = "s",
                long  = "sleep",
                help  = "Number of seconds to sleep before starting to process SVGs",
                default_value = "0")]
    sleep_secs: usize,

    #[structopt(short = "p",
                long  = "num-parse",
                help  = "Number of times to parse each file",
                default_value = "100")]
    num_parse: usize,

    #[structopt(short = "r",
                long  = "num-render",
                help  = "Number of times to render each file",
                default_value = "100")]
    num_render: usize,

    #[structopt(long = "pixbuf",
                help = "Render to a GdkPixbuf instead of a Cairo image surface")]
    render_to_pixbuf: bool,

    #[structopt(help = "Input files or directories",
                parse(from_os_str))]
    inputs: Vec<PathBuf>
}

fn main() {
    let opt = Opt::from_args();

    if opt.inputs.len() == 0 {
        eprintln!("No input files or directories specified\n");
        process::exit(1);
    }

    ...
}

Each field in the Opt struct above corresponds to one command-line argument; each field has annotations for structopt to generate the appropriate code to parse each option. For example, the render_to_pixbuf field has a long option name called "pixbuf"; that field will be set to true if the --pixbuf option gets passed to rsvg-bench.
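
For instance, a hypothetical invocation using the options defined above (the input path is made up for illustration):

rsvg-bench --sleep 2 --num-parse 1 --num-render 100 ~/src/icons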

Handling errors

Command-line programs generally have the luxury of being able to just exit as soon as they encounter an error.

In C this is a bit cumbersome since you need to deal with every place that may return an error, find out what to print, and call exit(1) by hand or something. If you miss a single place where an error is returned, your program will keep running with an inconsistent state.

In languages with exception handling, it's a bit easier - a small script can just let exceptions be thrown wherever, and if it catches them at the toplevel, it can just print the exception and abort gracefully. However, these nonlocal jumps make me uncomfortable; I think exceptions are hard to reason about.

Rust makes this easy: it forces you to handle every call that may return an error, but it lets you bubble errors up easily, or handle them in-place, or translate them to a higher-level error.

In the Rust world the failure crate is getting a lot of traction as a convenient, modern way to handle errors.

In rsvg-bench, errors can come from several places:

  • I/O errors when reading files and directories.

  • Errors from librsvg's parsing stage; you get a GError.

  • Errors from the rendering stage. This can be a Cairo error (a cairo_status_t), or a simple "something bad happened; can't render" from librsvg's old convenience api in C. Don't you hate it when C code just gives up and returns NULL or a boolean false, without any further details on what went wrong?

For rsvg-bench, I just needed to be able to represent Cairo errors and generic rendering errors. Everything else, like an io::Error, is automatically wrapped by the failure crate's mechanism. I just needed to do this:

extern crate failure;
#[macro_use]
extern crate failure_derive;

#[derive(Debug, Fail)]
enum ProcessingError {
    #[fail(display = "Cairo error: {:?}", status)]
    CairoError {
        status: cairo::Status
    },

    #[fail(display = "Rendering error")]
    RenderingError
}

Whenever the code gets a Cairo error, I can translate it to a ProcessingError::CairoError and bubble it up:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    let dim = handle.get_dimensions();
    let surface = cairo::ImageSurface::create(cairo::Format::ARgb32,
                                              dim.width,
                                              dim.height)
        .map_err(|e| ProcessingError::CairoError { status: e })?;

    ...
}

And when librsvg returns a "couldn't render" error, I translate that to a ProcessingError::RenderingError:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    ...

    let cr = cairo::Context::new(&surface);

    if handle.render_cairo(&cr) {
        Ok(())
    } else {
        Err(Error::from(ProcessingError::RenderingError))
    }
}

Here, the Ok() case of the Result does not contain any value — it's just (), as the generated images are not stored anywhere: they are just rendered to get some timings, not to be saved or anything.

Up to where do errors bubble?

This is the "do everything" function:

fn run(opt: &Opt) -> Result<(), Error> {
    ...

    for path in &opt.inputs {
        process_path(opt, &path)?;
    }

    Ok(())
}

For each path passed on the command line, process it. If the path corresponds to a directory, the program scans it recursively; if the path is an SVG file, the program loads and renders it.

Finally, main() just has this:

fn main() {
    let opt = Opt::from_args();

    ...

    match run(&opt) {
        Ok(_) => (),
        Err(e) => {
            eprintln!("{}", e);
            process::exit(1);
        }
    }
}

I.e. process command line arguments, run the whole thing, and print an error if there was one.

I really appreciate that most places that can return an error can just put a ? for the error to bubble up. This is much more legible than in C, where every call must have an if (something_bad_happened) { deal_with_it; } after it... and Rust won't let me get away with ignoring an error, but it makes it easy to actually deal with it properly.

Reading an SVG file quickly

Why, just mmap() it and feed it to librsvg, to avoid buffer copies. This is easy in Rust:

fn process_file<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let file = File::open(path)?;
    let mmap = unsafe { MmapOptions::new().map(&file)? };

    let bytes = &mmap;

    let handle = rsvg::Handle::new_from_data(bytes)?;
    ...
}

Many things can go wrong here:

  • File::open() can return an io::Error.
  • MmapOptions::map() can return an io::Error from the mmap(2) system call, or from the fstat(2) to read the file's size to map it.
  • rsvg::Handle::new_from_data() can return a GError from parsing the file.

The little ? characters after each call that can return an error mean, just give me back the result, or convert the error to a failure::Error that can be examined later. This is beautifully legible to me.

Summary

Writing command-line programs in Rust is fun! It's nice to have neurotically-safe scripts that one can trust in the future.

Rsvg-bench is available here.

Klaas Freitag

Kraft Moving to KDE Frameworks: Beta Release!

Kraft is KDE/Qt-based desktop software to manage documents like quotes and invoices in a small business. It focuses on ease of use through an intuitive GUI and a well-chosen feature set, and it ensures privacy by keeping data local.

Kraft has been around for more than twelve years, but it has been a little quiet recently. However, Kraft is alive and kicking!

I am very happy to announce the first public beta version of Kraft 0.80, the first Kraft version based on KDE Frameworks 5 and Qt 5.x.

It not only went through the process of being ported to Qt5/KF5; I also took the opportunity to refactor and to tackle a lot of issues that Kraft had suffered from in the past.

Here are a few examples, a full changelog will follow:

  • Akonadi dependency: Earlier Kraft versions had a hard dependency on Akonadi, because Kraft uses the KDE Addressbook to manage customer addresses. Without Akonadi up and running, Kraft was not functional, and people who tested it without Akonadi walked away with a bad impression.

    Because Akonadi and the KDE contacts integration are perfect for this use case, they are still the way to go for Kraft, and I am delighted to build on such strong shoulders. But Kraft now also works without Akonadi: users get a warning that the full address book integration is not available, but they can enter addresses manually and continue to create documents. Kraft remains fully functional.

    Also, a better abstraction of the Akonadi-based functionality in Kraft eases porting to platforms where other system address books are available, such as MacOSX.

  • AppImage: The new Kraft is available as an AppImage.

    There was a lot of feedback from people who could not test Kraft because it was hard to set up or compile and dependencies were missing. The major Linux distributions seem unable to keep up with current versions of leaf packages like Kraft, and I cannot package for the huge variety of distros myself. So AppImage, as a container format for GUI applications, seems to be a perfect fit here.

  • A lot more was done. Kraft got simplifications in both the code base and the functionality, careful GUI changes, and a smaller dependency stack. You should check it out!

Today (on FOSDEM weekend – what better date could there be?) the pre-release version 0.80 beta10 is announced to the public.

I would appreciate it if people tested it and reported bugs at Github: that is where development is currently happening.


Ceph Day Germany 2018 - Update

The German Ceph Day 2018 in Darmstadt is finally only a few days away (7 February 2018).
The agenda is now complete: there are 13 talks and a short Q&A session planned during the day.
150 attendees have already signed up, and thanks to the support of our latest sponsor, Intel, we are now able to host up to 175 interested members of the big Ceph community. Only a limited number of tickets are left, so be quick and register for one while they are still available.

I would like to give a big thanks in advance to the Ceph community team, all the sponsors (SUSE, Deutsche Telekom AG, Tallence AG, SAP, Intel, 42on.com, Red Hat, Starline), and the speakers and supporters who make this event possible.

Some information if you attend: 

  • The location is at T-Online-Allee 1, 64295 Darmstadt. The entrance to the Deutsche Telekom building is here. Please check this page for directions, traffic, parking and hotel information.
  • Registration will be open from 8:15am. Please register at Eventbrite so that we can be sure you get a security badge to access the venue. In case the Ceph Day registration desk is closed, get your security badge from the front desk and tell them you are attending the Ceph Day in the Forum. You will get your name tag and goodies during the next break.
See you in Darmstadt! Enjoy the Ceph Day!

Federico Mena-Quintero

rsvg-bench - a benchmark for librsvg

Librsvg 2.42.0 came out with a rather major performance regression compared to 2.40.20: SVGs with many transform attributes would slow it down. It was fixed in 2.42.1. We changed from using a parser that would recompile regexes each time it was called, to one that does simple string-based matching and parsing.

When I rewrote librsvg's parser for the transform attribute from C to Rust, I was just learning about writing parsers in Rust. I chose lalrpop, an excellent, Yacc-like parser generator for Rust. It generates big, fast parsers, like what you would need for a compiler — but it compiles the tokenizer's regexes each time you call the parser. This is not a problem for a compiler, where you basically call the parser only once, but in librsvg, we may call it thousands of times for an SVG file with thousands of objects with transform attributes.

So, for 2.42.1 I rewrote that parser using rust-cssparser. This is what Servo uses to parse CSS data; it's a simple tokenizer with an API that knows about CSS's particular constructs. This is exactly the kind of data that librsvg cares about. Today all of librsvg's internal parsers work using rust-cssparser, or they are so simple that they can be done with Rust's normal functions to split strings and such.
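
As a rough illustration of the string-based approach (this is a sketch in the same spirit, not librsvg's actual code), a simple transform like "translate(x, y)" can be parsed with plain string operations, with no regex compilation at all:

fn parse_translate(s: &str) -> Option<(f64, f64)> {
    // Expect exactly "translate(<x>,<y>)"; the real SVG grammar allows more variations.
    let rest = s.trim().strip_prefix("translate(")?.strip_suffix(')')?;
    let mut parts = rest.split(',').map(str::trim);
    let x: f64 = parts.next()?.parse().ok()?;
    let y: f64 = parts.next()?.parse().ok()?;
    Some((x, y))
}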

Getting good timings

Librsvg ships with rsvg-convert, a command-line utility that can render an SVG file and write the output to a PNG. While it would be possible to get timings for SVG rendering by timing how long rsvg-convert takes to run, it's a bit clunky for that. The process startup adds noise to the timings, and it only handles one file at a time.

So, I've written rsvg-bench, a small utility to get timings out of librsvg. I wanted a tool that:

  • Is able to process many SVG images with a single command. For example, this lets us answer a question like, "how long does version N of librsvg take to render a directory full of SVG icons?" — which is important for the performance of an application chooser.

  • Is able to repeatedly process SVG files, for example, "render this SVG 1000 times in a row". This is useful to get accurate timings, as a single render may only take a few microseconds and may be hard to measure. It also helps with running profilers, as they will be able to get more useful samples if the SVG rendering process runs repeatedly for a long time.

  • Exercises librsvg's major code paths for parsing and rendering separately. For example, librsvg uses different parts of the XML parser depending on whether it is being pushed data, vs. being asked to pull data from a stream. Also, we may only want to benchmark the parser but not the renderer; or we may want to parse SVGs only once but render them many times after that.

  • Is aware of librsvg's peculiarities, such as the extra pass to convert a Cairo image surface to a GdkPixbuf when one uses the convenience function rsvg_handle_get_pixbuf().

Currently rsvg-bench supports all of that.

An initial benchmark

I ran this

/usr/bin/time rsvg-bench -p 1 -r 1 /usr/share/icons

to cause every SVG icon in /usr/share/icons to be parsed once, and rendered once (i.e. just render every file sequentially). I did this for librsvg 2.40.20 (C only), and 2.42.{0, 1, 2} (C and Rust). There are 5522 SVG files in there. The timings look like this:

version   time (sec)
2.40.20    95.54
2.42.0    209.50
2.42.1     97.18
2.42.2     95.89

[Bar chart of timings]

So, 2.42.0 was over twice as slow as the C-only version, due to the parsing problems. But now, 2.42.2 is practically just as fast as the C-only version. What made this possible?

  • 2.40.20 - the old C-only version
  • 2.42.0 - C + Rust, with a lalrpop parser for the transform attribute
  • 2.42.1 - Servo's cssparser for the transform attribute
  • 2.42.2 - removed most C-to-Rust string copies during parsing

I have started taking profiles of rsvg-bench runs with sysprof, and there are some improvements worth making. Expect news soon!

Rsvg-bench is available in GNOME's GitLab instance.


Pair programming with git

Git is great. It took the crown of version control systems in just a few years. Baked into the git model is that each commit has one committer and one author; often this is the same person. What if there is more than one author for a commit? This is the case with pair programming, with mob programming, or with any other way of collaborating where code is produced by more than one person. I talked about this at the git-merge conference last year. There are some workarounds, but there is no native support in git yet.

It seems that the predominant convention for expressing multi-authorship in git commits is to add a Co-authored-by entry to the commit message as a so-called trailer. This adds more flexibility than trying to tweak the author and committer fields, and it is quite widely accepted, especially by the git community.
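
For illustration, a commit message using this convention might look like the following (the subject line and names are made up):

Add retry logic to the sync job

Co-authored-by: Jane Doe <jane.doe@example.com>
Co-authored-by: John Smith <john.smith@example.com>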

I'm happy that GitHub added support for the Co-authored-by convention now. It makes multi-authorship more visible. That's a good thing.

I did some work on adding native support for multiple authors in git. The direct approach of allowing more than one author field might be too intrusive due to the many possible side effects, but the Co-authored-by trailer is still a good solution. I have an unfinished patch to add some native support for it in git. It does need some more work, though.

Contributing to git is an interesting experience. The git mailing list is the central place. The contribution workflow is well-documented. It's good that Junio as maintainer has spelled out how he reviews patches and what that means for contributors. And it's definitely fun to work on a self-contained C project.

I'm looking forward to more multi-author support in git and GitHub. Pair-programming is a great model, and properly reflecting in the commit logs what happened when the code was written is the right thing to do.