Writing a command-line program in Rust
As a library writer, it feels a bit strange, but refreshing, to write
a program that actually has a main() function.
My experience with Rust so far has been threefold:
- Porting chunks of C to Rust for librsvg - this is all work on librsvg's internals and no users are exposed to it directly.
- Working on gnome-class, the procedural macro ("a little compiler") to generate GObject boilerplate from Rust. This feels like working on the edge of the exotic; it is something that runs in the Rust compiler and spits out code on behalf of the programmer.
- A few patches to the gtk-rs ecosystem. Again, work on the internals, or something that feels library-like.
But other than toy programs to test things, I hadn't written a stand-alone tool until rsvg-bench. It's quite a thrill to be able to just run the thing instead of waiting for other people to write code to use it!
Parsing command-line arguments
There are quite a few Rust crates ("libraries") to parse command-line
arguments. I read about structopt via Robert O'Callahan's
blog; structopt lets you define a struct to hold the values of
your command-line options, and then you annotate the fields in that
struct to indicate how they should be parsed from the command line.
It works via Rust's procedural macros. Internally it generates stuff
for the clap crate, a well-established mechanism for dealing with
command-line options.
And it is quite pleasant! This is basically all I needed to do:
#[derive(StructOpt, Debug)]
#[structopt(name = "rsvg-bench", about = "Benchmarking utility for librsvg.")]
struct Opt {
    #[structopt(short = "s",
                long = "sleep",
                help = "Number of seconds to sleep before starting to process SVGs",
                default_value = "0")]
    sleep_secs: usize,

    #[structopt(short = "p",
                long = "num-parse",
                help = "Number of times to parse each file",
                default_value = "100")]
    num_parse: usize,

    #[structopt(short = "r",
                long = "num-render",
                help = "Number of times to render each file",
                default_value = "100")]
    num_render: usize,

    #[structopt(long = "pixbuf",
                help = "Render to a GdkPixbuf instead of a Cairo image surface")]
    render_to_pixbuf: bool,

    #[structopt(help = "Input files or directories",
                parse(from_os_str))]
    inputs: Vec<PathBuf>
}
fn main() {
    let opt = Opt::from_args();

    if opt.inputs.len() == 0 {
        eprintln!("No input files or directories specified\n");
        process::exit(1);
    }

    ...
}
Each field in the Opt struct above corresponds to one command-line
argument; each field has annotations for structopt to generate the
appropriate code to parse each option. For example, the
render_to_pixbuf field has a long option name called "pixbuf";
that field will be set to true if the --pixbuf option gets passed
to rsvg-bench.
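For example, given the struct above, a hypothetical invocation (the directory name here is made up) could look like this:

rsvg-bench --sleep 2 -p 10 -r 50 --pixbuf ~/icons

That would sleep for 2 seconds, then parse each SVG under ~/icons 10 times and render it 50 times, to a GdkPixbuf instead of a Cairo image surface.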
Handling errors
Command-line programs generally have the luxury of being able to just exit as soon as they encounter an error.
In C this is a bit cumbersome, since you need to deal with every place that may return an error, find out what to print, and call exit(1) by hand or something. If you miss a single place where an error is returned, your program will keep running in an inconsistent state.
In languages with exception handling, it's a bit easier - a small script can just let exceptions be thrown wherever, and if it catches them at the toplevel, it can just print the exception and abort gracefully. However, these nonlocal jumps make me uncomfortable; I think exceptions are hard to reason about.
Rust makes this easy: it forces you to handle every call that may return an error, but it lets you bubble errors up easily, or handle them in-place, or translate them to a higher-level error.
In the Rust world, the failure crate is getting a lot of traction as a convenient, modern way to handle errors.
In rsvg-bench, errors can come from several places:
- I/O errors when reading files and directories.
- Errors from librsvg's parsing stage; you get a GError.
- Errors from the rendering stage. This can be a Cairo error (a cairo_status_t), or a simple "something bad happened; can't render" from librsvg's old convenience API in C. Don't you hate it when C code just gives up and returns NULL or a boolean false, without any further details on what went wrong?
For rsvg-bench, I just needed to be able to represent Cairo errors and
generic rendering errors. Everything else, like an io::Error, is
automatically wrapped by the failure crate's mechanism. I just
needed to do this:
extern crate failure;
#[macro_use]
extern crate failure_derive;
#[derive(Debug, Fail)]
enum ProcessingError {
    #[fail(display = "Cairo error: {:?}", status)]
    CairoError {
        status: cairo::Status
    },

    #[fail(display = "Rendering error")]
    RenderingError
}
Whenever the code gets a Cairo error, I can translate it to a
ProcessingError::CairoError and bubble it up:
fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    let dim = handle.get_dimensions();

    let surface = cairo::ImageSurface::create(cairo::Format::ARgb32,
                                              dim.width,
                                              dim.height)
        .map_err(|e| ProcessingError::CairoError { status: e })?;

    ...
}
And when librsvg returns a "couldn't render" error, I translate that
to a ProcessingError::RenderingError:
fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    ...

    let cr = cairo::Context::new(&surface);

    if handle.render_cairo(&cr) {
        Ok(())
    } else {
        Err(Error::from(ProcessingError::RenderingError))
    }
}
Here, the Ok() case of the Result does not contain any value —
it's just (), as the generated images are not stored anywhere: they
are just rendered to get some timings, not to be saved or anything.
Up to where do errors bubble?
This is the "do everything" function:
fn run(opt: &Opt) -> Result<(), Error> {
    ...

    for path in &opt.inputs {
        process_path(opt, &path)?;
    }

    Ok(())
}
For each path passed on the command line, process it: if the path corresponds to a directory, the program scans it recursively; if it is an SVG file, the program loads and renders it.
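A minimal sketch of what process_path could look like, based on the description above (this is an illustration, not necessarily rsvg-bench's actual code; process_file is the function shown later in this post):

use std::fs;
use std::path::Path;

fn process_path<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let path = path.as_ref();

    if path.is_dir() {
        // Recurse into directories
        for entry in fs::read_dir(path)? {
            process_path(opt, entry?.path())?;
        }
    } else if path.extension().map_or(false, |ext| ext == "svg") {
        // Load and render a single SVG file
        process_file(opt, path)?;
    }

    Ok(())
}

Note how the ? on fs::read_dir() bubbles up any io::Error, just like in the other functions.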
Finally, main() just has this:
fn main() {
    let opt = Opt::from_args();

    ...

    match run(&opt) {
        Ok(_) => (),

        Err(e) => {
            eprintln!("{}", e);
            process::exit(1);
        }
    }
}
I.e. process command line arguments, run the whole thing, and print an error if there was one.
I really appreciate that most places that can return an error can just use a ? for the error to bubble up. This is much more legible than in C, where every call must be followed by an if (something_bad_happened) { deal_with_it; }... and Rust won't let me get away with ignoring an error, but it makes it easy to actually deal with it properly.
Reading an SVG file quickly
Why, just mmap() it and feed it to librsvg, to avoid buffer copies.
This is easy in Rust:
fn process_file<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let file = File::open(path)?;
    let mmap = unsafe { MmapOptions::new().map(&file)? };
    let bytes = &mmap;
    let handle = rsvg::Handle::new_from_data(bytes)?;

    ...
}
Many things can go wrong here:
- File::open() can return an io::Error.
- MmapOptions::map() can return an io::Error from the mmap(2) system call, or from the fstat(2) used to read the file's size before mapping it.
- rsvg::Handle::new_from_data() can return a GError from parsing the file.
The little ? characters after each call that can return an error
mean, just give me back the result, or convert the error to a
failure::Error that can be examined later. This is beautifully
legible to me.
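To make that concrete: a call like File::open(path)? expands to roughly the following (a simplification of what the compiler actually generates):

let file = match File::open(path) {
    Ok(f) => f,
    // The From conversion is what turns the io::Error
    // into a failure::Error here
    Err(e) => return Err(Error::from(e)),
};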
Summary
Writing command-line programs in Rust is fun! It's nice to have neurotically-safe scripts that one can trust in the future.
Kraft Moving to KDE Frameworks: Beta Release!
Kraft is KDE/Qt based desktop software to manage documents like quotes and invoices in small businesses. It focuses on ease of use through an intuitive GUI and a well-chosen feature set, and ensures privacy by keeping data local.
Kraft has been around for more than twelve years, but it has been a little quiet recently. However, Kraft is alive and kicking!
I am very happy to announce the first public beta version of Kraft V. 0.80, the first Kraft version that is based on KDE Frameworks 5 and Qt 5.x.
It not only went through the process of being ported to Qt5/KF5; I also took the opportunity to refactor and tackle a lot of issues that Kraft had suffered from in the past.
Here are a few examples; a full changelog will follow:
- Akonadi dependency: Earlier Kraft versions had a hard dependency on Akonadi, because Kraft uses the KDE Addressbook to manage customer addresses. Without having Akonadi up and running, Kraft was not functional. People who tested Kraft without Akonadi running walked away with a bad impression of it.
Because Akonadi and the KDE contacts integration are perfect for this use case, they are still the way to go for Kraft, and I am delighted to build on such strong shoulders. But Kraft now also works without Akonadi: users get a warning that the full address book integration is not available, but they can enter addresses manually and continue to create documents with Kraft. It remains fully functional.
Also, a better abstraction of the Akonadi-based functionality in Kraft eases porting to platforms where other system address books are available, such as macOS.
- AppImage: The new Kraft is available as an AppImage.
There was a lot of feedback that people could not test Kraft because it was hard to set up or compile, and dependencies were missing. The major Linux distributions seem to be unable to keep up with current versions of leaf packages like Kraft, and I cannot do that for the huge variety of distros. So AppImage as a container format for GUI applications seems to be a perfect fit here.
- A lot more was done. Kraft got simplifications in both the code base and the functionality, careful GUI changes, and a decreased dependency stack. You should check it out!
Today (on FOSDEM weekend; what better date could there be?) the pre-release version 0.80 beta10 is announced to the public.
I would appreciate it if people test Kraft and report bugs on Github: that is where the development is currently happening.
Ceph Day Germany 2018 - Update
- The location is T-Online-Allee 1, 64295 Darmstadt; the entrance to the Deutsche Telekom building is here. Please check this page for directions, traffic, parking, and hotel information.
- Registration will be open from 8:15 am. Please register at Eventbrite so that we can be sure you get a security badge to access the venue. In case the Ceph Day registration desk is closed, get your security badge from the front desk and refer to the Ceph Day in the Forum. You will get your name tag and goodies during the next break.
rsvg-bench - a benchmark for librsvg
Librsvg 2.42.0 came out with a rather major performance regression
compared to 2.40.20: SVGs with many transform
attributes would slow it down. It was fixed in 2.42.1. We changed
from using a parser that would recompile regexes each time it was
called, to one that does simple string-based matching and
parsing.
When I rewrote librsvg's parser for the transform attribute from C
to Rust, I was just learning about writing parsers in Rust.
I chose lalrpop, an excellent, Yacc-like parser generator for Rust.
It generates big, fast parsers, like what you would need for a
compiler — but it compiles the tokenizer's regexes each time you call
the parser. This is not a problem for a compiler, where you basically
call the parser only once, but in librsvg, we may call it thousands of
times for an SVG file with thousands of objects with transform
attributes.
So, for 2.42.1 I rewrote that parser using rust-cssparser. This is what Servo uses to parse CSS data; it's a simple tokenizer with an API that knows about CSS's particular constructs. This is exactly the kind of data that librsvg cares about. Today all of librsvg's internal parsers work using rust-cssparser, or they are so simple that they can be done with Rust's normal functions to split strings and such.
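As a toy illustration of that last category (this is not librsvg's actual code), parsing something like "translate(10, 20)" with plain string functions can be as simple as:

// Parse "translate(x, y)" using only std string functions.
// Real SVG syntax also allows whitespace-separated arguments,
// which is where a tokenizer like rust-cssparser earns its keep.
fn parse_translate(s: &str) -> Option<(f64, f64)> {
    let args = s.trim().strip_prefix("translate(")?.strip_suffix(')')?;
    let mut nums = args.split(',').map(str::trim);

    let x = nums.next()?.parse().ok()?;
    let y = nums.next()?.parse().ok()?;

    Some((x, y))
}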
Getting good timings
Librsvg ships with rsvg-convert, a command-line utility that can
render an SVG file and write the output to a PNG. While it would be
possible to get timings for SVG rendering by timing how long
rsvg-convert takes to run, it's a bit clunky for that. The process
startup adds noise to the timings, and it only handles one file at a
time.
So, I've written rsvg-bench, a small utility to get timings out of librsvg. I wanted a tool that:
- Is able to process many SVG images with a single command. For example, this lets us answer a question like, "how long does version N of librsvg take to render a directory full of SVG icons?" — which is important for the performance of an application chooser.
- Is able to repeatedly process SVG files, for example, "render this SVG 1000 times in a row". This is useful to get accurate timings, as a single render may only take a few microseconds and may be hard to measure. It also helps with running profilers, as they will be able to get more useful samples if the SVG rendering process runs repeatedly for a long time.
- Exercises librsvg's major code paths for parsing and rendering separately. For example, librsvg uses different parts of the XML parser depending on whether it is being pushed data, vs. being asked to pull data from a stream. Also, we may only want to benchmark the parser but not the renderer; or we may want to parse SVGs only once but render them many times after that.
- Is aware of librsvg's peculiarities, such as the extra pass to convert a Cairo image surface to a GdkPixbuf when one uses the convenience function rsvg_handle_get_pixbuf().
Currently rsvg-bench supports all of that.
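For example, with the options shown earlier in this post, parsing a (hypothetical) foo.svg once and then rendering it 1000 times for profiling would look something like:

rsvg-bench -p 1 -r 1000 foo.svg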
An initial benchmark
I ran this
/usr/bin/time rsvg-bench -p 1 -r 1 /usr/share/icons
to cause every SVG icon in /usr/share/icons to be parsed once, and
rendered once (i.e. just render every file sequentially). I did this
for librsvg 2.40.20 (C only), and 2.42.{0, 1, 2} (C and Rust). There
are 5522 SVG files in there. The timings look like this:
| version | time (sec) |
|---|---|
| 2.40.20 | 95.54 |
| 2.42.0 | 209.50 |
| 2.42.1 | 97.18 |
| 2.42.2 | 95.89 |
So, 2.42.0 was over twice as slow as the C-only version, due to the parsing problems. But now, 2.42.2 is practically just as fast as the C only version. What made this possible?
- 2.40.20 - the old C-only version
- 2.42.0 - C + Rust, with a lalrpop parser for the transform attribute
- 2.42.1 - Servo's cssparser for the transform attribute
- 2.42.2 - removed most C-to-Rust string copies during parsing
I have started taking profiles of rsvg-bench runs with sysprof, and there are some improvements worth making. Expect news soon!
How To Fix MacBook Air Keyboard on openSUSE Leap (or Systemd Linux)
One of the problems when you install Linux on a MacBook Air is that the tilde/backtick key is not recognized correctly; it produces different symbols. As a Zimbra administrator, I need the backtick symbol for specifying the email server host (`zmhostname`).
I have openSUSE Leap 42.3 on my MacBook Air, and these days I use openSUSE rather than macOS Sierra. But this problem kept bothering me at work.
It turns out …
The problem is very simple; it is caused by this kernel bug: https://bugzilla.kernel.org/show_bug.cgi?id=60181#c43
To fix it, just run this command:
echo 0 > /sys/module/hid_apple/parameters/iso_layout
Very simple, right?
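One caveat: the redirection into /sys has to happen in a root shell; a plain sudo in front of echo is not enough, since the redirection is performed by your own (unprivileged) shell. If you are not already root, something like this should work:

sudo sh -c 'echo 0 > /sys/module/hid_apple/parameters/iso_layout'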
It’s Solved, but …
The problem will come back if you restart your MacBook Air. So if you use SysV init, you can place the command in rc.local or boot.after.
But what if your operating system uses systemd?
Simple: you just need to create a service that runs the command. For example, on openSUSE Leap I created the file /etc/systemd/system/mba-keyboard-fix.service with the following contents:
[Unit]
Description=Fix MacBook Air Keyboard

[Service]
Type=oneshot
ExecStart=/bin/bash -c '/usr/bin/echo 0 > /sys/module/hid_apple/parameters/iso_layout'

[Install]
WantedBy=multi-user.target
Or, to make it simpler, just download that file using wget:
cd /etc/systemd/system/
wget -c https://dhenandi.com/repo/mba-keyboard-fix.service
Then enable and start the service:
systemctl enable mba-keyboard-fix.service
systemctl start mba-keyboard-fix.service
Try rebooting your MacBook Air. If it doesn’t work for you, just sell your MacBook Air and buy an openSUSE Tuxedo Infinity instead :-P.
Gold Cards
How does your team prioritise work? Who gets to decide what is most important? What would happen if each team member just worked on what they felt like?
I’ve had the opportunity to observe an experiment: over the past 8 years at Unruly, developers have had 20% of their time to work on whatever they want.
This is not exactly like Google’s famed 20% time for what “will most benefit Google” or “120% time”.
Instead, developers genuinely have 20% of their time (typically a day a week) to work on whatever they choose—whatever they deem most important to themselves. There are no rules, other than the company retains ownership of anything produced (which does not preclude open sourcing).
We call 20% time “Gold Cards” after the Connextra practice it’s based upon. Initially we represented the time using yellow coloured cards on our team board.
It’s important to us—if the team fails to take close to 20% of their time on gold cards it will be raised in retrospectives and considered a problem to address.
While it may seem like an expensive practice, it’s an investment in individuals that I’ve seen really pay off, time after time.
Antidote to Prioritisation Systems
If you’re working in a team, you’ll probably have some mechanism for making prioritisation decisions about what is most important to work on next; whether that be a benevolent dictatorship, team consensus, voting, cost of delay, or something else.
However much you like and trust the decision making process in your team, does it always result in the best decisions? Are there times when you thought the team was making the wrong decision and you turned out to be right?
Gold cards allow each individual in the team time to work on things explicitly not prioritised by the team, guilt free.
This can go some way to mitigating flaws in the team’s prioritisation. If you feel strongly enough that a decision is wrong, then you can explore it further on your gold card time. You can build that feature that you think is more important, or you can create a proof-of-concept to demonstrate an approach is viable.
This can reduce the stakes in team prioritisation discussions, taking some of the stress away; you at least have your gold card time to allocate how you see fit.
Here are some of the ways it’s played out.
Saving Months of Work
I can recall multiple occasions when gold card activities have saved literally team-months of development work.
Avoiding Yak Shaving
One was a classic yak-shaving scenario. Our team discovered that a critical service could not be easily reprovisioned, and to make matters worse, was over capacity.
Fast forward a few weeks and we were no longer just reprovisioning a service, but creating a new base operating system image for all our infrastructure, a new build pipeline for creating it, and attempting to find/build alternatives for components that turned out to be incompatible with this new software stack.
We were a couple of months in, and estimated another couple of months work to complete the migration.
We’d retrospected a few times; we thought we’d fully considered our other options and were best off ploughing on through the long, but now well-understood, path to completion.
Someone disagreed, and decided to use their gold card to go back and re-visit one of the early options the team thought they’d ruled out.
Within a day they’d demonstrated a solution to the original problem using our original tech stack, without needing most of the yak shaving activities.
Innovative Solutions
I’ve also seen people spotting opportunities in their gold cards that the team had not considered, saving months of work.
We had a need to store a large amount of additional data. We’d estimated it would take the team some months to build out a new database cluster for the anticipated storage needs.
A gold card spent looking for a mechanism to compress the data ended up yielding a solution that enabled us to store the data indefinitely, using our existing infrastructure.
Spawning new Products
Gold cards give people space to innovate, time to try new things, wild ideas that might be too early.
Our first mobile-web compatible ad formats came out of a gold card. We had mobile compatible ads considerably before we had enough mobile traffic to make it seem worthwhile.
Someone wanted to spend time working on mobile on their gold card, which resulted in having a product ready to go when mobile use increased; we weren’t playing catch-up.
On another occasion a feature we were exploring had a prohibitively large download size for the bandwidth available at the time. A gold card yielded a far more bandwidth-efficient mechanism, contributing to the success of the product.
“How hard can it be?”
It’s easy to underestimate the complexity involved in building new features. “How hard can it be?” is often a dangerous phrase, uttered before discovering just how hard it really is, or embroiling oneself in large amounts of work.
Gold cards make this safe. If it’s hard enough that you can’t achieve it in your gold card, then you’ve only spent a small amount of time, and only your own discretionary time.
Gold cards also make it easy to experiment—you don’t need to convince anyone else that it will work. Sometimes, just sometimes, things actually turn out to be as easy, or even easier, than our hopes.
For a long time we had woeful reporting capabilities on our financial data. The team believed that importing this data to our data warehouse would be a large endeavour, involving redesigning our entire data pipeline.
A couple of developers disagreed, and decided to spend their gold card time working together, attempting to make this data reportable. They ended up with a simple solution that was compatible with the existing technology and has withstood the test of time. Huge value unlocked from just one day spent speculatively.
That thing that bothers you
Whether it’s a code smell you want to get rid of, some UX debt that irritates you every time you see it, or the lack of automation in a task you perform regularly; there are always things that irritate us.
We ought to be paying attention to these irritations and addressing them as we notice them, but sometimes the team has deemed something else is more important or urgent.
Gold cards give you an opportunity to fix the things that matter to you. Not only does this help avoid frustration, but sometimes individuals fixing things they find annoying actually produces better outcomes than the wisdom of the crowd.
On one occasion a couple of developers spent their gold card just deleting code. They ended up deleting thousands of unneeded lines of code. Did this cleanup pay off yet? I honestly don’t know, but it may well have done; we have less inventory cost as a result.
Exploring New Tech
When tasked with solving a problem, we have a bias towards tools & technology that we know and understand. This is generally a good thing: exploring every option is often costly, and if we pick something new, the team has to learn it before we become productive.
Sometimes this means we miss out on tech that makes our lives much easier.
People often spend their gold card time playing around with speculative new technologies that they are unfamiliar with.
Much of the tech our teams now rely upon was first investigated and evangelised by someone who tried it out in gold card time; from build systems to monitoring tools, from databases to test frameworks.
Learning
Tech changes fast; as developers we need to be constantly learning to stay competitive. Sometimes this can present a conflict of interest between the needs of the team to achieve a goal (safer to use known and reliable technology), and your desires to work with new cutting-edge tech.
Gold cards allow you to prioritise your own learning for at least a day a week. It’s great for you, and it’s great for the team too, as it brings in new ideas, techniques, and skills. It’s an investment that increases the skill level of the team over time.
Do you feel like you’d be able to be a better member of the team if you understood the business domain better? What if you knew the programming language you’re working in to a deeper level? If these feel important to you, then gold cards give you dedicated time that you can choose to spend in that way, without needing anyone else’s approval.
Sharing Knowledge
Some people use gold card time to prepare talks they want to give at conferences, or internally at our fortnightly tech-talks. Others write blog posts.
Sharing in this way not only helps others internally, but also gives back to the wider community. It raises people’s individual profiles as excellent developers, and raises the company’s profile as a potential employer.
Furthermore, many find that preparing content in this way improves their own understanding of a topic.
We’re so keen on this that we now give people extra days for writing blog posts.
Remote Working
Many of our XP practices work really well in co-located teams, but we’ve struggled to apply them to remote working. It’s definitely possible to do things like pair and mob-programming remotely, but it can be challenging for teams used to working together in the same space.
We’ve found that gold card time presented an easy opportunity to experiment with remote working—an opportunity to address some of the pain points as we look for ways to introduce more flexibility.
Remote working makes it easier to hire, and helps avoid excluding people who would be unable to join us without this flexibility.
Side Projects
Sometimes people choose to work on something completely not work related, like a side project, a game, or a new app. This might not seem immediately valuable to the team, but it’s an opportunity for people to learn in a different context—gaining experience in greenfield development, starting a project from scratch and choosing technologies.
The more diverse our team’s experience & knowledge, the more likely we are to make good decisions in the future. Change is a constant in the industry—we won’t be working with the tech we’re currently using indefinitely.
Side projects bring some of this learning forward and in-house; we get new perspectives without having to hire new people.
Gold cards allow people to grow without expecting them to spend all their evenings and weekends writing code, encouraging a healthy work/life balance.
Sometimes a change is just what one needs. We spend a lot of our time pair programming; pairing can be intense and tiring. Gold cards give us an opportunity to work on something completely different at least once a week.
Open Source
Most of what we’re working on day to day is not suitable for open sourcing, or would require considerable work to open up.
Gold cards mean we can choose to spend some of our time working on open source software—giving back to the community by working on existing open source code, or working on opening up internal tools.
Hiring & Retention
Having the freedom to spend a day a week working on whatever you want is a nice perk. Offering it helps us hire, and makes Unruly a hard place to leave. The flexibility introduced by gold cards to do the kinds of things outlined above also contribute towards happiness and retention.
Given the costs of recruitment, hiring, onboarding & training, gold cards are worth considering as a benefit even if you didn’t get any of the extra benefits from these anecdotes.
Pitfalls
One trap to avoid is only doing the activities outlined above on gold card days. Many of the activities above should be things the team is doing anyway.
I’ve seen teams start to rely on others—not cleaning up things as a matter of course during their day to day work, because they expect someone will want to do it on their gold card.
I’ve seen teams not set time aside for learning & exploring because they rely on people spending their gold cards on it.
I’ve seen teams ineffectually ploughing ahead with their planned work without stepping back to try to spike some alternative solutions.
These activities should not be restricted to gold cards. Gold cards just give each person the freedom to work on what is most important to them, rather than what’s most important to the team.
There’s also the opposite challenge: new team members may not realise the full range of possible uses for gold cards. Gold card use can drift over time to focus more and more on one particular activity, becoming seen as “Learning days” or “Spike days”.
Gold cards seem to be most beneficial when they are used for a wide variety of activities, helping the team notice the benefits of things they hadn’t seen as important.
Gold card time doesn’t always pay off, but it only has to pay off occasionally to be worthwhile.
Picard management tip: Even without game-changing results, experimentation is time well spent.
— Picard Tips (@PicardTips) October 23, 2017
Can we turn it up?
We learn from extreme programming to look for things that are good and turn them up to the max, to get the most value out of them.
If gold cards can bring all these benefits, what would happen if we made them more than 20% time?
Can we give individuals more autonomy without losing the benefits of other things we’ve seen to work well?
What’s the best balance between individual autonomy and the benefits of teams working collaboratively, pair programming, team goals, and stakeholder prioritisation?
We’ve turned things up a little: giving people extra days for conference speaking and blogging, carving out extra time for code dojos, talk preparation, and learning.
I’m sure there’s more we can do to balance the best of individuals working independently, with the benefits of teams.
What have you tried that works well?
2018w03-04: repo_checker devel package comments, announcer re-deployment, CI tweaks, and more
repo_checker devel package comments from multiple target projects
A while back the repo_checker gained the ability to post comments on packages within devel projects for which problems were detected in the target project. Since openSUSE:Factory has a fair number of existing problems, many of which are long-standing, it seemed the maintainers needed to be notified of problems rather than users reporting them in the opensuse-factory mailing list or similar. One limitation of the original implementation was that only one comment could exist per devel package. This workflow is standard among the ReviewBots which only post one comment per entity, but it would be preferred to also post reports for openSUSE:Leap:15.0 since the issues may be different.
A bot_name_suffix was introduced to allow for differentiation between comments based on the target project they represent. With this new ability, devel package comments were enabled for Leap 15.0 in addition to Factory. An example of two comments on the same package, science/avogadro, can be seen below.
The difference being that i586 is built in Factory, but not Leap.
Another example, from devel:tools:scm/git, demonstrates the importance of posting comments from both target projects since the issue only exists in Leap.
In addition to supporting multiple target projects, the pull request also introduced a configuration option to post comments directly on the packages within the target project. This feature is now used for SLE development, which also improves the Leap workflow, since many packages come from SLE and the additional check improves the quality of those submissions.
announcer packaging and deployment for Leap 15.0
Leap 42.3 used the announcer to send email notifications of new builds during its pre-release cycle, starting with the beta. In preparation for using the announcer for Leap 15.0, the various configuration and service bits that were not yet polished and placed in git were merged. After that, the package was simply updated and the announcer enabled for Leap 15.0. This is part of an ongoing effort to migrate all services to those provided by the package, in order to simplify and ease maintenance.
CI tweaks and osc crash caught
Since requests are now automatically submitted to Leap 15.0 from Factory, the openSUSE-release-tools package was submitted, which broke the automatic deployment. Before a request is created to submit the package to Factory, the tooling checks that there are no existing requests, to avoid spamming; but the osc request list command does not differentiate between requests targeting the package and requests sourced from it. As such, the Leap requests were detected as existing requests. The problem was resolved by replacing the osc command with a custom API call.
Since the CI is now ensured to run at least once every 24 hours, after recent tweaks, changes to upstream dependencies (like OBS and osc) that break the release tools are caught quickly. One such occurrence was caught that caused osc to crash on init. This confirmed that the setup works well and allows such issues to be caught before they are deployed in production.
devel project code standardized
Over the last year, the various implementations for retrieving the devel project, and falling back when one is not available, have been extracted and refactored into common ones. The final request, merging the remaining common implementations into one function (and fixing some bugs), was merged. With that refactoring completed, the check_source bot could use the fallback code to handle add_role requests to Leap properly. Nothing too exciting, but helpful towards maintaining the code going forward, as previously the same problem had to be fixed in each implementation.
post-accept staging todo list
One of the cumbersome bits of the workflow is communicating/remembering manual steps that need to be taken after a staging is accepted. In lieu of the tools handling everything automatically, a todo field was added to allow staging masters to record information to be displayed after a staging is accepted. This builds atop the staging-specific config feature added earlier and is thus rather simple to implement.
status check
In preparation for providing public status information a tool for determining if the various bots are running properly was created. The intent is to integrate it into a public status site and/or provide alerts to staging masters.
dashboard
The staging dashboard was improved by krauselukas to include ready requests (adi requests ready to be accepted) and provide links to see all the requests in the various states. This is not only helpful for staging masters to get a quick overview, but helpful to clarify the process to contributors.
last year
During this time last year, an issue reference diffing tool was created to aid in ensuring all fixes from SLE made their way into Factory by comparing the issues referenced in package change logs. The leaper bot was also extended to support SLE and further merged the workflows to allow for investment into the same toolset.
In an effort to improve the staging osc plugin's responsiveness, the underlying devel project query was reworked to avoid an expensive query that timed out after 5-10 minutes. The result was a significant speedup.