

FOSDEM 2K18

The Free and Open source Software Developers' European Meeting (FOSDEM) 2018 happened over the weekend. FOSDEM is a fantastic conference where Free Software enthusiasts from all over Europe can meet once a year (more than 17000 MAC addresses were registered this time). I really like its truly unique atmosphere. As usual, two amazing days in Brussels, Belgium…

That is the reason why I visit it again and again: it recharges you. When you see all these developers, attend the talks and presentations, have so many conversations with contributors, and pick up the innovative ideas shared there, it motivates you to grab a copy of The Linux Programming Interface and the TCP/IP Guide and hack away at your own amazing Linux software 🙂
Most of the talks were recorded. That's nice, but by watching them online you only get the information; by attending the talks in person and taking part in the discussions, you get much, much more.

This time I was there together with my LiMux colleagues from the Landeshauptstadt München. By the way, as you may know, we are going to share as much as we can until Munich migrates to Windows (unfortunately, for bureaucratic reasons, sharing software freely is sometimes not as easy as we would like).

Like last year, FOSDEM was organized into so-called "developer rooms" (devrooms). At first I planned to visit devrooms such as debugging tools, DNS/DNSSEC, and security + encryption; that was my target when I planned my schedule. But as I noticed later, I was not the only one planning to visit the same talks, hacking sessions, and open discussions 🙂 That led to a shortage of free seats in the devrooms and made my schedule a bit more dynamic than originally planned 🙂 But don't get me wrong – that was absolutely no problem for me. Outside the rooms we also had very interesting conversations. I met ex-colleagues, friends whom I had previously known only from mailing lists and IRC, and of course a lot of openSUSE contributors.

I would like to thank FOSDEM's staff and everyone who made it happen and helped to organize it (I'm definitely going to send you feedback, guys). Thanks to GitHub for the free coffee. Keep it going 🙂 I also have to say thanks to openSUSE's Travel Support Program, which supported my visit to this amazing event (and not for the first time!). I'm going to visit FOSDEM again next year. My photos can be found here. See you next time 😉

the avatar of Greg Kroah-Hartman

Linux Kernel Release Model

Note

This post is based on a whitepaper I wrote at the beginning of 2016 to help many different companies understand the Linux kernel release model and to encourage them to start taking the LTS stable updates more often. I then used it as the basis of a presentation I gave at the Kernel Recipes conference in September 2017, which can be seen here.

With the recent craziness of Meltdown and Spectre, I’ve seen lots of things written about how Linux is released and how we handle security patches that are totally incorrect, so I figured it is time to dust off the text, update it in a few places, and publish it here for everyone to benefit from.

the avatar of Federico Mena-Quintero

Writing a command-line program in Rust

As a library writer, it feels a bit strange, but refreshing, to write a program that actually has a main() function.

My experience with Rust so far has been threefold:

  • Porting chunks of C to Rust for librsvg - this is all work on librsvg's internals and no users are exposed to it directly.

  • Working on gnome-class, the procedural macro ("a little compiler") to generate GObject boilerplate from Rust. This feels like working on the edge of the exotic; it is something that runs inside the Rust compiler and spits out code on behalf of the programmer.

  • A few patches to the gtk-rs ecosystem. Again, work on the internals, or something that feels library-like.

But other than toy programs to test things, I hadn't written a stand-alone tool until rsvg-bench. It's quite a thrill to be able to just run the thing instead of waiting for other people to write code that uses it!

Parsing command-line arguments

There are quite a few Rust crates ("libraries") to parse command-line arguments. I read about structopt via Robert O'Callahan's blog; structopt lets you define a struct to hold the values of your command-line options, and then you annotate the fields in that struct to indicate how they should be parsed from the command line. It works via Rust's procedural macros. Internally it generates stuff for the clap crate, a well-established mechanism for dealing with command-line options.

And it is quite pleasant! This is basically all I needed to do:

#[derive(StructOpt, Debug)]
#[structopt(name = "rsvg-bench", about = "Benchmarking utility for librsvg.")]
struct Opt {
    #[structopt(short = "s",
                long  = "sleep",
                help  = "Number of seconds to sleep before starting to process SVGs",
                default_value = "0")]
    sleep_secs: usize,

    #[structopt(short = "p",
                long  = "num-parse",
                help  = "Number of times to parse each file",
                default_value = "100")]
    num_parse: usize,

    #[structopt(short = "r",
                long  = "num-render",
                help  = "Number of times to render each file",
                default_value = "100")]
    num_render: usize,

    #[structopt(long = "pixbuf",
                help = "Render to a GdkPixbuf instead of a Cairo image surface")]
    render_to_pixbuf: bool,

    #[structopt(help = "Input files or directories",
                parse(from_os_str))]
    inputs: Vec<PathBuf>
}

fn main() {
    let opt = Opt::from_args();

    if opt.inputs.len() == 0 {
        eprintln!("No input files or directories specified\n");
        process::exit(1);
    }

    ...
}

Each field in the Opt struct above corresponds to one command-line argument; each field has annotations for structopt to generate the appropriate code to parse each option. For example, the render_to_pixbuf field has a long option name called "pixbuf"; that field will be set to true if the --pixbuf option gets passed to rsvg-bench.
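
For example, with the struct above an invocation could look like this (the input path here is just for illustration):

rsvg-bench --sleep 2 --num-parse 10 --num-render 100 --pixbuf /usr/share/icons

This would sleep for 2 seconds, then parse each SVG found under /usr/share/icons 10 times and render it 100 times to a GdkPixbuf.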

Handling errors

Command-line programs generally have the luxury of being able to just exit as soon as they encounter an error.

In C this is a bit cumbersome since you need to deal with every place that may return an error, find out what to print, and call exit(1) by hand or something. If you miss a single place where an error is returned, your program will keep running with an inconsistent state.

In languages with exception handling, it's a bit easier - a small script can just let exceptions be thrown wherever, and if it catches them at the toplevel, it can just print the exception and abort gracefully. However, these nonlocal jumps make me uncomfortable; I think exceptions are hard to reason about.

Rust makes this easy: it forces you to handle every call that may return an error, but it lets you bubble errors up easily, or handle them in-place, or translate them to a higher-level error.

In the Rust world, the failure crate is getting a lot of traction as a convenient, modern way to handle errors.

In rsvg-bench, errors can come from several places:

  • I/O errors when reading files and directories.

  • Errors from librsvg's parsing stage; you get a GError.

  • Errors from the rendering stage. This can be a Cairo error (a cairo_status_t), or a simple "something bad happened; can't render" from librsvg's old convenience api in C. Don't you hate it when C code just gives up and returns NULL or a boolean false, without any further details on what went wrong?

For rsvg-bench, I just needed to be able to represent Cairo errors and generic rendering errors. Everything else, like an io::Error, is automatically wrapped by the failure crate's mechanism. I just needed to do this:

extern crate failure;
#[macro_use]
extern crate failure_derive;

#[derive(Debug, Fail)]
enum ProcessingError {
    #[fail(display = "Cairo error: {:?}", status)]
    CairoError {
        status: cairo::Status
    },

    #[fail(display = "Rendering error")]
    RenderingError
}

Whenever the code gets a Cairo error, I can translate it to a ProcessingError::CairoError and bubble it up:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    let dim = handle.get_dimensions();
    let surface = cairo::ImageSurface::create(cairo::Format::ARgb32,
                                              dim.width,
                                              dim.height)
        .map_err(|e| ProcessingError::CairoError { status: e })?;

    ...
}

And when librsvg returns a "couldn't render" error, I translate that to a ProcessingError::RenderingError:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    ...

    let cr = cairo::Context::new(&surface);

    if handle.render_cairo(&cr) {
        Ok(())
    } else {
        Err(Error::from(ProcessingError::RenderingError))
    }
}

Here, the Ok() case of the Result does not contain any value — it's just (), as the generated images are not stored anywhere: they are just rendered to get some timings, not to be saved or anything.

Up to where do errors bubble?

This is the "do everything" function:

fn run(opt: &Opt) -> Result<(), Error> {
    ...

    for path in &opt.inputs {
        process_path(opt, &path)?;
    }

    Ok(())
}

For each path passed on the command line, process it. The program checks whether the path is a directory, and if so scans it recursively; if the path is an SVG file, the program loads and renders it.
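
A minimal sketch of what such a process_path() function could look like follows. This is illustrative only, not the actual rsvg-bench code; it assumes use std::fs and std::path::Path are in scope, it leaves the repetition counts to process_file(), and process_file() is the function shown in the next section.

// Illustrative sketch, not the real rsvg-bench code: recurse into
// directories and hand .svg files over to process_file().
fn process_path<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let path = path.as_ref();

    if fs::metadata(path)?.is_dir() {
        for entry in fs::read_dir(path)? {
            process_path(opt, entry?.path())?;
        }
    } else if path.extension().and_then(|e| e.to_str()) == Some("svg") {
        process_file(opt, path)?;
    }

    Ok(())
}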

Finally, main() just has this:

fn main() {
    let opt = Opt::from_args();

    ...

    match run(&opt) {
        Ok(_) => (),
        Err(e) => {
            eprintln!("{}", e);
            process::exit(1);
        }
    }
}

I.e. process command line arguments, run the whole thing, and print an error if there was one.

I really appreciate that most places that can return an error can just put a ? for the error to bubble up. This is much more legible than in C, where every call must have an if (something_bad_happened) { deal_with_it; } after it... and Rust won't let me get away with ignoring an error, but it makes it easy to actually deal with it properly.

Reading an SVG file quickly

Why, just mmap() it and feed it to librsvg, to avoid buffer copies. This is easy in Rust:

fn process_file<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let file = File::open(path)?;
    let mmap = unsafe { MmapOptions::new().map(&file)? };

    let bytes = &mmap;

    let handle = rsvg::Handle::new_from_data(bytes)?;
    ...
}

Many things can go wrong here:

  • File::open() can return an io::Error.
  • MmapOptions::map() can return an io::Error from the mmap(2) system call, or from the fstat(2) to read the file's size to map it.
  • rsvg::Handle::new_from_data() can return a GError from parsing the file.

The little ? characters after each call that can return an error mean, just give me back the result, or convert the error to a failure::Error that can be examined later. This is beautifully legible to me.

Summary

Writing command-line programs in Rust is fun! It's nice to have neurotically-safe scripts that one can trust in the future.

Rsvg-bench is available here.

the avatar of Klaas Freitag

Kraft Moving to KDE Frameworks: Beta Release!

Kraft is KDE/Qt-based desktop software to manage documents like quotes and invoices in small businesses. It focuses on ease of use through an intuitive GUI and a well-chosen feature set, and it ensures privacy by keeping data local.

Kraft has been around for more than twelve years, but it has been a little quiet recently. However, Kraft is alive and kicking!

I am very happy to announce the first public beta version of Kraft V. 0.80, the first Kraft version that is based on KDE Frameworks 5 and Qt 5.x.

It not only went through the process of being ported to Qt5/KF5; I also took the opportunity to refactor and tackle a lot of issues that Kraft had been suffering from in the past.

Here are a few examples, a full changelog will follow:

  • Akonadi dependency: Earlier Kraft versions had a hard dependency on Akonadi, because Kraft uses the KDE address book to manage customer addresses. Without Akonadi up and running, Kraft was not functional, and people who tested Kraft without a working Akonadi walked away with a bad impression of it.

    Because Akonadi and the KDE contacts integration are perfect for this use case, they are still the way to go for Kraft, and I am delighted to build on such strong shoulders. But Kraft now also works without Akonadi: users get a warning that the full address book integration is not available, but they can enter addresses manually and continue to create documents with Kraft. It remains fully functional.

    Also, a better abstraction of the Akonadi-based functionality in Kraft eases porting to platforms where other system address books are available, such as MacOSX.

  • AppImage: The new Kraft is available as AppImage.

    There was a lot of feedback from people who could not test Kraft because it was hard to set up or compile and dependencies were missing. The major Linux distributions seem unable to keep up with current versions of leaf packages like Kraft, and I cannot do that packaging for the huge variety of distros myself. So AppImage as a container format for GUI applications seems to be a perfect fit here.

  • A lot more was done. Kraft got simplifications in both the code base and the functionality, careful GUI changes, and a reduced dependency stack. You should check it out!

Today (on FOSDEM weekend – what better date could there be?) the pre-release version 0.80 beta10 is announced to the public.

I would appreciate it if people test it and report bugs on GitHub: that is where the development is currently happening.


Ceph Day Germany 2018 - Update

The German Ceph Day 2018 in Darmstadt is finally only a few days away (7 February 2018).
The agenda is now complete: there are 13 talks and a short Q&A session planned during the day.
150 attendees have already signed up, and thanks to the support of our latest sponsor Intel we are now able to host up to 175 interested members of the big Ceph community. Only a limited number of tickets are left, so be quick and register for one while they are still available.

I would already like to give a big thanks to the Ceph community team, all sponsors (SUSE, Deutsche Telekom AG, Tallence AG, SAP, Intel, 42on.com, Red Hat, Starline), speakers and supporters who make this event possible.

Some information if you attend: 

  • The location is at T-Online-Allee 1, 64295 Darmstadt. The entrance to the Deutsche Telekom building is here. Please check this page for directions, traffic, parking and hotel information.
  • The registration desk will be open from 8:15 am. Please register at Eventbrite so that we can make sure you get a security badge to access the venue. In case the Ceph Day registration desk is closed, get your security badge from the front desk and say that you are attending the Ceph Day in the Forum. You will get your name tag and goodies during the next break.
See you in Darmstadt! Enjoy the Ceph Day!

the avatar of Federico Mena-Quintero

rsvg-bench - a benchmark for librsvg

Librsvg 2.42.0 came out with a rather major performance regression compared to 2.40.20: SVGs with many transform attributes would slow it down. It was fixed in 2.42.1. We changed from using a parser that would recompile regexes each time it was called, to one that does simple string-based matching and parsing.

When I rewrote librsvg's parser for the transform attribute from C to Rust, I was just learning about writing parsers in Rust. I chose lalrpop, an excellent, Yacc-like parser generator for Rust. It generates big, fast parsers, like what you would need for a compiler — but it compiles the tokenizer's regexes each time you call the parser. This is not a problem for a compiler, where you basically call the parser only once, but in librsvg, we may call it thousands of times for an SVG file with thousands of objects with transform attributes.

So, for 2.42.1 I rewrote that parser using rust-cssparser. This is what Servo uses to parse CSS data; it's a simple tokenizer with an API that knows about CSS's particular constructs. This is exactly the kind of data that librsvg cares about. Today all of librsvg's internal parsers work using rust-cssparser, or they are so simple that they can be done with Rust's normal functions to split strings and such.
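
To give a rough idea of what that kind of plain string-based parsing looks like, here is a toy example. It is not librsvg's actual code, just a sketch that uses only the standard library and parses a transform like "translate(10, 20)" without any regexes:

// Toy sketch, not librsvg's real parser: match the function name and
// split the arguments with plain string operations.
fn parse_translate(s: &str) -> Option<(f64, f64)> {
    let s = s.trim();
    if !s.starts_with("translate") {
        return None;
    }

    let rest = s["translate".len()..].trim();
    if !rest.starts_with('(') || !rest.ends_with(')') {
        return None;
    }

    let args = &rest[1..rest.len() - 1];
    let mut parts = args.split(',').map(str::trim);

    let x: f64 = parts.next()?.parse().ok()?;
    let y: f64 = parts.next()?.parse().ok()?;

    match parts.next() {
        None => Some((x, y)),
        Some(_) => None, // unexpected extra argument
    }
}

The real SVG transform syntax is more permissive than this (whitespace separators, single-argument forms, lists of several operations), which is exactly where a proper tokenizer like rust-cssparser helps.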

Getting good timings

Librsvg ships with rsvg-convert, a command-line utility that can render an SVG file and write the output to a PNG. While it would be possible to get timings for SVG rendering by timing how long rsvg-convert takes to run, it's a bit clunky for that. The process startup adds noise to the timings, and it only handles one file at a time.

So, I've written rsvg-bench, a small utility to get timings out of librsvg. I wanted a tool that:

  • Is able to process many SVG images with a single command. For example, this lets us answer a question like, "how long does version N of librsvg take to render a directory full of SVG icons?" — which is important for the performance of an application chooser.

  • Is able to repeatedly process SVG files, for example, "render this SVG 1000 times in a row". This is useful to get accurate timings, as a single render may only take a few microseconds and may be hard to measure. It also helps with running profilers, as they will be able to get more useful samples if the SVG rendering process runs repeatedly for a long time.

  • Exercises librsvg's major code paths for parsing and rendering separately. For example, librsvg uses different parts of the XML parser depending on whether it is being pushed data, vs. being asked to pull data from a stream. Also, we may only want to benchmark the parser but not the renderer; or we may want to parse SVGs only once but render them many times after that.

  • Is aware of librsvg's peculiarities, such as the extra pass to convert a Cairo image surface to a GdkPixbuf when one uses the convenience function rsvg_handle_get_pixbuf().

Currently rsvg-bench supports all of that.

An initial benchmark

I ran this

/usr/bin/time rsvg-bench -p 1 -r 1 /usr/share/icons

to cause every SVG icon in /usr/share/icons to be parsed once, and rendered once (i.e. just render every file sequentially). I did this for librsvg 2.40.20 (C only), and 2.42.{0, 1, 2} (C and Rust). There are 5522 SVG files in there. The timings look like this:

version    time (sec)
2.40.20         95.54
2.42.0         209.50
2.42.1          97.18
2.42.2          95.89

Bar chart of timings

So, 2.42.0 was over twice as slow as the C-only version, due to the parsing problems. But now, 2.42.2 is practically just as fast as the C only version. What made this possible?

  • 2.40.20 - the old C-only version
  • 2.42.0 - C + Rust, with a lalrpop parser for the transform attribute
  • 2.42.1 - Servo's cssparser for the transform attribute
  • 2.42.2 - removed most C-to-Rust string copies during parsing

I have started taking profiles of rsvg-bench runs with sysprof, and there are some improvements worth making. Expect news soon!

Rsvg-bench is available in Gnome's gitlab instance.


Pair programming with git

Git is great. It took the crown of version control systems in just a few years. Baked into the git model is that each commit has one committer and one author. Often this is the same person. What if there is more than one author for a commit? This is the case with pair programming, mob programming, or any other way of collaborating where code is produced by more than one person. I talked about this at the git-merge conference last year. There are some workarounds, but there is no native support in git yet.

It seems that the predominant convention to express multi-authorship in git commits is to add a Co-authored-by entry in the commit message as a so-called trailer. This adds more flexibility than trying to tweak the author and committer fields and is quite widely accepted, especially by the git community.
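
For example, a commit message using that convention could look like this (the names and addresses below are just placeholders):

Fix crash when parsing empty input

Co-authored-by: Jane Doe <jane.doe@example.com>
Co-authored-by: John Smith <john.smith@example.com>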

I'm happy that GitHub added support for the Co-authored-by convention now. It makes multi-authorship more visible. That's a good thing.

I did some work on adding native support for multiple authors in git. The direct approach of allowing more than one author field might be too intrusive due to the many possible side effects. But the Co-authored-by trailer is still a good solution. I have an unfinished patch to add some native support for that in git. It does need some more work, though.

Contributing to git is an interesting experience. The git mailing list is the central place. The contribution workflow is well-documented. It's good that Junio as maintainer has spelled out how he reviews patches and what that means for contributors. And it's definitely fun to work on a self-contained C project.

I'm looking forward to more multi-author support in git and GitHub. Pair-programming is a great model, and properly reflecting in the commit logs what happened when the code was written is the right thing to do.


Ruby on openSUSE

Ruby is a wonderful programming language. Every year in December, as a kind of Christmas present, there is a new release. It's great to be on a language which is kept up to date, but it comes with the challenge of managing multiple Ruby versions. There are a couple of solutions around, such as RVM, rbenv, or chruby, but they all have their drawbacks.

What would a Linux distribution do? At openSUSE, we package all the Ruby versions in the Build Service. We also package many gems, but that is a somewhat futile effort given the huge and growing number of gems and their versions. You do, however, reliably get the Ruby interpreter and the gem tool as openSUSE packages. To avoid conflicts, all the executables are suffixed with the Ruby version, which allows parallel installation of multiple Ruby versions. The same applies to all the executables installed through gems. The drawback is that, because of the additional suffix, you don't get the executable names you would expect.

So what does the ideal solution look like? We want the distribution-maintained interpreter packages, co-installability of multiple Ruby versions, the native executable names, and the full power of gem.

There is a nice solution to that. It looks like this:

Native executable names can be achieved by creating symbolic links in the user's bin directory for the tools which come with the Ruby package: ruby, gem, erb, and rake.

Setting the GEM_HOME environment variable to a directory in the user's home nicely isolates the installation of gems. Parallel versions are handled by gem anyway. You don't need root to install gems, and you can easily get rid of an installation if needed.

By setting `install: --no-format-executable` in your .gemrc you get executables with their native names in the GEM_HOME directory. So you only have to set the PATH environment variable to let your shell pick them up, and you can call bundle, rspec, etc. as intended.
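
Spelled out, the manual setup could look roughly like this (a sketch only; the suffixed file names and the Ruby version 2.5 are just examples and depend on what is actually installed):

# symlink the version-suffixed tools to their native names (example names)
mkdir -p ~/bin
ln -s /usr/bin/ruby.ruby2.5 ~/bin/ruby
ln -s /usr/bin/gem.ruby2.5 ~/bin/gem

# keep gem installations in the user's home directory
export GEM_HOME=$HOME/.gem/ruby/2.5.0

# install gem executables without the version suffix
echo 'install: --no-format-executable' >> ~/.gemrc

# let the shell pick up the native-named executables
export PATH=$HOME/bin:$GEM_HOME/bin:$PATH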

All this needs a little bit of setup work. To make this easier we have created a tool for openSUSE to manage this for you. It's called orr.

With orr you can just do `orr install 2.4` and it installs the required packages, creates the links, and configures the environment variables so that you can use Ruby 2.4 right away. That works for any other version as well of course.

This setup brings you the best of both worlds, Ruby and openSUSE. Enjoy. And provide feedback.


How To Fix MacBook Air Keyboard on openSUSE Leap (or Systemd Linux)

One of the problems when you install Linux on your MacBook Air is that the tilde/backtick key is not recognized correctly; it produces different symbols instead. As a Zimbra administrator, I need the backtick symbol to specify the email server host (`zmhostname`).

I have openSUSE Leap 42.3 on my MacBook Air, and nowadays I usually use openSUSE rather than my macOS Sierra. But this problem came up and kept bothering me at work.

It turns out …

The problem is actually very simple: it is a known issue with the hid_apple driver, see https://bugzilla.kernel.org/show_bug.cgi?id=60181#c43

To fix it, just run this command as root:

echo 0 > /sys/module/hid_apple/parameters/iso_layout
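
Note that the redirection has to happen in a root shell; running the command through plain sudo would perform the redirection as your normal user. A common idiom for that (standard shell usage, not from the original post) is:

sudo sh -c 'echo 0 > /sys/module/hid_apple/parameters/iso_layout'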

Very simple right?

It’s Solved, but …

The problem will come back after you restart your MacBook Air. So if you use SysV init, you can place that command in rc.local or boot.after.

But what if your operating system uses systemd?


Simple: you just need to create a service that runs that command. Create a .service file; for example, on openSUSE Leap I created it at /etc/systemd/system/mba-keyboard-fix.service with the following content:

[Unit]
Description=Fix MacBook Air Keyboard

[Service]
Type=oneshot
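# write 0 to hid_apple's iso_layout parameter at boot so the tilde/backtick key is mapped correctly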
ExecStart=/bin/bash -c '/usr/bin/echo 0 > /sys/module/hid_apple/parameters/iso_layout'

[Install]
WantedBy=multi-user.target

Or, to simplify things, just download that file using wget:

cd /etc/systemd/system/
wget -c https://dhenandi.com/repo/mba-keyboard-fix.service

Then enable the service and start it:

systemctl enable mba-keyboard-fix.service
systemctl start mba-keyboard-fix.service

Try rebooting your MacBook Air. If it still doesn’t work for you, just sell your MacBook Air and buy an openSUSE Tuxedo Infinity instead :-P.

 
