
Federico Mena-Quintero

Librsvg's continuous integration pipeline

Jordan Petridis has been kicking ass by overhauling librsvg's continuous integration (CI) pipeline. Take a look at this beauty:

Continuous integration pipeline

On every push, we run the Test stage. This is a quick compilation on a Fedora container that runs "make check" and ensures that the test suite passes.

We have a Lint stage which can be run manually. This runs cargo clippy to get Rust lints (check the style of Rust idioms), and cargo fmt to check indentation and code style and such.

We have a Distro_test stage which I think will be scheduled weekly, using Gitlab's Schedules feature, to check that the tests pass on three major Linux distros. Recently we had trouble with different rendering due to differences in Freetype versions, which broke the tests (ahem, likely because I hadn't updated my Freetype in a while and distros were already using a newer one); these distro tests are intended to catch that.

Finally, we have a Rustc_test stage. The various crates that librsvg depends on have different minimum versions for the Rust compiler. These tests are intended to show when updating a dependency changes the minimum Rust version on which librsvg would compile. We don't have a policy yet for "how far from $newest" we should always work on, and it would be good to get input from distros on this. I think these Rust tests will be scheduled weekly as well.

Jordan has been experimenting with the pipeline's stages and the distro-specific idiosyncrasies for each build. This pipeline depends on some custom-built container images that already have librsvg's dependencies installed. These images are built weekly in gitlab.com, so every week gitlab.gnome.org gets fresh images for librsvg's CI pipelines. Once image registries are enabled in gitlab.gnome.org, we should be able to regenerate the container images locally without depending on an external service.

With the pre-built images, and caching of Rust artifacts, Jordan was able to reduce the time for the "test on every commit" builds from around 20 minutes to a little under 4 minutes in the current iteration. This will get even faster if the builds start using ccache and parallel builds from GNU make.

Currently we have a problem in that tests are failing on 32-bit builds, and we haven't had a chance to investigate the root cause. Hopefully we can add 32-bit jobs to the CI pipeline to catch this breakage as soon as possible.

Having all these container images built for the CI infrastructure also means that it will be easy for people to set up a development environment for librsvg, even though we have better instructions now thanks to Jordan. I haven't investigated setting up a Flatpak-based environment; this would be nice to have as well.


2018w07-08: drop list considers update repos, Leap repo-checker ignores i586, metrics.o.o weekly ingest, and more

package lists generator drop list considers update repos

Following up on the addition of a drop list generator, the code now considers update repositories. The released repositories, oss and non-oss, are merged with their update counterparts. This yields a more accurate drop list that should make for a cleaner post-upgrade system. For an idea of the impact, take a look at the diff on OBS after the change was deployed.

There is an ongoing discussion regarding where to store the solv files since they cannot be re-created for releases that are out of support and removed from download.opensuse.org.

Leap 15.0 repo-checker no longer reviews i586

The repo-checker was changed to only review the x86_64 repo, which includes the imported i586 packages, rather than reviewing both repos in their entirety. Unlike Tumbleweed, Leap does not target i586 as an installable arch; it is only used to provide -32bit packages. Repo-checker comments on devel packages regarding i586 dependency chains should therefore cease.

metrics.o.o weekly data ingest enabled

Since OBS does not provide control over request ordering via its API, the data ingest process has been configured to run weekly instead of daily. The service timer has been enabled and should ingest data regularly.

Tumbleweed snapshot review site

Extensive work was completed towards providing a site for reviewing Tumbleweed snapshot stability by aggregating data from a variety of sources. The goal is to provide insight into the stability trends of Tumbleweed and aid in avoiding troublesome snapshots when desired. More details to be forthcoming.

last year

The factory-auto bot was corrected to warn properly when issue references are removed (the diff was backwards) and to allow self-submission, for the purpose of reverting a package to one of its own prior versions.

The osc-staging plugin was enhanced to include a link to the project dashboard in staging comments, to aid discovery, along with some ever-important documentation corrections.

The ReviewBot base was significantly refactored to avoid duplication and over-complexity in subsequent bots.

  • #684: ReviewBot: refactor leaper comment from log functionality.
  • #685: ReviewBot: use super().check_source_submission() in subclasses.
  • #692: ReviewBot: extract __class__.__name__ as default for self.bot_name.
  • #693: ReviewBot & leaper: provide deduplicate method for leaper comment (and other fixes)

In another attempt to aid feature discovery, the leaper bot was changed to let request submitters know when a request would have been automatically generated. On a similar note, the staging unselect command was tweaked to print a message suggesting use of the new ignore pseudo-state.

Significant work was done towards providing completely automated staging strategies via the addition of --non-interactive, --merge, --try-strategies, and --strategies options. The prototype was run against Leap 42.3 and scrutinized to hone the strategies. A related tool was prototyped for triggering flaky package rebuilds in staging projects.

For SUSE Hackweek 15 I worked on what eventually turned into metrics.opensuse.org. The focus was on selecting the appropriate tools and determining what to use as the data source. After the OBS team denied a request for read access to dumps of the request-related tables, I settled on ingesting the data (over 800 MB of XML) via the API.

Federico Mena-Quintero

RFC: Integrating rsvg-rs into librsvg

I have started an RFC to integrate rsvg-rs into librsvg. rsvg-rs is the Rust binding to librsvg. Like the gtk-rs bindings, it gets generated from a pre-built GIR file.

It would be nice for librsvg to provide the Rust binding by itself, so that librsvg's own internal tools can be implemented in Rust — currently all the tests are done in C, as are the rsvg-convert(1) and rsvg-view-3(1) programs.

There are some implications for how rsvg-rs would get built then. For librsvg's internal consumption, the binding can be built from the Rsvg-2.0.gir file that gets built out of the main librsvg.so. But for public consumption of rsvg-rs, when it is being used as a normal crate and built by Cargo, that Rsvg-2.0.gir needs to be already built and available: it wouldn't be appropriate for Cargo to build librsvg and the .gir file itself.

If this sort of thing interests you, take a look at the RFC!


OpenStack Summit Vancouver '18: Vote for Speakers

The next OpenStack Summit takes place again in Vancouver (BC, Canada), May 21-25, 2018. The "Vote for Presentations" period has started, and all proposals are up for community votes. Voting ends February 25th at 11:59pm PST (February 26th at 8:59am CET).

I've submitted two talks this time:
As always: every vote is highly welcome! And don't forget to search for other interesting proposals around Ceph, NFV and OpenStack, such as the presentations from Florian Haas [1][2][3], which are, as always, expected to be excellent.

Federico Mena-Quintero

Rust things I miss in C

Librsvg feels like it is reaching a tipping point, where suddenly it seems like it would be easier to just port some major parts from C to Rust than to just add accessors for them. Also, more and more of the meat of the library is in Rust now.

I'm switching back and forth a lot between C and Rust these days, and C is feeling very, very primitive in comparison.

A sort of elegy to C

I fell in love with the C language about 24 years ago. I learned the basics of it by reading a Spanish translation of The C Programming Language by K&R second edition. I had been using Turbo Pascal before in a reasonably low-level fashion, with pointers and manual memory allocation, and C felt refreshing and empowering.

K&R is a great book for its style of writing and its conciseness of programming. This little book even taught you how to implement a simple malloc()/free(), which was completely enlightening. Even low-level constructs that seemed part of the language could be implemented in the language itself!

I got good at C over the following years. It is a small language, with a small standard library. It was probably the perfect language to implement Unix kernels in 20,000 lines of code or so.

The GIMP and GTK+ taught me how to do fancy object orientation in C. GNOME taught me how to maintain large-scale software in C. 20,000 lines of C code started to seem like a project one could more or less fully understand in a few weeks.

But our code bases are not that small anymore. Our software now has huge expectations on the features that are available in the language's standard library.

Some good experiences with C

Reading the POV-Ray code source code for the first time and learning how to do object orientation and inheritance in C.

Reading the GTK+ source code and learning a C style that was legible, maintainable, and clean.

Reading SIOD's source code, then the early Guile sources, and seeing how a Scheme interpreter can be written in C.

Writing the initial versions of Eye of Gnome and fine-tuning the microtile rendering.

Some bad experiences with C

In the Evolution team, when everything was crashing. We had to buy a Solaris machine just to be able to buy Purify; there was no Valgrind back then.

Debugging gnome-vfs threading deadlocks.

Debugging Mesa and getting nowhere.

Taking over the initial versions of Nautilus-share and seeing that it never free()d anything.

Trying to refactor code where I had no idea about the memory management strategy.

Trying to turn code into a library when it is full of global variables and no functions are static.

But anyway — let's get on with things in Rust I miss in C.

Automatic resource management

One of the first blog posts I read about Rust was "Rust means never having to close a socket". Rust borrows C++'s ideas about Resource Acquisition Is Initialization (RAII) and smart pointers, adds the single-ownership principle for values, and gives you automatic, deterministic resource management in a very neat package.

  • Automatic: you don't free() by hand. Memory gets deallocated, files get closed, mutexes get unlocked when they go out of scope. If you are wrapping an external resource, you just implement the Drop trait and that's basically it. The wrapped resource feels like part of the language since you don't have to babysit its lifetime by hand.

  • Deterministic: resources get created (memory allocated, initialized, files opened, etc.), and they get destroyed when they go out of scope. There is no garbage collection: things really get terminated when you close a brace. You start to see your program's data lifetimes as a tree of function calls.

After forgetting to free/close/destroy C objects all the time, or worse, figuring out where code that I didn't write forgot to do those things (or did them twice, incorrectly)... I don't want to do it again.
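To make this concrete, here is a minimal sketch of deterministic cleanup via the Drop trait. The TempHandle type and the counter are illustrative, not from librsvg; the counter only exists so the moment of destruction is observable:

```rust
use std::cell::Cell;
use std::rc::Rc;

// A toy wrapper around an "external resource"; dropping it records
// that the resource was released.
struct TempHandle {
    released: Rc<Cell<u32>>,
}

impl Drop for TempHandle {
    fn drop(&mut self) {
        // Runs automatically, exactly once, when the value goes out of scope.
        self.released.set(self.released.get() + 1);
    }
}

fn drops_in_inner_scope() -> u32 {
    let counter = Rc::new(Cell::new(0));
    {
        let _h = TempHandle { released: Rc::clone(&counter) };
        // _h is dropped right here, deterministically, at the closing brace.
    }
    counter.get() // already 1: no garbage collector involved
}

fn main() {
    assert_eq!(drops_in_inner_scope(), 1);
}
```

No matter how the scope is exited, early return or panic included, drop() still runs.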

Generics

Vec<T> really is a vector whose elements are the size of T. It's not an array of pointers to individually allocated objects. It gets compiled specifically to code that can only handle objects of type T.

After writing many janky macros in C to do similar things... I don't want to do it again.
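As a small sketch of what generics buy you, here is a function that is monomorphized per concrete type, with no boxing and no void-pointer casts (the function itself is illustrative):

```rust
// The compiler emits specialized code for each T this is used with,
// e.g. one version for i32 and one for f64.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut it = items.iter().copied();
    let first = it.next()?; // None for an empty slice
    Some(it.fold(first, |max, x| if x > max { x } else { max }))
}

fn main() {
    assert_eq!(largest(&[3, 7, 2]), Some(7));
    assert_eq!(largest(&[2.5f64, 0.1]), Some(2.5));
    assert_eq!(largest::<i32>(&[]), None);
}
```

The trait bounds (PartialOrd + Copy) are checked at compile time, so misuse is a type error, not a runtime crash.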

Traits are not just interfaces

Rust is not a Java-like object-oriented language. Instead it has traits, which at first seem like Java interfaces — an easy way to do dynamic dispatch, so that if an object implements Drawable then you can assume it has a draw() method.

However, traits are more powerful than that.

Associated types

Traits can have associated types. As an example, Rust provides the Iterator trait, which you can implement:

pub trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
}

This means that whenever you implement Iterator for some iterable object, you also have to specify an Item type for the things that will be produced. If you call next() and there are more elements, you'll get back a Some(YourElementType). When your iterator runs out of items, it will return None.
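For example, here is a minimal, illustrative implementation, a countdown whose associated Item type is u32:

```rust
struct Countdown {
    n: u32,
}

impl Iterator for Countdown {
    type Item = u32; // the associated type: what next() yields

    fn next(&mut self) -> Option<u32> {
        if self.n == 0 {
            None // iterator exhausted
        } else {
            let current = self.n;
            self.n -= 1;
            Some(current)
        }
    }
}

fn main() {
    let collected: Vec<u32> = Countdown { n: 3 }.collect();
    assert_eq!(collected, vec![3, 2, 1]);
}
```

Implementing just next() gives you the whole Iterator toolbox (collect, map, filter, ...) for free.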

Associated types can refer to other traits.

For example, in Rust, you can use for loops on anything that implements the IntoIterator trait:

pub trait IntoIterator {
    /// The type of the elements being iterated over.
    type Item;

    /// Which kind of iterator are we turning this into?
    type IntoIter: Iterator<Item=Self::Item>;

    fn into_iter(self) -> Self::IntoIter;
}

When implementing this trait, you must provide both the type of the Item which your iterator will produce, and IntoIter, the actual type that implements Iterator and that holds your iterator's state.

This way you can build webs of types that refer to each other. You can have a trait that says, "I can do foo and bar, but only if you give me a type that can do this and that".
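As an illustrative sketch, a toy collection can opt into for loops by delegating to Vec's own iterator; the Palette type is made up for the example:

```rust
struct Palette {
    colors: Vec<String>,
}

impl IntoIterator for Palette {
    type Item = String;
    // The state-holding iterator type; here we just reuse Vec's.
    type IntoIter = std::vec::IntoIter<String>;

    fn into_iter(self) -> Self::IntoIter {
        self.colors.into_iter()
    }
}

fn main() {
    let p = Palette {
        colors: vec!["red".to_string(), "green".to_string()],
    };

    let mut seen = Vec::new();
    for c in p {
        // the for loop desugars to p.into_iter() behind the scenes
        seen.push(c);
    }
    assert_eq!(seen, vec!["red", "green"]);
}
```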

Slices

I already posted about the lack of string slices in C and how this is a pain in the ass once you get used to having them.
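A quick sketch of why slices are so pleasant: a &str borrows a view into existing data, with no copying and no manual pointer-plus-length bookkeeping (the function is illustrative):

```rust
// Returns the scheme of a URL-ish string, as a borrowed slice of the input.
fn scheme(url: &str) -> Option<&str> {
    let colon = url.find(':')?;   // None if there is no ':'
    Some(&url[..colon])           // zero-copy view into `url`
}

fn main() {
    assert_eq!(scheme("https://gnome.org"), Some("https"));
    assert_eq!(scheme("no scheme here"), None);
}
```

The borrow checker guarantees the slice can never outlive the string it points into, which is exactly the bug class that dangling char pointers give you in C.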

Modern tooling for dependency management

Instead of

  • Having to invoke pkg-config by hand or with Autotools macros
  • Wrangling include paths for header files...
  • ... and library files.
  • And basically depending on the user to ensure that the correct versions of libraries are installed,

You write a Cargo.toml file which lists the names and versions of your dependencies. These get downloaded from a well-known location, or from elsewhere if you specify.

You don't have to fight dependencies. It just works when you cargo build.
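For illustration, a Cargo.toml for a small binary might look like this; the crate names and version numbers here are only examples, not librsvg's actual dependencies:

```toml
[package]
name = "my-svg-tool"   # illustrative name
version = "0.1.0"

[dependencies]
libc = "0.2"           # version requirements are semver ranges
cairo-rs = "0.4"       # fetched from crates.io by default
```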

Tests

C makes it very hard to have unit tests for several reasons:

  • Internal functions are often static. This means they can't be called outside of the source file that defined them. A test program either has to #include the source file where the static functions live, or use #ifdefs to remove the statics only during testing.

  • You have to write Makefile-related hackery to link the test program to only part of your code's dependencies, or to only part of the rest of your code.

  • You have to pick a testing framework. You have to register tests against the testing framework. You have to learn the testing framework.

In Rust you write

#[test]
fn test_that_foo_works() {
    assert!(foo() == expected_result);
}

anywhere in your program or library, and when you type cargo test, IT JUST FUCKING WORKS. That code only gets linked into the test binary. You don't have to compile anything twice by hand, or write Makefile hackery, or figure out how to extract internal functions for testing.

This is a killer feature for me.

Documentation, with tests

Rust generates documentation from comments in Markdown syntax. Code in the docs gets run as tests. You can illustrate how a function is used and test it at the same time:

/// Multiplies the specified number by two
///
/// ```
/// assert_eq!(multiply_by_two(5), 10);
/// ```
fn multiply_by_two(x: i32) -> i32 {
    x * 2
}

Your example code gets run as tests to ensure that your documentation stays up to date with the actual code.

Update 2018/Feb/23: QuietMisdreavus has posted how rustdoc turns doctests into runnable code internally. This is high-grade magic and thoroughly interesting.

Hygienic macros

Rust has hygienic macros that avoid all of C's problems with things in macros that inadvertently shadow identifiers in the code. You don't need to write macros where every symbol has to be in parentheses for max(5 + 3, 4) to work correctly.
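For instance, here is a sketch of a hygienic max macro. Its internal variable names cannot collide with identifiers at the call site, and the arguments need no defensive parentheses and are evaluated exactly once:

```rust
macro_rules! max {
    ($a:expr, $b:expr) => {{
        // These bindings are hygienic: they do not capture or shadow
        // any `a` or `b` visible at the call site.
        let a = $a; // each argument is evaluated exactly once
        let b = $b;
        if a > b { a } else { b }
    }};
}

fn main() {
    let a = 100; // does not collide with the macro's internal `a`
    assert_eq!(max!(5 + 3, 4), 8);
    assert_eq!(max!(a, 42), 100);
}
```

Compare with C's #define MAX(a, b) ((a) > (b) ? (a) : (b)), which still evaluates one argument twice.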

No automatic coercions

All the bugs in C that result from inadvertently converting an int to a short or char or whatever — Rust doesn't do them. You have to explicitly convert.
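A small sketch of what "explicit" means in practice; the narrowing conversion is a compile error unless you ask for it, and the fallible form tells you when the value doesn't fit:

```rust
use std::convert::TryFrom;

fn main() {
    let big: i32 = 1000;
    // let small: u8 = big;        // compile error: no implicit narrowing

    // Explicit, fallible conversion: 1000 does not fit in a u8.
    assert!(u8::try_from(big).is_err());
    assert_eq!(u8::try_from(200i32).unwrap(), 200u8);
}
```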

No integer overflow

Enough said.
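Well, almost enough said. In debug builds an overflowing addition panics instead of silently wrapping, and when overflow is a real possibility you handle it explicitly:

```rust
fn main() {
    // Overflow is a detectable condition, not silent wraparound:
    assert_eq!(i32::MAX.checked_add(1), None); // overflow detected
    assert_eq!(200u8.checked_add(100), None);  // 300 doesn't fit in a u8

    // Wraparound is available, but only when you ask for it by name:
    assert_eq!(255u8.wrapping_add(1), 0);
}
```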

Generally, no undefined behavior in safe Rust

In Rust, it is considered a bug in the language if something written in "safe Rust" (what you would be allowed to write outside unsafe {} blocks) results in undefined behavior. You can shift-right a negative integer and it will do exactly what you expect.

Pattern matching

You know how gcc warns you if you switch() on an enum but don't handle all values? That's like a little baby.

Rust has pattern matching in various places. It can do that trick for enums inside a match() expression. It can do destructuring so you can return multiple values from a function:

impl f64 {
    pub fn sin_cos(self) -> (f64, f64);
}

let angle: f64 = 42.0;
let (sin_angle, cos_angle) = angle.sin_cos();

You can match() on strings. YOU CAN MATCH ON FUCKING STRINGS.

let color = "green";

match color {
    "red"   => println!("it's red"),
    "green" => println!("it's green"),
    _       => println!("it's something else"),
}

You know how this is illegible?

my_func(true, false, false)

How about this instead, with pattern matching on function arguments:

pub struct Fubarize(pub bool);
pub struct Frobnify(pub bool);
pub struct Bazificate(pub bool);

fn my_func(Fubarize(fub): Fubarize, 
           Frobnify(frob): Frobnify, 
           Bazificate(baz): Bazificate) {
    if fub {
        ...;
    }

    if frob && baz {
        ...;
    }
}

...

my_func(Fubarize(true), Frobnify(false), Bazificate(true));

Standard, useful error handling

I've talked at length about this. No more returning a boolean with no extra explanation for an error, no ignoring errors inadvertently, no exception handling with nonlocal jumps.
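As a sketch of the pattern, here is Result with a custom error type; the function and error variants are invented for the example:

```rust
#[derive(Debug, PartialEq)]
enum ParseError {
    Empty,
    NotANumber,
}

// The signature says up front that this can fail, and how.
fn parse_positive(s: &str) -> Result<u32, ParseError> {
    if s.is_empty() {
        return Err(ParseError::Empty);
    }
    s.parse::<u32>().map_err(|_| ParseError::NotANumber)
}

fn main() {
    assert_eq!(parse_positive("42"), Ok(42));
    assert_eq!(parse_positive(""), Err(ParseError::Empty));
    assert_eq!(parse_positive("abc"), Err(ParseError::NotANumber));
    // Results are #[must_use]: the compiler warns if you ignore one,
    // so errors can't be silently dropped the way a C return code can.
}
```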

#[derive(Debug)]

If you write a new type (say, a struct with a ton of fields), you can #[derive(Debug)] and Rust will know how to automatically print that type's contents for debug output. You no longer have to write a special function that you must call in gdb by hand just to examine a custom type.
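For example (the struct is illustrative), one attribute gives you a readable dump of any value:

```rust
// Deriving Debug generates the {:?} formatting code for you.
#[derive(Debug)]
struct Viewport {
    width: f64,
    height: f64,
}

fn main() {
    let v = Viewport { width: 640.0, height: 480.0 };
    let s = format!("{:?}", v);
    assert_eq!(s, "Viewport { width: 640.0, height: 480.0 }");
}
```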

Closures

No more passing function pointers and a user_data by hand.
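A sketch of the difference: the closure captures its environment directly, so there is no separate context struct to thread through (the functions are illustrative):

```rust
// A higher-order function; F is any callable taking and returning i32.
fn apply_twice<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(f(x))
}

fn main() {
    let offset = 10; // captured by the closure below, no user_data needed
    let add_offset = |x| x + offset;
    assert_eq!(apply_twice(add_offset, 1), 21); // (1 + 10) + 10
}
```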

Conclusion

I haven't done the "fearless concurrency" bit yet, where the compiler is able to prevent data races in threaded code. I imagine it being a game-changer for people who write concurrent code on an everyday basis.

C is an old language with primitive constructs and primitive tooling. It was a good language for small uniprocessor Unix kernels that ran in trusted, academic environments. It's no longer a good language for the software of today.

Rust is not easy to learn, but I think it is completely worth it. It's hard because it demands a lot from your understanding of the code you want to write. I think it's one of those languages that make you a better programmer and that let you tackle more ambitious problems.


Students from Indonesia, openSUSE participates in GSoC!

openSUSE participates again in Google Summer of Code (GSoC), a program that awards stipends to university students who contribute to real-world open source projects during three months in summer. :sunny: With this article, I will provide my experience as a former GSoC student and mentor, give you more details about the program and try to encourage Indonesian students to get involved in openSUSE development through GSoC.

Why open source and openSUSE?

First of all, you may wonder why you should want to get involved in open source development. Everybody has their own reasons, but for me there are three main ones:

  • I have fun: The most important reason is that it is fun. At openSUSE, we have great conferences, geekos everywhere, geeko cookies,… and the most important part: we have fun when working!
  • I learn a lot: In most of the projects, every single line of code is reviewed. That means not only that the code quality is better, but also that every time you write something wrong or that can be improved, someone will tell you. In open source, we think that making mistakes is perfectly fine. That people correct you is the best way to learn.
  • People: I have the chance to work with really skilled people all around the world, who are interested in the same things as me.

Why GSoC?

Starting is always difficult, but you don’t have to do it alone! In openSUSE, you will always find people to help you, and with GSoC this is even easier. The best feature of the program is that you will always have at least one mentor (most likely two) who will lead you through it. In addition, you will work on a project used in the real world by many users, and all your code will be released under an open source license, so everybody can access, use, change and share it. Last, you will receive a stipend: 2400 dollars in Indonesia.

Projects

At openSUSE, you can find projects written in Ruby on Rails, Perl, Ruby, HTML/JavaScript, C/C++ and much more. This year you can work during GSoC in some of the most central and biggest projects in openSUSE: OpenBuildService, openQA and YaST. They will for sure be challenging projects to work in, but don’t get scared, as that means that you will learn a lot from it too. And remember that your mentors and other openSUSE contributors will be there to help you!

But we also have simpler projects such as Trollolo, where any computer science university student could get started with Ruby. The desire to learn is much more important than the previous experience and knowledge.

You can find all the projects and more information on our mentoring page: http://101.opensuse.org. And if the projects at openSUSE don’t match your expectations, you can check other organizations: https://summerofcode.withgoogle.com/organizations. You should look for a project that you consider interesting and that will allow you to learn as much as possible.

Let’s do it!

When I travelled to openSUSE Asia in October, I had the chance to speak with some members of the Indonesian community and I found out that we have a huge community of openSUSE users in Indonesia. A big part of this community consists of university students of Computer Science. However, those students don’t get involved in the development of openSUSE. I also heard many times during the conference that students in Indonesia, and in Asia in general, are shy and find open source development scary. I hope that with this blog post I can help Indonesian students lose that fear and encourage them to get involved in openSUSE development.

The GSoC application period starts on March 12th, but you can already take a look at the organizations and projects and find the best one for you. Approaching the people in the project is also important, as you will be working with them for three months. We recommend making at least one contribution to the project you want to apply for, as that will help you find out if this is the right project for you and to write a good proposal, but you do not need to send a lot of pull requests.

And if you have doubts, do not hesitate to contact us. You can tweet us at @opensusementors, write to our mailing list (opensuse-project@opensuse.org) or directly contact the mentors. We are looking forward to hearing from you, so don’t be shy! :green_heart:

GSoC meetup at openSUSE conference 2016

About me

My name is Ana María Martínez and I started with openSUSE as a GSoC student. Since then I keep contributing to open source projects inside and outside openSUSE. I am currently working at SUSE on the Open Build Service Frontend Team and I am a mentor for openSUSE at GSoC. You can find me on GitHub (@Ana06) and contact me by email, Twitter (@anamma_06), IRC (@Ana06) or by writing a comment on this blog post. :wink:


Ceph Day Germany 2018 - Follow-Up

Last week the Ceph Day Germany took place at Deutsche Telekom in Darmstadt. Around 160 people took part in the event and attended the talks of the 13 speakers over the day.

It was a great day with a lot of discussions, feedback and networking for the German/European Ceph community. The event ended in a networking reception with drinks and finger food, planned for an hour but actually ending after nearly two.

A big thank you to the sponsors (SUSE, SAP, Deutsche Telekom AG, Tallence, Intel, 42on.com, Red Hat and Starline) who made the event possible. The same applies to the speakers and attendees; without you it would have been a meaningless event. A special thank you to Danielle Womboldt from Red Hat for all the organisational help, and to Leo as the Ceph community manager.

We didn't manage to record all sessions due to technical problems, but we got most of them. You can find the videos and slides from the speakers in the agenda of the Ceph Day or directly in the following Google Drive folder. We will also upload the recordings to the Ceph YouTube channel later. There is also a subfolder with some impressions of the day.

I'm proud that we hosted the event. If you could not attend or would like to learn more about the community, I recommend attending the next Ceph Day in London in April 2018. See you next time, and at Cephalocon APAC next month in Beijing.


2018w05-06: package drop list, Tumbleweed snapshots update, leaper no longer requires maintainer review, and more

package list generator gains drop list generation

The new pkglistgen.py, used by Leap 15.0, gained integration of the drop list generator, which adds weakremover(package_name) entries to the openSUSE-release package to clean up packages that were provided in previous Leap releases but are no longer available. The initial batch of changes was fairly significant. The release corresponding to the packages that are no longer provided can be seen as comments in 000product/obsoletepackages.inc. With the drop list integrated, the last piece of the upgrade puzzle should be in place and the openQA upgrade tests should be satisfied.

Additionally, as part of the drop list integration a set of functions for determining projects within a product family were added which should be useful for abstracting leaper.py to avoid hard-coding potential request project sources.

Tumbleweed snapshots: update and Mesa postmortem usage

Up until now I had been watching over the snapshotting process manually to gauge when to trigger the full snapshot and ensure things worked properly. Part of the reason was still having to rely on a non-stage.opensuse.org mirror and ensuring the mirror was fully in sync before snapshotting. After observing the process I went ahead and added a two-phase update mechanism, as had been my intention. When a new snapshot is detected, the metadata and redirection rules are updated immediately, but the actual snapshotting is delayed by 4 hours. This allows end-users to begin using the latest snapshot immediately, since it redirects to the mirror network even though the files have not yet been snapshotted.

Unfortunately, I discovered that AWS S3 static website hosting has a limit of 50 redirection rules, which therefore limits the number of snapshots to 50 as well. As such, a count limiter was added to the expiration routine and the first snapshot, 20171115, was removed. Going forward, a rolling 50 snapshots will be kept unless proper openSUSE hosting is provided, which could use a different redirection mechanism without the limit.

As many of you are likely aware, Mesa 18.0.0-rc3 caused quite some issues for Tumbleweed users due to on-disk cache corruption. As part of the postmortem, Tumbleweed snapshots were used to compare released versions of Mesa to verify assumptions about Mesa's behavior and aid in fully resolving the problem upstream.

leaper no longer requires maintainer review for Factory submissions

Based on feedback from maintainers the leaper bot will no longer require maintainer review for automated submissions from Factory to Leap. Expect this to be deployed early next week.

request splitter grouping by hashtag

An interesting use-case for the request splitter's grouping functionality: the CaaSP staging master inquired about allowing submitters to place a hashtag, or similar, in request descriptions, which could then be used to group them together in a single staging. Given the flexible nature of the request splitter the capability already exists, so it was added as an example in the documentation. It may also make sense to create a strategy that looks for certain hashtags and automatically groups in such a manner.

osc staging select --filter-by 'contains(description, "#Portus")'

repo checker improving turnaround time

One of the goals in making comments on devel packages was to notify maintainers of problems that made it past the staging process and to hasten a fix. Obviously the preference is that problems never make it past staging, but given the nature of the process this cannot be guaranteed. Through observation of the comments made by the repo checker and the corresponding problems being removed from the overall list of issues, one can observe some rather fast turnaround times (adding these to metrics.o.o would be interesting). One such example can be seen with the cinnamon package, which had the fix already accepted into Factory by the time a user reported the problem. Certainly an encouraging result, and much thanks to the maintainer.

last year

A variety of long-standing bugs and feature requests were handled with regards to factory-auto.

  • #673: source-checker: handle add/remove changes files for patch diff check.
    • packages with multiple subpackages and .changes files would cause an error when removing a .changes file
  • #679: check_tags: warn when issue references are removed (plus other fixed)
    • the existing code was backwards among other things
  • #682: check_source: allow self-submission.
    • allows for reverting a package by submitting a previous revision of itself

On a related note the underlying ReviewBot code was refactored to improve re-use.

  • #684: ReviewBot: refactor leaper comment from log functionality.
    • laid the groundwork for many future improvements to ReviewBot comment handling
  • #685: ReviewBot: use super().check_source_submission() in subclasses.

The request splitter saw a number of bug fixes post-porting and follow-ups to ported behavior.

  • #662: Remove source devel project check carried over from flawed adi logic
  • #674: request_splitter: delete requests should always be considered in a ring.
  • #676: request_splitter: house keeping (and fix for SLE [non-ring] workflow)
  • #681: adi: map wanted_requests to str() before passing to splitter.

A major performance improvement was made to the staging tools along with documentation improvements and a link to the staging dashboard to increase visibility to submitters.

  • #654: stagingapi: Avoid search/package query to determine devel project.
  • #683: stagingapi: update_status_comments() include link to dashboard.
  • #686: osc-staging: add missing documentation and correct/clean existing

The issue diffing tool, for reporting issues fixed in SLE that have not yet been merged into Factory, was enhanced to intelligently pick a user to whom to assign the auto-generated bugs.


FOSDEM 2K18

The Free and Open Source Software Developers' European Meeting (FOSDEM) 2018 happened over the weekend. FOSDEM is a fantastic conference where Free Software enthusiasts from all over Europe can meet once a year (more than 17,000 MAC addresses were registered this time). I really like its clearly unique atmosphere. As usual, 2 amazing days in Brussels, Belgium…

That's the reason I visit it again and again: it energizes you. When you see all these developers, visit the talks and presentations, have so many conversations with contributors, and pick up the innovative ideas shared there, it motivates you to grab a copy of The Linux Programming Interface and the TCP/IP Guide and hack away at your amazing Linux software 🙂
Most of the talks were recorded. That's nice, but by watching online you just get the information; by attending the talks in person and taking part in the discussions, you get much, much more.

This time I was there together with my LiMux colleagues from Landeshauptstadt München. By the way, as you may know, we're going to share as much as we can until Munich migrates to Windows (unfortunately, because of bureaucratic reasons, making software freely available, and sharing it, is sometimes not as easy as we would like).

Like last year, FOSDEM was organized into so-called “developer rooms”. At first I planned to visit devrooms such as debugging tools, DNS/DNSSEC, and security + encryption; that was my target when I planned my program. But as I noticed later, I was not the only one planning to visit the same talks, hacking sessions and open discussions 🙂 That led to a shortage of free places in the devrooms and made my program a bit more dynamic than first planned 🙂 But don't get me wrong: that was absolutely no problem for me. Outside the rooms we also had very interesting conversations. I met ex-colleagues, friends whom I had previously known only from mailing lists and IRC, and of course a lot of openSUSE contributors.

I would like to thank FOSDEM's staff and everyone who made it happen and helped to organize it (I'm definitely going to send feedback to you). Thanks to GitHub for the free coffee. Keep it going 🙂 I also have to say thanks to openSUSE's Travel Support Program, which supported my visit to this amazing event (and not for the first time!). I'm going to visit FOSDEM again next year. My photos can be found here. See you next time 😉

Greg Kroah-Hartman

Linux Kernel Release Model

Note

This post is based on a whitepaper I wrote at the beginning of 2016 to be used to help many different companies understand the Linux kernel release model and encourage them to start taking the LTS stable updates more often. I then used it as a basis of a presentation I gave at the Linux Recipes conference in September 2017 which can be seen here.

With the recent craziness of Meltdown and Spectre , I’ve seen lots of things written about how Linux is released and how we handle handles security patches that are totally incorrect, so I figured it is time to dust off the text, update it in a few places, and publish this here for everyone to benefit from.