Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

19 August, 2014


A new gem

A new gem has started in the world. Youtube_dlhelper is not just another YouTube gem; it's a helper gem. What does it do? The Youtube_dlhelper gem downloads a YouTube video from a given URL, creates the needed directories and transcodes the file from *.m4a to *.mp3. Read the full README for how to use the gem.

Where is it?

You can find it here: https://github.com/saigkill/youtube_dlhelper (the link goes directly to the README).

How to use it?

Just run it with: youtube_dlhelper.rb YourUrl The new file is placed in $Musicfolder/Groupname/Youtube-Music, or, if you have chosen an artist, it goes to $Musicfolder/Surname_Firstname/Youtube-Videos.

Have a lot of fun :-)

Sascha Manns: Welcome

07:00 UTC



Hello and welcome to my new Jekyll Bootstrap page. Because of some bandwidth problems I moved my blog to GitHub.

Sascha Manns: Jekyll Introduction

07:00 UTC


This Jekyll introduction will outline specifically what Jekyll is and why you would want to use it. Directly following the intro we’ll learn exactly how Jekyll does what it does.


What is Jekyll?

Jekyll is a parsing engine bundled as a Ruby gem used to build static websites from dynamic components such as templates, partials, Liquid code, Markdown, etc. Jekyll is known as “a simple, blog-aware, static site generator”.


This website is created with Jekyll, and there are many other Jekyll websites.

What does Jekyll Do?

Jekyll is a Ruby gem you install on your local system. Once there you can call jekyll --server on a directory and, provided that directory is set up in the way Jekyll expects, it will do magic stuff like parse Markdown/Textile files, compute categories, tags, and permalinks, and construct your pages from layout templates and partials.

Once parsed, Jekyll stores the result in a self-contained static _site folder. The intention here is that you can serve all contents in this folder statically from a plain static web-server.
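As an illustrative sketch (names follow standard Jekyll conventions; the post shown is made up), a minimal source directory and the generated _site folder might look like this:

```
.
├── _config.yml                 # site-wide configuration
├── _layouts/
│   └── default.html            # layout template pages are built from
├── _posts/
│   └── 2014-08-19-hello.md     # posts, named YYYY-MM-DD-title.md
├── index.html
└── _site/                      # generated static output - serve this folder
```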

You can think of Jekyll as a normal dynamic blog, except that rather than parsing content, templates, and tags on each request, Jekyll does this once beforehand and caches the entire website in a folder for serving statically.

Jekyll is Not Blogging Software

Jekyll is a parsing engine.

Jekyll does not come with any content nor does it have any templates or design elements. This is a common source of confusion when getting started. Jekyll does not come with anything you actually use or see on your website - you have to make it.

Why Should I Care?

Jekyll is very minimalistic and very efficient. The most important thing to realize about Jekyll is that it creates a static representation of your website, requiring only a static web server. Traditional dynamic blogs like WordPress require a database and server-side code. Heavily trafficked dynamic blogs must employ a caching layer that ultimately performs the same job Jekyll sets out to do: serve static content.

Therefore, if you like to keep things simple and you prefer the command line over an admin panel UI, then give Jekyll a try.

Developers like Jekyll because we can write content like we write code:

  • Ability to write content in markdown or textile in your favorite text-editor.
  • Ability to write and preview your content via localhost.
  • No internet connection required.
  • Ability to publish via git.
  • Ability to host your blog on a static web-server.
  • Ability to host freely on GitHub Pages.
  • No database required.

How Jekyll Works

The following is a complete but concise outline of exactly how Jekyll works.

Be aware that core concepts are introduced in rapid succession without code examples. This information is not intended to specifically teach you how to do anything, rather it is intended to give you the full picture relative to what is going on in Jekyll-world.

Learning these core concepts should help you avoid common frustrations and ultimately help you better understand the code examples contained throughout Jekyll-Bootstrap.

Initial Setup

18 August, 2014

Jos Poortvliet: How else to help out

12:09 UTC

Yesterday I blogged about how to help testing. Today, let me share how you can facilitate development in other ways. First of all - you can enable testers!

Help testers

As I mentioned, openSUSE moved to a rolling release of Factory to facilitate testing. KDE software has development snapshots for a few distributions. ownCloud is actually looking for some help with packaging - if you're interested, ping dragotin or danimo on the owncloud-client-dev IRC channel on freenode (web interface for IRC here). Thanks to everybody helping developers with this!

KDE developers hacking in the mountains of Switzerland


Of course, there is code. Almost all projects I know have developer documentation. ownCloud has the developer manual and the KDE community is writing nothing less than a book about writing software for KDE!

Of course - if you want to get into coding ownCloud, you can join us at the ownCloud Contributor Conference in two weeks in Berlin, and KDE has Akademy coming just two weeks later!

And more

Not everybody has the skills to integrate zsync in ownCloud to make it upload only changes to files, or to juggle complicated APIs in search of better performance in Plasma, but there is plenty more you can do. Here is a KDE call for promo help as well as KDE's generic get involved page. ownCloud also features a list of what you can do to help, and so does openSUSE.

Or donate...

If you don't have the time to help, there is still something: donate to support development. KDE has a page asking for donations and spends the donations mostly on organizing developer events. For example, right now, planet KDE is full of posts about Randa. Your donation makes a difference!

You can support ownCloud feature development on bountysource, where you can even put money on a specific feature you want. This provides no guarantees - a feature can easily cost tens to hundreds of hours to implement, so multiple people will have to support a feature. But your support can help a developer spend time on this feature instead of working for a client and still be able to put food on the table at home.

So, there are plenty of ways in which you can help to get the features and improvements you want. Open Source software might be available for free, but its development still costs resources - and without your help, it won't happen.


I’ve been talking about how code construction is necessary to your coding project. With JavaScript, making your code shine is more important than ever. There will be many, many eyes looking at your code to see whether it’s worth anything. After getting friendly with js-beautify there is no excuse to keep your ugly-looking code lying around.


Js-beautify is available (like the JSHint tool from last time) as a web service. If you just want to drop your JavaScript code into http://jsbeautifier.org/ and see whether the tool works, go ahead. You can also use the site to work out your code-guideline parameters, because you can instantly test what the parameters look like. The js-beautify GitHub repository can be found at https://github.com/beautify-web/js-beautify

How does it work?

Like last time, we torture the Leaflet 0.7.2 minified code. It’s a pain to make it look good by hand (and why should one, when good-looking source is available without touching the minified version), but with this kind of example one can see how this tool really works. After installation with the npm tool or from my personal repo https://build.opensuse.org/project/show/home:illuusio:nodejs-fedora you have a tool called js-beautify in your path. Running it is as simple as this:

js-beautify leaflet.js

After typing the command one sees the code output to STDOUT in a more readable form. So is this the only code style you can get? No, you can customize it until the bitter end of your life. If you like more K&R-style formatting you can get it with:

js-beautify --brace-style=expand --indent-size=4 -f leaflet.js

OK, so one has to remember the command line parameters every time… nice. Actually not. There can be a JSON-formatted config file, and the defaults are like this: https://github.com/beautify-web/js-beautify/blob/master/js/config/defaults.json so using the command line like this you can use your own defaults:

js-beautify --config=yourjsonconfig.json leaflet.js
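For example, a small config of this kind (option names as in the defaults file linked above) gives you a house style without retyping flags every time:

```json
{
  "indent_size": 4,
  "brace_style": "expand",
  "end_with_newline": true
}
```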

Wait it’s not over yet!

With this tool there is also the possibility to format CSS and HTML. You can use the same js-beautify tool or the shortcuts css-beautify and html-beautify. These are nice tools for formatting HTML or CSS. There is also a Python version of js-beautify available, but it’s not in the focus of this post. It has the same capabilities as the Node.js version. So if you have read this and your code still looks like the dog ate your homework, then it’s your own fault. Now that we have made code look very good, next time I’ll make it ugly…

17 August, 2014

Cornelius Schumacher: The Book

21:02 UTC


When inviting people to the Randa 2014 meeting, Mario had the idea to write a book about KDE Frameworks. Valorie picked up this idea and kicked off a small team to tackle the task. So in the middle of August, Valorie, Rohan, Mirko, Bruno, and I gathered in a small room under the roof of the Randa house and started to ponder how to accomplish writing a book in the week of the meeting. Three days later, with the help of many others, Valorie showed around the first version of the book on her Kindle at breakfast. Mission accomplished.

Mission accomplished is a bit of an exaggeration, as you might have suspected. While we had a first version of the book, of course there still is a lot to be added, more content, more structure, more beautiful appearance. But we had quickly settled on the idea that the book shouldn't be a one-time effort, but an on-going project, which grows over time, and is continuously updated as the Frameworks develop, and people find the time and energy to contribute content.

So in addition to writing initial content we spent our thoughts and work on setting up an infrastructure which will support a sustained effort to develop and maintain the book. While more will come, having the book on the Kindle to show around was indeed the first part of our mission accomplished.

Content-wise we decided to target beginning and mildly experienced Qt developers, and to present the book as some form of cookbook, with sections about how to solve specific problems, for example writing file archives, storing configuration, spell-checking, concurrent execution of tasks, or starting to write a new application.

There already is a lot of good content in our API documentation and on techbase.kde.org, so the book is more a remix of existing documentation spiced with a bit of new content to keep the pieces together or to adapt it to the changes between kdelibs 4 and Frameworks 5.

The book lives in a git repository. We will move it to a more official location a bit later. It's a combination of articles written in Markdown and compiling code, from which snippets are pulled into the text as examples. A little bit of tooling around pandoc gives us the toolchain and infrastructure to generate the book without much effort. We actually intend to automatically generate current versions with our continuous integration system whenever something changes.

While some content now is in the book's git repository, we intend to maintain the examples and their documentation as close as possible to the Frameworks they describe. So most of the text and code is supposed to live in the same repositories where the code is maintained. They are aggregated in the book repository via git submodules.
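As a sketch of that aggregation (the repository names here are made up for illustration; the demo builds everything in a scratch directory so it is self-contained), the standard git submodule workflow looks like this:

```shell
# Work in a scratch directory so the demo is self-contained.
cd "$(mktemp -d)"

# A stand-in for a framework repository that carries its own chapter.
git init -q karchive-docs
(cd karchive-docs \
  && git config user.email demo@example.org && git config user.name demo \
  && echo "# KArchive chapter" > chapter.md \
  && git add chapter.md && git commit -qm "add chapter")

# The book repository pulls that chapter in as a submodule under chapters/.
git init -q book
cd book
git config user.email demo@example.org
git config user.name demo
git -c protocol.file.allow=always submodule add ../karchive-docs chapters/karchive
git commit -qm "aggregate karchive chapter"

# Anyone cloning the book later fetches all chapters with:
git -c protocol.file.allow=always submodule update --init --recursive
```

The generated .gitmodules file records where each chapter repository lives, so the book build can check out matching revisions of text and code.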

Comments and contributions are very welcome. If you are maintaining one of the Frameworks or you are otherwise familiar with them, please don't hesitate to let us know, send


Back online after being several weeks late, I’ve tried my best to resolve the case of Factory rolling releases.

After some hacks on the latest Sebastian Siebert beta version (made in June), I’ve now been able to build BETA fglrx RPMs for several openSUSE versions.

One day AMD will release a stable version, or not. (On my side I prefer to see more effort put into the free radeon driver.)


This release concerns only owners of a Radeon HD5xxx or above. All owners of HD2xx and HD4xx cards are strongly encouraged to use the free radeon driver (which received a lot of improvements in 3.11+ kernels).

This is experimental & BETA software, but it could fix issues you have encountered (e.g. FGLRX not working on openSUSE 13.1).

What happened to Sebastian?

I would like to have some news about Sebastian Siebert; he’s an essential key for future updates.
This time I was able (even several weeks late) to adjust the script to create a build for openSUSE Factory.
But one day something will break in the kernel or somewhere else, and we will all need to find a way to fix it.

So if you’re in touch with Sebastian, could you drop me a comment or a private mail?

I would like to continue the good support we created 3.5 years ago, or at least know whether I’m orphaned :-(

Beta Repository

To make things clear about the status of the drivers, they will not be published under the normal stable repository http://geeko.ioda.net/mirror/amd-fglrx.
Some time ago I created a beta repository located at http://geeko.ioda.net/mirror/amd-fglrx-beta.
The FGLRX 14.20 beta1 RPMs are released for openSUSE versions 12.3, 13.1 (+Tumbleweed), and Factory.

Packages are signed with my generic builder GPG key at Ioda-Net (GPG key id 65BE584C).

For those interested in contributing, or in the patches applied to the last Sebastian version, the raw-src directory on the server contains all the material used.

Installing the new repository

Assuming you have the normal repository named FGLRX (use zypper lr -d to find the number or name you gave it), start by disabling it,
so you can quickly fall back to it when a new stable version is published. Open a root console (or add sudo at your convenience) and issue the following command:

zypper mr -dR FGLRX


To add the new repository, in the same console as root issue the following command, which will normally add the right repository for your distribution:

zypper ar -n FGLRX-BETA -cgf http://geeko.ioda.net/mirror/amd-fglrx-beta/openSUSE_`lsb_release -r | awk '{print $2}'` FGLRX-BETA

If you are using Tumbleweed use this one

zypper ar -n FGLRX-BETA -cgf http://geeko.ioda.net/mirror/amd-fglrx-beta/openSUSE_Tumbleweed FGLRX-BETA

Now the update/upgrade process

zypper dup -r FGLRX-BETA

Let the system upgrade the package, and try to enjoy the new beta.

Upgrading from the previous beta

Let zypper do its magic.


A few weeks ago, during SUSE Hack Week 10 and the Berlin Qt Dev Days 2013, I started to look for Qt-based libraries, set myself the goal of creating one place to collect all Qt-based libraries, and made some good progress. We had come up with this idea when a couple of KDE people came together in the Swiss mountains for some intensive hacking, which is where the idea of Inqlude, the Qt library archive, was born. We were thinking of something like CPAN for Qt back then. Since then there has been a little bit of progress here and there, but my goal for the Hack Week was to complete the data to cover all relevant Qt-based libraries out there.

The mission is accomplished so far. Thanks to the help of lots of people who contributed pointers, meta data, feedback, and help, we have a pretty comprehensive list of Qt libraries now. Some nuts and bolts are still missing in the infrastructure which are required to put everything on the web site, and I'm sure we'll discover some hidden gems of Qt libraries later, but what is there is useful and up to date. If some pieces are not, contributions are more than welcome.

Many thanks as well to the people at the Qt Dev Days, who gave me the opportunity to present the project to the awesome audience of the Qt user and developer community.


The format

The first key component of the project is the format for describing a Qt-based library. It's a JSON format, which is quite straightforward. That makes it easy to handle programmatically by tools and other software, while also staying quite friendly to the human eye and a text editor.

The schema describes the meta data of a library and its releases, like name, description, release date and version, links to documentation and packages, etc. The data for Inqlude is centrally collected in a git repository using this schema, and the tools and the web site make use of it to provide nice and easy access to users.
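To give an idea of the flavor, here is an illustrative sketch along the lines described above; the field names and values are made up, not the exact schema:

```json
{
  "name": "examplelib",
  "release_date": "2014-08-01",
  "version": "1.2.0",
  "summary": "An example Qt-based library",
  "urls": {
    "homepage": "http://example.org/examplelib",
    "vcs": "http://example.org/examplelib.git"
  },
  "licenses": ["LGPLv2.1+"]
}
```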


Tooling

The second key component is the tooling around the format. The big advantage of having a structured format for the data is that it makes it easy to write tools to deal with it. We have a command line client, which currently is mostly used to validate and process the data, for example for generating the web site, but it is also meant to help users with installing and downloading libraries. It's not meant to replace a native package manager, but to integrate with whatever your platform provides. This area needs some more work, though.

In the future it would be nice to have some more tools. I would like to see a graphical client for managing libraries, and integration with IDEs, such as Qt Creator or KDevelop would also be awesome.

Web site

The third key component is the web site. This is the central place for users


Three years ago, at Randa 2011, the idea and first implementation of Inqlude, the Qt library archive, was born. So I'm particularly happy today to announce the first alpha release of the Inqlude tool, live from Randa 2014.

Picture by Bruno Friedmann

I have used the tool for creating the web site for quite a while and it works nicely for that. It can also create and check manifest files, which is handy when you are creating or updating these for publication on the web site.

The handling of download and installation of packages of libraries listed on Inqlude is not ready yet. There is some basic implementation, but the meta data needed for it is not there yet. This is something for a future release.

I put down the plan for the future in a roadmap. This release, 0.7, is the first alpha. The second alpha, 0.8, will mostly come with some more documentation about how to contribute. Then there will be a beta, 0.9, which marks the point where we will keep the schema for the manifest stable. Release 1.0 will then be the point where the Inqlude tool comes with support for managing local packages, so that it's useful as an end-user tool for developers writing Qt applications. This plan is not set in stone, but it should provide a good starting point. Longer term I intend to have frequent releases to address the needs reported by users.

You will hear more in my lightning talk Everything Qt at Akademy.

Inqlude is one part of the story of making the libraries created by the KDE community more accessible to Qt developers. With the recent first stable release of Frameworks 5, we have made a huge step towards that goal, and we just released the first update. A lot of porting of applications is going on here at the meeting, and we are having discussions about various other aspects of how to get there, such as a KDE SDK, how to address third-party developers, documentation of frameworks, and more. This will continue to be an interesting ride.

15 August, 2014


#1: openSUSE Factory Rolling Release Distribution

Over the course of the last several months a lot of changes were made to the development process for openSUSE Factory. This means it’s no longer a highly experimental testing dump; it’s now a viable rolling release distribution in its own right. You can read all about the details here. I installed openSUSE Factory in a virtual machine yesterday and it seems to run great. Of course, to really judge a rolling release distribution you need to run it for a sustained period of time.

No rolling release distribution will ever be my preferred day-to-day operating system, but nevertheless I’m pretty excited about the “new” openSUSE Factory. I think the changes will enable version whores and bleeding edge explorers to finally have a truly symbiotic relationship with the users who value productivity and predictability in their PC operating system.


#2: KDE Frameworks 5 and Plasma 5

Since I was already testing openSUSE Factory, it was a great opportunity to finally get my feet wet with the new KDE Frameworks 5 and the Qt5-based KDE Plasma 5 workspace, initially released about a month ago. Obviously it’s still lacking some features and polish, but it’s already usable for forgiving users who know what they’re doing, and it shows great promise.


#3: 4G on the Jolla

My provider enabled 4G on my subscription and offered to send me a new SIM Card gratis. So now my Jolla is sporting 4G. Unfortunately it only took about 5-10 minutes of speed testing (peaking at 12 MB/s, averaging about 10 MB/s) to use all my available bandwidth for the month, so for the rest of August I’ve been speed dropped to 64 Kbps, but hey, it’s still 4G!


#4: Richard Stallman presenting with a slideshow

Who’d have ever thought they’d see the day that Stallman would do a presentation with accompanying slides? Well it happened, and I think this great use of slides helps him communicate more effectively. Watch the video and judge for yourselves (27 MB, 13 minutes).




Creating high-quality content takes time. A lot of people write nowadays, but very few are writers. In the software industry, most of those who write very well are on the marketing side, not the technical side.

The impact of high-quality content is very high over time. Engineers and other profiles related to technology tend to underestimate this fact. When approaching the creation of content, their first reaction is to think about the effort it takes, not the impact. Marketers have exactly the opposite view. They tend to focus on the short-term impact.

Successful organizations have something in common. They put a lot of effort and energy into reporting efficiently across the entire organization, not just vertically but horizontally, not just internally but also externally. Knowing what others around you are doing, their goals, motivations, and progress is as important as communicating results.

One of the sentences I never stop repeating is that a good team gets further than a bunch of rock stars. I think that a collective approach to content creation provides better results in general, in the mid term, than individual ones, especially if we consider how mainstream Free Software has become. There are so many people doing incredible things out there that it is becoming very hard to get attention.

Technology is everywhere, and everybody is interested in it. We all understand that it has a significant impact on our lives and will have even more in the future. That doesn't mean everybody understands it. For many of us who work in the software industry, speaking a language understandable to wider audiences does not come naturally or simply by practising. It requires learning and training.

Very often it is not enough to create something outstanding once in a while to be widely recognized. The dark work counts as much as the work that shines. The hows and whys are relevant. Reputation is not in direct relation with popularity and short-term successes. Being recognized for your work is an everyday task, a mid-term achievement. The good thing about reputation is that once you achieve it, the impact of your following actions multiplies.

We need to remember that code is meant to die, to disappear, to be replaced by better code, faster code, simpler code. A lot of the work we do ends up nowhere. Neither fact, and they are not restricted to software, means that creating that code or project was not worth it. Creating good content helps increase the lifetime of our work, especially if we do not restrict it to results.

All the above are some of the motivations that drive me to promote the creation of a team blog wherever I work. Sometimes I succeed and sometimes not, obviously.

What is a team blog for me? 

  • It is a team effort. Each post should be led by a person, an author, but created by the team.
  • It focuses on what the team/group does

14 August, 2014


Because it is still reported that the ownCloud client has an increasing memory footprint when running for a long time, I am trying to monitor the QObject tree of the client. Valgrind does not report any memory problems with it, so my suspicion was that somewhere QObjects are created with valid parent pointers referencing a long-living object. These objects might accumulate unexpectedly over time and waste memory.

So I tried to investigate the app with Robert Knight's Qt Inspector. That's a great tool, but it does not yet completely do what I need, because it only shows QWidget-based objects. But Robert was kind enough to put me on the right track, thanks a lot for that!

I tried this naive approach:

In the client's main.cpp, I implemented both of these callback functions:

 QSet<QObject*> mObjects;

 extern "C" Q_DECL_EXPORT void qt_addObject(QObject *obj) { mObjects.insert(obj); }

 extern "C" Q_DECL_EXPORT void qt_removeObject(QObject *obj) { mObjects.remove(obj); }

Qt calls these callbacks whenever a QObject is created or deleted, respectively. When an object is created I add its pointer to the QSet mObjects, and when it is deleted, it is removed from the QSet. My idea was that after the QApp::exec() call returns, I could see which QObjects are still in the mObjects QSet. After a longer run of the client, I hoped to see an unusually large number of objects being left over.

Well, what should I say… no success so far: after first tests, it seems that the number of left-over objects is pretty constant. Also, I don’t see any objects that I would not more or less expect.

So this little experiment left more questions than answers: Is the suspicion correct that QObjects with a valid parent pointer can cause the memory growth? Is my test code, as I did it so far, able to detect that at all? Is it correct to do the analysis after the app.exec() call returned?

If you have any hints for me, please let me know! How would you tackle the problem?


This is the link to my modified main.cpp:



For the past 4 months, during this year's Google Summer of Code (GSoC), a global program that offers student developers stipends to write code for open source software projects, Christian Bruckmayer collaborated with other students and mentors to code a dashboard for the Open Source Event Manager (OSEM). In this series of posts Christian will tell you about his project and what he has learned from this experience.

Google Summer of Code 2014 Logo

Christian Bruckmayer

Hey there, Christian here again. This is my last post in a series about my GSoC project. I have already explained the two big features I implemented: the dashboard and Conference Goals & Campaigns. I hope you enjoyed those articles; if you haven't read them I recommend you head over and do so. Today I would like to tell you about the most important part of GSoC for me personally: what I have learned during this summer!

The Open Source Way

Open Retrospective

I can really say that I gained a lot of experience, both technically and personally, during GSoC. Working together the open source way was a great experience. It goes like this: I discuss a feature with the OSEM team in GitHub issues, then I start to implement the feature and send a pull request to our repository. The mentors then review my code and give me their suggestions for improving it. After I have incorporated the suggestions, the process starts again.

This feedback helped me a lot. We discussed code smells, bad design decisions, or wrong assumptions right there, next to the code on GitHub. And as four eyes see more than two, this process ensured that only good code gets into the repository!

Working together, but self-driven

On the one hand it was awesome to work together with experienced and very skilled developers. The constructive criticism that I got for my work helped me a lot to get better every day, and it still does. On the other hand I was responsible for my own project. It was a challenge, because no one would tell me when I had to work; no one gave me a step-by-step list. I had to learn to organize the work myself. Being a child of self-employed parents was a big advantage for me in GSoC, as I have a basic understanding of prioritization, scheduling my day, and being self-reliant. Still, working together with the other students and mentors, but self-driven, was something I learned this summer.

Test Driven Development

Another nice thing I got to know was test-driven development. During previous student jobs I had already written software tests, but only after other developers had implemented the features. In my GSoC project I got to think about the tests first, and then I started to implement the feature. Implementing something this way around, tests first, forces you to think about the design decisions. ‘Does it belong to the model or to the controller?’ or ‘How can I split this up to make it easier to test?’ are questions I


Heya geekos!

We love the fact that the openSUSE News section is being generally well adopted and well read. But, we’d like to do more, and do better! And for that, we need your input. Don’t worry, we won’t demand any 10000-character super-articles (for now :P), but what we would like from you is to fill out a little survey. It’s very, very short, as we don’t want it to be too time-consuming, but we would like to know if, generally speaking, we’re heading in the right direction. Or in the wrong one. Anyway, it would be nice to know what openSUSE News readers think about its content, so we can make it better. There’s nothing we’d like more than to bring you additional enjoyment while you’re drinking your morning coffee and clicking through your favorite news sites!

So, what we politely ask you to do is drop everything, and click here to fill out the survey. It’s short and shouldn’t take more than a minute or two of your time, but it would help us a great deal.

The survey will be open until the 31st of August. We’ll post the results in the first days of September right here at openSUSE News. And there will be a graph included:

statistics geeko inside

There shall be a fancy graph!

This is just the first step in the news team’s interaction with its geeko reader base. Needless to say, the survey is anonymous. Also, I’d like to ask you to share this survey through your social networks or with other readers, so we can get the most representative input possible.

Thanks again for helping us out, and remember to…


…have a lot of fun!

13 August, 2014

Short answer: because you should.

When somebody asks about their missing pet feature in KDE or ownCloud software, I always throw in a request for help in the answer. Software development is hard work and these features don't appear out of nowhere. There are only so many hours in a day to work on the million things we all agree are important. There are many ways to help out and speed things up a little. In this blog I'd like to highlight testing, because I see developers spend a lot of time testing their own software - and that is not as good as it sounds.

Developers also do testing!

You see, developers really want their software to be good. So when an Alpha or Release Candidate does not receive much testing from users, the developers take it upon themselves to test it.

Developers testing software has two downsides:
  • Developers tend to test the things they wrote the software to do. It might sound obvious, but usually the things that break are things the developer didn't think of: "you have 51,000 songs? Oh, I never tested the music app with more than 4,000" is what I heard just yesterday.
  • And of course, it should be obvious: early and plentiful testing by users speeds up development, so you get those features you want!
Take two lessons from this:
  • If you want things to work for you, YOU have to test it.
  • If you want those other features, too, helping out is the name of the game.

It isn't hard

In the past I wrote an extensive article on how to test for KDE, and ownCloud, too, has really nice testing documentation.

If you want to get on it now, Klaas Freitag just released ownCloud client 1.7 alpha 1 and openSUSE has moved factory to a rolling release process to make it easy to help test. KDE Applications 4.14 is at the third beta and the Release Candidate is around the corner.

Your testing doesn't just save time: it is inspiring and fun, for everybody involved. For added kicks, consider joining us at the ownCloud Contributor Conference in Berlin in two weeks, and KDE has Akademy coming just two weeks later!

Help make sure we can get our features done in time - help test and contribute your creativity and thoughts!

note: I'm not arguing here against testing by developers, rather that users should help out more! Of course, developers should make sure their code works, and unit tests and automated testing are great tools for that. But I believe nothing can replace proper end-user testing in real-life environments, and that can only really be properly done by end users.


openSUSE, despite the vastness of the www stating it’s primarily a KDE distro, prides itself on offering a one-stop shop for your operating system needs, regardless of your desktop environment preferences. And it’s true. For a couple of months, I’ve been running openSUSE GNOME exclusively on my laptop. And it worked like a charm. But there was one problem.

Whichever system I’m running, I absolutely must have a wallpaper slideshow complementing my desktop theme preferences. That way, I ensure my desktop is fresh and attractive to look at every time I close a window. But, as already stated, there was a problem. The wallpaper slideshow from extensions.gnome.org didn’t seem to work for me and kept on crashing. So I had to find an alternative. How? While search engines can be a good friend, I decided to ask on the forums, to get first-hand experience. As always, the kind geekos on the green side of the fence looked into the issue, found a solution and helped a brother out. I’ve been offered a solution, and it’s called Variety.

There was a problem in using Variety on openSUSE – the packages didn’t exist. So, naturally, malcolmlewis stepped in and packaged Variety for us to use!

How to install Variety?

Malcolm created a one-click install for this fantastic wallpaper changer. You can get it here. But, before installing, make sure to install python pillow, as it’s a dependency. You can get it here.

So what’s so special about Variety?

Along with the obvious function, which is changing wallpapers, it’s how it changes them that really matters. You can add a local pictures folder, or, if you don’t feel like it, you can let Variety download wallpapers directly from the internet from different sources (flickr, wallbase etc.). To learn more about the app’s preferences, check out this video made by a Linux Mint user.


Do you have any experience with Variety or would maybe like to suggest a wallpaper slideshow app? Join us in the comments! And until then, remember to…

…have a lot of fun!

12 August, 2014


Yo yo, geekos! Here we are, for the final chapter of our CLT hangout. Today, we’ll be talking about job control through which we’ll learn how to control processes running on our computer!

An Example

As we have learned, we can run programs directly from the CLI by simply typing the name of the program. For example, dolphin. If we type:

dolphin

…dolphin, the file manager, opens. If you look at the terminal while this process is open, you cannot access the command prompt and you cannot type a new command into the same window. If you terminate dolphin, the prompt reappears and you can type a new command into the shell. Now, how can we run a program from the CLI while also keeping our prompt available for further commands? Type:

dolphin &

…and you have your dolphin file manager running in background, with the terminal free to type another command you need.

Now imagine you forgot to type the ‘&’ character after dolphin. Simply press ‘ctrl+z’, which will stop your process and leave it idle. To resume the stopped process in the background, type:

bg

…which will restart the process in the background.
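The same job-control moves can be rehearsed with any long-running command; here sleep stands in as a harmless substitute for dolphin (a sketch, nothing openSUSE-specific):

```shell
# Practice job control with a harmless long-running command:
sleep 60 &                  # '&' runs the command in the background
echo "background PID: $!"   # the shell stores the last background PID in $!
jobs                        # lists the job alongside any other background jobs
kill "$!"                   # tidy up the practice job when you are done
```

Its counterpart fg brings a background job back to the foreground, the mirror image of bg.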

jobs, ps

Now that we have processes running in the background, you can list them either using jobs, or using ps. Try it. Just type jobs, or type ps. Here’s what I get:

nenad@linux-zr04:~> ps
8356 pts/1    00:00:00 bash
8401 pts/1    00:00:00 dolphin
8406 pts/1    00:00:00 kbuildsycoca4
8456 pts/1    00:00:00 ps


Kill a Process

How do you get rid of a process if it’s become unresponsive? By using the kill command. Let’s try it out on our previously mentioned dolphin process. First, we have to identify the PID of the process by using ps. In my aforementioned case it’s 8401 for dolphin. So to kill it, I simply type:

kill 8401

…and it kills off dolphin.

More About Kill

Kill doesn’t exist only for terminating processes; it was originally designed to send signals to them. And of course, there are a number of kill signals you can use, and how an application responds to them can differ from program to program. See the table below:

Do try them out.
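As a quick sketch (using sleep as a stand-in for a real application), here is the default signal in action, plus how to list the signals your system supports:

```shell
# Practice sending signals to a throwaway process:
sleep 60 &
pid=$!
kill "$pid"    # no signal named: kill sends 15 (SIGTERM), a polite request
kill -l        # prints the signal names your system knows (HUP, TERM, KILL...)
```

kill -9 (SIGKILL) is the blunt instrument for processes that ignore SIGTERM; it cannot be caught or ignored.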


With this lesson, we conclude our CLT series and our Tuesday hanging out. I hope that other n00bs like me managed to demystify the console in their minds and learn the basics. Now all that’s left is for you to play around (just don’t mess around the / directory too much so you don’t bork something :D).

We’ll be seeing a lot more of each other soon, as there are more series of articles from where these came from. Stay tuned, and meanwhile…


…have a lot of fun!


So some more bits have arrived for #FrankenPi. First up, we have the Samsung Multi Charging Cable for Galaxy S5. It’s basically a USB cable but with three Micro USB ends. This will allow me to power three Pi’s from one source rather than three power bricks; as it’s a cable, the ‘juice’ output will be whatever you chuck down it.

Also the Plugable USB 2.0 7 Port Hub and BC 1.2 Fast Charger with 60 Watt Power Adapter arrived this morning. It was not originally my intention to power the Pi’s off the hub, that is until I came across this piece of kit. Now a few people have questioned the power output, which tbh is all voodoo to me; however, the tech specs say the following: the supplied power adapter delivers 10A of available current across all USB ports, which leaves us 500mA short (7 * 1.5A = 10.5A). I read that to mean “the more you plug in, the less juice”, unless it’s managed in some way so that each port only gets 1.5A? At the end of the day the hub was going to be for the Pi’s to communicate with a 2.5″ external drive anyway.
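For what it’s worth, the post’s arithmetic checks out; a quick sketch of the budget, assuming the BC 1.2 ceiling of 1.5 A per port:

```shell
# Seven ports at the BC 1.2 ceiling of 1.5 A each vs. the 10 A adapter:
budget=$(awk 'BEGIN { need = 7 * 1.5; have = 10;
  printf "need %.1fA, have %.0fA, short %.1fA", need, have, need - have }')
echo "$budget"   # need 10.5A, have 10A, short 0.5A
```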



Dan Collins “That’s a nerd ménage à trois!”


10 August, 2014

A few weeks ago I announced I was joining Linaro. I work there as Director of the Core Development Group. I moved from Prague to Cambridge (the original), that is, from a continental to an oceanic climate. From dry, cold in winter and hot in summer, to wet, mild in summer and above zero most of the winter. In theory an improvement, you might think. Well, that depends on how much it rains. I will tell you more in spring.

A few days ago The Mukt published an interview where I explained a little about what Linaro is and what I do as Core Development Director.

Core Development Group

I can add that Linaro is an engineering-focused organization, divided into Engineering Groups. Some of them, like the one I am part of, are formed by several engineering teams, some of which are called Working Groups. Core Development is formed by four:

You can find more details on the Core Development wiki page, at the Linaro Wiki.

These first few weeks I have gone through the natural landing process: meeting my colleagues and managers, learning how we operate, learning about my responsibilities, the work engineers are doing and the plans for the future, analysing our internal processes, etc. Nothing unusual in these cases.

In July I had the opportunity to attend the Linaro Kernel and Power Management Sprint, hosted by one of our Members, ST, in Le Mans. It was a very interesting week.

Linaro Connect

My next event will be Linaro Connect, in San Francisco, USA, in September. Linaro Connect is the event where all the Linaro employees meet. Those of you who are familiar with the Ubuntu Developer Summit know what I am talking about. Linaro Connect takes place twice a year, each time on a different continent, and it is also an opportunity to have direct contact with our Members.

Factory as a rolling release: openSUSE development version

A few days ago it was announced that Factory moved to a rolling release model, so the first step of the 2014/2016 plan has been completed. I was very happy to see that the openSUSE team could lead the execution of this relevant step for the distro in time. The development version of openSUSE is now a reality that can not just increase the overall number of contributors, but also bring significant innovation to the SUSE Linux Enterprise integration process. Congratulations. I am very proud of being part of the team. I will always be.

I would like to especially congratulate Roland Haidl, the Director of Communities at SUSE. The most important (and hardest) thing you can get from a manager is trust, and the openSUSE team had it from him to build a good team, support the changes the team went through back in 2012 (tough times), stand strong behind the new strategy defined in 2013 and support the team during the design and execution of this first milestone. And he did this without making noise, letting the results speak.


In the words of the Raspberry Pi, “Two is better than one.” FrankenPi was born. I have a Zyxel GS-105B switch (mine is an early one that is smaller than later models with the same code) that has a full metal case. I decided to drill two holes in the switch case and attach two silver (long) metal motherboard securing nuts. Now the Pi has two securing holes already created on the board; however, I discovered these are too small for the securing nuts, so in classic low-tech style I used a Philips screwdriver as a drill-cum-borer and widened the holes to accommodate the threads. I then screwed two nuts together, which is the right height to accommodate the second Pi, and secured the first Pi by screwing the two nuts onto the thread, repeating the process for the second Pi (I shall repeat this part again when I get two more Pi’s).

Because the Pi’s are bolted to the switch there’s no need, in my opinion, to have the standard 1 metre long cables, so I had some custom cables made up at work. I’ve had to guess the length for Pi’s 3 & 4 as I don’t have them yet, but I’m confident the cables are long enough.

I’m using 16GB SD cards and Raspbian. Having installed to FrankenPi-01, I set the network to a static IP (use the text editor of your choice):

sudo vim /etc/network/interfaces

Hopefully you know how to operate your text editor of choice so I won’t go into that here. You need to change the default from dhcp to static, so the line should look like this example:

iface eth0 inet static
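For reference, a complete static stanza also carries the address details; the values below are placeholders for your own network, not settings from this post:

```
auto eth0
iface eth0 inet static
    address 192.168.0.101
    netmask 255.255.255.0
    gateway 192.168.0.1
```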

I then edited my hosts and hostname files to FrankenPi-01

sudo vim /etc/hosts
sudo vim /etc/hostname

The next step for me was to fetch all the latest updates and install tmux (FrankenPi-01 will be the master Pi). With that done, it was time for some cool time-saving stuff: cloning FrankenPi-01. As I’m 100% Linux at home I will be using dd, and I’m also fortunate to have a card reader.

Insert the SD card and, from a terminal, issue the fdisk command:

sudo fdisk -l

This will tell you the card’s name; for me it is /dev/sdd. Next, decide where you want to store the copy of FrankenPi-01 (amend to your hostname accordingly); for me it is /run/media/peter/Storage/FrankenPi/FrankenRaspbian. The FrankenRaspbian part can be anything; you can name it dogbreath for all it matters, it’s just the name of the output file, and you can store it in your /home directory if you prefer.
So now we are ready to start copying (backing up) our installation. ¹

sudo dd if=/dev/sdd of=/run/media/peter/Storage/FrankenPi 
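dd copies byte for byte, so the same invocation can be rehearsed on an ordinary file before pointing it at the real card (the paths below are throwaway examples, not the ones from this post):

```shell
# Rehearse the dd backup on a scratch file instead of the real SD card:
printf 'pretend this is an SD card' > /tmp/card
dd if=/tmp/card of=/tmp/card.backup 2>/dev/null
cmp /tmp/card /tmp/card.backup && echo "backup verified"
```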

08 August, 2014


Heya there geekos! As we have already reported, there’s an ongoing contest for the official logo for the openSUSE Asia Summit. You can read the announcement here. There have been proposals coming our way, but…

Terry Crews Yelling

For that reason, we’re issuing a reminder for all geekos with a sense of design to help us make this summit as successful as it can and should be. You won’t be doing it just for the sheer thrill of it, though. You can do it for the geeko pride of having your logo printed all over the place and on all of the promotional material, and, to top it off, there’s a super secret Geeko prize!

We kindly invite you to join our effort, and hope to have our mailbox crammed with suggestions come deadline, August 18th. And here are the super simple rules for all the potential contestants:

  1. We will accept only SVG format for the original design. Both color and monochrome (black and white) versions are required.
  2. The elements of your design should reflect the openSUSE community in Asia.
  3. Please note that there are some things that should not be used in your design:
    • No brand names or trademarks of any kind.
    • No illustrations some may consider inappropriate, offensive, hateful, tortious, defamatory, slanderous or libelous.
    • No sexually explicit or provocative images.
    • No images of weapons or violence.
    • No alcohol, tobacco, or drug use imagery.
    • No designs which promote bigotry, racism, hatred or harm against groups or individuals, or promote discrimination based on race, gender, religion, nationality, disability, sexual orientation or age.
    • No religious, political, or nationalist imagery.
  4. Your artwork should comply with “openSUSE Project Trademark Guidelines” published at: https://en.opensuse.org/File:OpenSUSE_Trademark_Guidelines.pdf
  5. You should also agree that the openSUSE community has the right to interpret the usage of the artwork.
  6. All your artwork will be licensed under CC-BY-SA 3.0.

A few simple guidelines can be found at:

Please send your design to opensuse.asia@gmail.com directly. It should contain the following:

  1. Vector file of the design as an attachment, in SVG format ONLY.
  2. Bitmap of design in attachment – image size: 256*256px at least. Format: png or jpg. Less than 512KB.
  3. Your name.
  4. Where you are working/studying now (optional).
  5. Your phone number. (optional)

The contest is open from now until Aug 18, 2014. After that, the openSUSE.Asia team will filter all submitted designs and put the ones which meet the requirements to the website for voting.


  1. The final decision will be made by the openSUSE.Asia Summit Committee. Please understand that the design with the highest vote score may not be designated as the final winner.
  2. To create your artwork, we recommend using Inkscape, which is a powerful vector graphics tool for all kinds of design. It’s free and open source.
  3. The article has been updated after discussion with openSUSE Asia team regarding entry


We just published ownCloud 7!

This awesome release brings many new features. Among them, I’m most excited about the server to server sharing.

Server to server sharing is a first step in true federation of data with ownCloud: you can add a folder shared with you from another ownCloud instance into your own. The next step would of course be to also share things like user accounts and data like chat, contacts, calendar and more. These things come with their own challenges and we’re not there yet, but if you want to help work on it – join us for the ownCloud Contributor Conference in Berlin next month!

A close runner-up in terms of excitement for me are the improvements to ownCloud Documents – real-time document editing directly on your ownCloud! We have been updating this through the 6.0.x series so the only ‘unique’ ownCloud 7 feature is the support for transparently converting MS Word documents, but that is a feature that makes Documents many times more useful!

There are many more features, you can find more details on the ownCloud website. The official announcement blog post is here.


This would not have been possible without the hard work of the ownCloud community, so a big thank-you goes out to everybody who contributed! We have a large team of almost 100 regular contributors, making ownCloud one of the largest Open Source projects and that makes me proud.

Of course we have a lot of work to do: revelations of companies and governments spying on people keep coming out and our work is crucial to protect our privacy for the future. If you want to help out with this important work, consider contributing to ownCloud. We can use help in many areas, not just coding. Translation, marketing and design are all important for the success of ownCloud!

The release of ownCloud 7 is not only the conclusion of a lot of hard work by the ownCloud community, but also a new beginning! Not only will we release updates to this release, fixing issues and adding translations, but the community now also starts to update the numerous ownCloud apps to ownCloud 7.

Expect more from us. Now, go, install ownCloud 7 and let me know what you think of it!

07 August, 2014


We are proud to announce official Docker containers for our latest openSUSE release, 13.1. Docker is an open-source project that automates the deployment of applications inside software containers. With the official openSUSE Docker containers, it’s now easy for developers to leverage the power of our Linux distribution and its free software ecosystem as the base for their applications.

openSUSE + Docker == Awesome

The Docker project was released in March last year. In this short amount of time, more than 450 people have contributed patches and 14,000 containers have been published on its central index. Docker recently released version 1.0, the first one declared enterprise-ready.

Container technology has been around for quite some time; think of FreeBSD jails, Solaris zones, OpenVZ, LXC. However, none of these tools has ever attracted as much attention as Docker. Docker has been so successful because it makes it easy to harness the power of containers while also providing two important features: a developer-oriented work flow to manage containers’ life cycle and a set of collaborative functionalities.

openSUSE at Docker Hub

Managing Docker images shares analogies with version control systems used to track the evolution of source code. Containers are stored on a central repository called Docker Hub. Users can download them using the “pull” command. They can “diff” a running container to see which changes have been made. They can fork containers and “push” their derived work back to the Docker Hub.

The creation of new containers starting from the existing ones is achieved using Docker’s integrated build system. The feature is based on a special file called “Dockerfile”, a text file containing a list of Docker build directives. These commands can do several operations like: select the image to extend, execute a command inside of the container at build time, expose a service running inside of the container to the outside world and more.
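As a hypothetical sketch of such a Dockerfile (the package choice and port are my own illustration, not from the announcement), extending the openSUSE 13.1 image:

```dockerfile
# Extend the official openSUSE image, add a package, expose a service.
FROM opensuse:13.1
RUN zypper --non-interactive install python
EXPOSE 8000
CMD ["python", "-m", "SimpleHTTPServer", "8000"]
```

Running docker build in the directory containing this file would then produce the derived image.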

Starting today the Docker Hub provides official openSUSE containers for our stable releases. These containers can be used as a foundation to create new awesome containers based on our beloved Linux distribution.

Try the official openSUSE docker containers

The first thing to do is to install Docker by following the official installation instructions for openSUSE. Users of Factory can install docker straight from the main repository. The same should happen pretty soon for Tumbleweed users (the docker package is currently staged in the Tumbleweed:testing repository).

To download the official openSUSE container just run:

docker pull opensuse:13.1

To run a program inside of the container use the following command:

docker run opensuse:13.1 <command> <command params>

There are several options for the docker run command; please refer to Docker’s documentation. However, one use case worth mentioning is running an interactive shell inside the container. This can be achieved with the following command:

docker run -t -i opensuse:13.1 /bin/bash

Creating a docker application based on the official containers is easy

06 August, 2014


In the past 4 months, during this year’s Google Summer of Code (GSoC), a global program that offers student developers stipends to write code for open source software projects, Christian Bruckmayer collaborated with other students and mentors to code a dashboard for the Open Source Event Manager (OSEM). In this series of posts, Christian will tell you about his project and what he has learned from this experience.

Google Summer of Code 2014 Logo

Hey there, Christian here again. This is my second post in a three-post series about my GSoC project; last week I explained the dashboard in my post OSEM: Conference Dashboard. You should go and read that if you haven’t already! This week I would like to tell you about another feature that I have implemented during this summer: Conference Goals & Campaigns.

Setting expectations for a conference

While working on the dashboard it became more and more evident that conference organizers have expectations about registrations, call for papers submissions and the program. Stuff like:

  • I need more than 50 people to attend to make the conference a success.
  • I need at least a hundred submissions by next month to build a reasonable schedule.
  • In the end I have room for 21 hours of program on my schedule.


We came up with the idea to express these expectations as Targets, to be able to compare them to the actual data. I believe that goals are very important, whether in business or private life, to get motivated, move forward and measure your success! Already at university I learned that setting up goals is not an easy process, and we learned different methods for this task. We wanted to make this process as easy as possible, so we decided to use the well-known SMART criteria to help the conference organizers. These criteria say that goals should be:

  • Specific: The goal should be clear and unambiguous
  • Measurable: The goal should be trackable
  • Achievable: The goal is realistic and manageable
  • Relevant: The goal should matter for your conference
  • Time-bound: The goal should be achieved in a specific period of time

To fulfill these criteria we decided to implement the Goal model with the following attributes:

  1. due_date
  2. goal_count
  3. unit

The goal is now specific (10 (goal_count) registrations (unit)) and measurable (we can compare the goal with the current registrations, submissions and program hours). Furthermore, we believe that the chosen units (registrations, submissions and program hours) are very important and relevant for each conference. And last but not least, it’s time-bound (due_date). The conference organizer is now responsible only for setting up realistic and achievable goals!

To get the current progress of the goal I simply implemented the following method:

def get_progress
  numerator = 0
  if unit == Goal.units[:submissions]
    numerator = conference.events.where('created_at < ?', due_date).count
  elsif unit == Goal.units[:registrations]
    numerator = conference.registrations.where('created_at < ?', due_date).count
  elsif unit == Goal.units[:program_minutes]
    numerator = conference.current_program_hours
  end
  (numerator / goal_count.to_f * 100).round(0).to_s
end

Depending on the unit, I query the current amount of it


JuliaLang packages are now available from the Science repository.

About Julia:

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The library, largely written in Julia itself, also integrates mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. In addition, the Julia developer community is contributing a number of external packages through Julia’s built-in package manager at a rapid pace.

You can get more information about JuliaLang on the official website.

05 August, 2014

Jos Poortvliet: ownCloud numbers

11:30 UTCmember

Last week, we went over some numbers related to ownCloud. Things like the number of people who contributed in the last 12 months or the speed of code flowing in on average. The numbers are impressive and you can read about them in this press release.


Numbers can tell you a lot. One thing is of course particularly cool: the numbers are big. Really big. ownCloud has had almost 300 people contribute code to it in the last 12 months. That is a lot. Some perspective: WordPress has had 52 contributors over its lifetime! Drupal: 149. phpBB: 190. MediaWiki: 534. Joomla: 483. VLC media player: 662. ownCloud has had 566 contributors over its lifetime. This is just one metric out of many, and the comparisons are between often wildly different projects, so take it with some salt.

One thing I think you can safely conclude: ownCloud is certainly in the big leagues. Looking at our competition, the ownCloud Client team alone (59 contributors over its life time) is bigger than any other open source file sync and share technology.

Why numbers

We primarily want to keep an eye on numbers to see if we are doing well or not. Anecdotal evidence is important (I really like to read all the positive feedback on the #ownCloud7 release) but hard numbers are very important too. For example, if we see fewer new people join ownCloud, we can see if we can improve developer documentation or have to offer better help for new developers on IRC.

We have good reasons to keep an eye on that. Open Source projects typically have a huge turnover (60%/year is normal), requiring us to keep attracting new contributors. Not only that, ownCloud Inc. has hired many community members and, through its marketing and sales machine, is increasing the number of ownCloud users enormously. We do numbers on our user base internally, and the number we make public (about 1.7 million at the moment) is a rather conservative estimate. And growing quickly: Germany's upcoming largest-ever cloud deployment will bring ownCloud to half a million users!

What effect does that have? For one, paid developers can create a 'freight train' effect, accelerating development to a point where it is hard for volunteers to catch up. This is a reason why it is good to split up the apps from the core and to improve the API offered by ownCloud. This makes it easier to keep changes more localized and easier to follow. Another effect is that the growing popularity of ownCloud brings more people to our mailing lists and forums, asking questions. That is a tough issue. Improvements in documentation can help here, but we can also think about other tools and ways to answer questions.


We can't stare ourselves blind on numbers, and we won't. Real life matters more: that is why we are working hard on preparing the ownCloud Contributor Conference later this month! But it is cool to see


Heya geekos. I’ve checked the ‘curriculum’, and we’re at part 7 of 8 as of today. Which means there will be one more – and sadly final – CLT next Tuesday. So for today, let’s deal with some permissions!

As we all know, we can have many users using one machine. To protect the users from each other, permissions have been devised. And we have already discussed file permissions, so let’s refresh our memories with a single click.


The chmod command is used for changing permissions on a directory or a file. To use it, you first type the chmod command, after that you type the permissions specification, and after that the file or directory you’d like to change the permissions of. It can be done in more than one way, but Mr. Shotts focuses on the octal notation method.

Imagine permissions as a series of bits. For every permission slot that’s not empty, there’s a 1, and for every empty one there’s a 0. For example:

rwx = 111

rw- = 110


And converting each binary pattern to its single octal digit:

rwx = 111 —> octal 7

rw- = 110 —> octal 6

r-x = 101 —> octal 5

r-- = 100 —> octal 4

Now, if we would like to have a file with read, write and executing permissions for the file owner and for the group owner of the file, but make it unavailable to all other users, we do:

chmod 770 example_file

…where example_file is any file you’d like to try this command on. So, you always have to enter three separate digits, one for each of the three groups already known from our second lesson. The same can be done for directories.
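A quick way to convince yourself on any scratch file (stat’s %a format prints the octal mode on GNU systems):

```shell
# Apply the octal notation and read the mode straight back:
touch example_file
chmod 770 example_file
stat -c '%a %n' example_file   # prints: 770 example_file
```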

su and sudo

It is sometimes necessary for a user to become a superuser, so they can accomplish a task (usually something like installing software, for example). To temporarily access superuser mode, there’s a program called su, or substitute user. You just have to type:

su

…and type the superuser password, and you’re in. However, a word of warning: don’t forget to log out, and use it only for a short period of time.

There’s also an option probably more used in openSUSE and Ubuntu, and it’s called sudo. sudo differs in that it is a special command allocated to one specific user, so unlike su, with sudo you can use your own user password instead of the superuser’s password. Example:

sudo zypper in goodiegoodie

Changing file and group ownership

To change the owner of a file, you have to run chown as a superuser. For example, if I wanted to change ownership from ‘nenad’ to ‘suse’, I do it this way:

su
[enter password]

chown suse example_file

I can also accomplish the same with changing group ownership, but with a slightly different command, chgrp. Easy peasy:

chgrp suse_group example_file

…and that’s it.
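chgrp doesn’t even need root, as long as you own the file and belong to the target group; in this sketch your own primary group stands in for suse_group:

```shell
# Change a file's group to a group you are already a member of:
touch example_file
chgrp "$(id -gn)" example_file
stat -c '%G %n' example_file   # shows the new group and the file name
```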

Next time

As I already stated

Jakub Steiner: GUADEC

10:20 UTCmember


This blog post is mostly about showing some photos I took, but I may as well give a brief summary from my point of view.

Had a good time in Strasbourg this week. Hacked a bit on Adwaita with Lapo, who has fearlessly been sanding the rough parts after the major refactoring. Jim Hall uncovered the details of his recent usability testing of GNOME, so while we video chatted before, it was nice to meet him in person. Watched Christian uncover his bold plans to focus on Builder full time which is both awesome and sad. Watched Jasper come out with the truth about his love for Windows and Federico’s secret to getting around fast. Uncovered how Benjamin is not getting more aerodynamic (ie fat) like me. Enjoyed a lot of great food (surprisingly had crêpes only once).

In a classic move I ran out of time in my lightning talk on multirotors, so I’ll have to cover the topic of free software flight controllers in a future blog post. I managed to miss a good number of talks I intended to see, which is quite a feat, considering the average price of beer in the old town. Had a good time hanging out with folks, which is so rare for me.

During the BOFs on Wednesday I sat down with the Boxes folks, discussing some new designs. Sadly it was only a few brief moments that I managed to talk to Bastian about our Blender workflows. Unfortunately, the Brno folks from whom I stole a spot in the car had to get back on Thursday, so I missed the Thursday and Friday BOFs as well.

Despite the weather I enjoyed the second last GUADEC. Thanks for making it awesome again. See you in the next last one in Gothenburg.

03 August, 2014


FOSDEM now invites proposals for main track presentations and developer rooms.

FOSDEM offers open source developers a place to meet, share ideas and collaborate. Renowned for being highly developer-oriented, the event brings together some 5000+ geeks from all over the world.

The fifteenth edition will take place on Saturday 31 January and Sunday 1 February 2015 at the usual location: ULB Campus Solbosch in Brussels.

We now invite proposals for main track presentations and developer rooms.

Previous editions have featured tracks centered around security, operating system development, community building, and many other topics. Presentations are expected to be 50 minutes long and should cater to a varied technical audience. The conference covers travel expenses and arranges accommodation for accepted main track speakers.

Developer rooms are assigned to self-organizing groups to work together on open source projects, to discuss topics relevant to a broader subset of the community, etc. Content may be scheduled in any format, subject to approval. Popular formats include presentation tracks, hacking sessions and panel discussions. Proposals involving collaboration across project or domain boundaries are strongly encouraged.

Proposals for main track presentations should be submitted using Pentabarf:


Developer room proposals should be emailed to devrooms@fosdem.org and be as detailed as possible. In particular, coordinators should indicate their affinity with the topic being proposed and provide a rough idea of the content they plan to schedule.

Key dates:

15 September
deadline for developer room proposals

1 October
deadline for main track proposals
accepted developer rooms announced

