Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

21 November, 2014


So someone leaked 2011 era PowerVR SGX microcode and user space... And now everyone is pissing themselves like a bunch of overexcited puppies...

I've been fed links from several sides now, and I cannot believe how short-sighted and irresponsible people are, including a few people who should know better.


Having gotten that out of the way, I am writing this blog to put everyone straight and stop the nonsense, and to calmly explain why this leak is not a good thing.

Before I go any further: IANAL, but I clearly do seem to tread much more carefully on these issues than most. As always, feel free to debunk what I write here in the comments, especially you actual lawyers, and especially those lawyers in the EU.

LIBV and the PVR.

Let me just, once again, state my position towards the PowerVR.

I have worked on the Nokia N9, primarily on the SGX kernel side (which is of course GPLed), but I also touched both the microcode and userspace. So I have seen the code, worked with it, and am very much burned on it. Unless IMG itself gives me permission to do so, I am not allowed to contribute to any open source driver for the PowerVR. I personally also include the RGX, and not just the SGX, in that list, as I believe that some things do remain the same. The same is true for Rob Clark, who worked with the PowerVR while at Texas Instruments.

This is, however, not why I try to keep people from REing the PowerVR.

The reason why I tell people to stay away is the design of the PowerVR and its driver stack: the PVR is heavily microcode driven, and this microcode is loaded through the kernel from userspace. The microcode communicates directly with the kernel through some shared structs, which change depending on build options. There are sometimes extensive changes to the microcode, kernel and userspace code depending on the revision of the SGX, the customer project and the build options, and sometimes the whole stack is affected, from microcode to userspace. This makes the PowerVR a very unstable platform: change one component, and the whole house of cards comes tumbling down. A nightmare for system integrators, but also bad news for people looking to provide a free driver for this platform. As if the murderous release cycle of mobile hardware wasn't bad enough of a moving target already.

The logic behind my attempts to keep people away from REing the PowerVR is, on the one hand, to focus the available decent developers on more rewarding GPUs and to keep people from burning out on something as shaky as the PowerVR. On the other hand, by getting everyone working on the other GPUs, we are slowly forcing the whole market open, singling out Imagination Technologies. At some point, IMG will be forced to either do this work itself, and/or to directly support open


I bought my wife a "new" old Thinkpad (T400, Core2 duo) to replace her old compaq nc6000 (Pentium M Dothan). Of course I installed it with openSUSE 13.2. Everything works fine. However, we soon found out that it takes ages to boot, something around 50 seconds, which is much more than the old machine (running 13.1 on an IDE SSD vs 13.2 on a cheap SATA SSD in the T400).
Investigating, I found out that in 13.2 the displaymanager.service is now a proper systemd service with all the correct dependencies instead of the old 13.1 xdm init script.
At home, I'm running NIS and autofs for a few NFS shares and an NTP server for the correct time.
The new displaymanager.service waits for timesetting, user account service and remote file systems, which takes lots of time.
So I did:

systemctl disable ypbind.service autofs.service ntpd.service
In order to use them anyway, I created a short NetworkManager dispatcher script which starts / stops the services "manually" if an interface goes up or down.
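Such a dispatcher script could look roughly like this (a sketch; the filename is an assumption, and NetworkManager passes the interface and action as the first two arguments):

```sh
#!/bin/sh
# Hypothetical /etc/NetworkManager/dispatcher.d/50-lan-services:
# start/stop the network-dependent services when an interface goes up/down.
ACTION="$2"   # NetworkManager calls the script as: <script> <interface> <action>

case "$ACTION" in
    up)
        systemctl start ypbind.service autofs.service ntpd.service
        ;;
    down)
        systemctl stop ypbind.service autofs.service ntpd.service
        ;;
esac
```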
This brings the startup time (until the lightdm login screen appears) down to less than 11 seconds.
The next thing I found was that the machine would not shut down if an NFS mount was active. This was due to the fact that the interfaces were already shut down before the autofs service was stopped or (later) the NFS mounts were unmounted.
It is totally possible that this is caused by the violation in proper ordering I introduced by the above mentioned hack, but I did not want to go back to slow booting. So I added another hack:

  • create a small script /etc/init.d/before-halt.local which just does umount -a -t nfs -l (a lazy unmount)
  • create a systemd service file /etc/systemd/system/before-halt-local.service which is basically copied from the halt-local.service, then edited to have Before=shutdown.target instead of After=shutdown.target and to refer to the newly created before-halt.local script. Of course I could have skipped the script, but I might later need to add other stuff, so this is more convenient.
  • create the directory /etc/systemd/system/shutdown.target.wants and symlink ../before-halt-local.service into it.
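The resulting unit file could look roughly like this (a sketch modeled on halt-local.service; exact contents may differ on your system, and the manual symlink in shutdown.target.wants is what actually activates it):

```ini
# /etc/systemd/system/before-halt-local.service (sketch)
[Unit]
Description=Lazy-unmount NFS shares before shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/etc/init.d/before-halt.local
TimeoutSec=0
```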
And voilà: before all the shutdown stuff starts, the NFS mounts are lazily unmounted and shutdown commences quickly.


I was never very fond of dracut, but I did not think it would be so utterly untested: openSUSE Bug #906592. Executive summary: hibernate will most likely silently corrupt (at least) your root filesystem during resume from disk.
If you are lucky, a later writeback from buffers/cache will "fix" it, but the way dracut resumes the system is definitely broken. I already had the filesystem corrupted on my test VM while investigating the issue, so this is not only a theoretical problem.

Until this bug is fixed: Do not hibernate on openSUSE 13.2.

Good luck!


This is an attempt to make a list of things that a person or a group of people can follow to develop a healthy community or team. This post is an overview of what I did with Kostas for the Greek openSUSE community.
One small detail: there were only two of us, so we took decisions fast; we didn't have to vote on anything.
We also had an "advantage": we have an awesome global community, and we could ask it whenever we weren't sure how to proceed.

Let's start:

0. Have a clear goal: know what you want to do. Have a big goal, even if some parts of it aren't "visible" when you start.
1. Web page: this is the web page or blog that will show information about the community, the distro or the project. Make it visible on planets. BE CAREFUL: don't focus on building a great site or blog using a personal WordPress, Drupal etc. Set it up on Blogger and start posting articles. You want CONTENT (write an article every other day). Don't spend time maintaining or securing your web page.
2. Mailing list: ask the project if they can set one up for you. If not, try to find alternatives such as Google Groups.
3. IRC Channel
4. Forum: prefer to ask the project to set up a section for your language. If your project doesn't have a forum, then ask a LUG or tech forum to use theirs. Do not host the forum yourself, for the same reasons as before: don't spend time maintaining or securing the forum.

The above list is what you MUST have to start.
A key to everything is to try to have all information in your language, so it'll be "attractive" to people who like the idea of open source but don't speak English. What's the role of such people? They can organize local events.

The next step is to advertise the whole project or distro. This can happen in several ways:
1. Write to blogs and forums (technological or not).
2. Create a Facebook group/page and advertise your effort to other groups/pages.
3. Create a Twitter account and tweet news about your community.
4. Create a Google Plus profile/community.
5. Contact the press: first local, then national.
6. If the project has a newsletter or weekly magazine, it's good to translate it (or a part of it), so the open source community in your country will learn about you and your projects.

Before deciding which social media accounts to create, be aware that you will have to maintain them. So search the web for which social media are most popular with users. For "tech" users, Google Plus communities are a perfect place; they can even be used instead of forums.

A distro or project is not all about writing code; it's also about having fun. So advertise that too.
1. Release parties. When a new release is out, it's time to party.
2. Meet ups. A good place to organize them is http


I'm happy to announce the release of mylvmbackup version 0.16. The source package is now available for download from http://lenzg.net/mylvmbackup/ and https://launchpad.net/mylvmbackup.

Installation packages for a number of platforms can be obtained from the openSUSE Build Service.

Version 0.16 adds support for sending out SNMP traps in case of backup successes or failures. I'd like to thank Alexandre Anriot for contributing this new feature and his patience with me.

Please see the ChangeLog and bzr history for more details.

20 November, 2014


On the Nokia N900, PulseAudio is needed to have a correct call. Unfortunately, that piece of software fights back.

pavel@n900:~$ pulseaudio --start
N: [pulseaudio] main.c: User-configured server at {d3b6d0d847a14a3390b6c41ef280dbac}unix:/run/user/1000/pulse/native, refusing to start/autospawn.

Ok, I'd really like to avoid the complexity of user sessions here. Let me try as root.

root@n900:/home/pavel# pulseaudio --start
W: [pulseaudio] main.c: This program is not intended to be run as root (unless --system is specified).
N: [pulseaudio] main.c: User-configured server at {d3b6d0d847a14a3390b6c41ef280dbac}unix:/run/user/1000/pulse/native, refusing to start/autospawn.

Ok, I don't need per-user sessions, this is a cellphone. Let's specify --system.

root@n900:/home/pavel# pulseaudio --start --system
E: [pulseaudio] main.c: --start not supported for system instances.

Yeah, ok.

root@n900:/home/pavel# pulseaudio --system
W: [pulseaudio] main.c: Running in system mode, but --disallow-exit not set!
W: [pulseaudio] main.c: Running in system mode, but --disallow-module-loading not set!
N: [pulseaudio] main.c: Running in system mode, forcibly disabling SHM mode!
N: [pulseaudio] main.c: Running in system mode, forcibly disabling exit idle time!
W: [pulseaudio] main.c: OK, so you are running PA in system mode. Please note that you most likely shouldn't be doing that.
W: [pulseaudio] main.c: If you do it nonetheless then it's your own fault if things don't work as expected.
W: [pulseaudio] main.c: Please read http://pulseaudio.org/wiki/WhatIsWrongWithSystemMode for an explanation why system mode is usually a bad idea.

Totally my fault that someone forgot to document this pile of code. Thanks for blaming me. I'd actually like to read what is wrong with that, except that the referenced page does not exist. :-(

Michael Meeks: 2014-11-20: Thursday

21:00 UTC

  • Into Cambridge, meeting with Lauren, Laura & Rob. Quarterly mgmt meetings, snatched lunch, board meeting; poked about in the server room; caught up with Daniel and discussed OpenGL funkiness.
  • Train home, stories, dinner, unscrewed various bits, heated the car bumper with a hair dryer (with the air inlet covered to reduce airflow & increase temperature) to make the plastic malleable; pushed out the worst of the dents, glued up the reflectors: much better. Bed.


This is my fourth post on Service Design Patterns in Rails. These posts have been inspired by the Service Design Patterns book by Robert Daigneau.

The previous posts were:

The Web Service Evolution patterns talk about what to do to make an API evolve with the minimum of breaking changes. What does that mean? It means that changing your API should not break the clients that consume it; in other words, that your API is backward compatible.

From this chapter, three patterns apply to the Rails framework:

  • Tolerant Reader
  • Consumer driven contract
  • Versioning

Tolerant Reader refers to the client: if you write a client, you should be "tolerant" about what you get as a response. Rails provides the ActiveResource object, which takes the response and creates an object from it. This object per se is very tolerant.

However, only using that object does not make you a tolerant reader. ActiveResource will create a model object like ActiveRecord does. For example, you can have a Product object that inherits from ActiveResource::Base, the same way you would with ActiveRecord:

  class Product < ActiveResource::Base

ActiveResource will create an attribute for each attribute found in the response. Thus, if you expect a "name" attribute, you would try:

  p = Product.find(1)
  p.name

However, what if name is not in the response? Then, if you really want to be tolerant, you should do:

  p = Product.find(1)
  p.name if p.respond_to?(:name)

Thus, with little effort you can have a tolerant reader, even though Rails per se does not fully implement it.

Consumer Driven Contract means testing. Actually, it means having a test suite, provided by your consumer, describing what it expects from the service. Testing is a common good practice in Rails, and Rails provides you with a lot of resources for it (RSpec, factory_girl, stubs, mocks). You only need to use them to specify what you expect from the API, or what a consumer should expect, depending on whether you are writing the producer or the consumer.

And finally, Versioning. Versioning is a very good practice for an API. How to implement versioning is very well explained in this Railscast, better than I could do myself, so I'd recommend watching it to see how to use namespaces in the routes file and in the controller implementation:
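The gist of it (a sketch; the module and resource names are made up for illustration) is to namespace the routes and scope the controllers accordingly:

```ruby
# config/routes.rb -- hypothetical sketch of a versioned API
Rails.application.routes.draw do
  namespace :api do
    namespace :v1 do
      resources :products, only: [:index, :show]
    end
    namespace :v2 do
      resources :products, only: [:index, :show]
    end
  end
end

# The matching controller then lives under app/controllers/api/v1/:
#   class Api::V1::ProductsController < ApplicationController
#     ...
#   end
```

This way /api/v1/products and /api/v2/products can evolve independently without breaking existing clients.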

Web Service Evolution patterns are partly implemented in Rails. However, you need to do some of the implementation yourself in order to follow them, because the typical example (the one you would generate with "rails generate") does not implement them, but gives you the tools or the base so that you can do it.
For example, versioning is not implemented when you generate a new Rails application, but you can easily do it with namespacing; tests are not implemented, but Rails gives you the tools to write them; tolerant reader can be implemented by using ActiveResource and respond_to?


The last few days I played with lcms' unbound mode. In unbound mode the CMM can convert colours with negative numbers. That allows one to use, for instance, the LMS colour space, a colour space very basic to the human visual system. Unbound RGB (linear gamma with sRGB primaries) has also circulated for a long time as the new one-covers-all colour space, a kind of replacement for ICC or WCS style colour management. There are some reservations about that claim, as linear RGB is most often understood as "no additional info needed", which is not easy to build a flexible CMS upon.

During the last days I hacked lcms to write the mpet tag in its device link profiles in order to work inside the Oyranos CMS. The multi processing elements tag type (mpet) contains the internal state of lcms' transform as a rendering pipeline. This pipeline is able to do unbound colour transforms, if no table based elements are included. The tested device link contained single gamma values and matrices in its D2B0 mpet tag.

The Oyranos image-display application rendered my LMS test pictures correctly, in contrast to the 16-bit integer version. However, the speed decreased by a factor of ~3 with lcms compared to the usual integer math transforms. The most time consuming part might be the pow() call in the equation. It is possible that GPU conversions are much faster, only I am not aware of an implementation of mpet transforms on the GPU.


For quite some time I have been pretty confident that Weblate will need a UI rewrite at some point. This is always a problematic thing for me, as I'm in no way a UI designer, and thus I always hope that somebody else will do it. Anyway, I spent a few hours on the train home from LinuxTag to see what I could do about it.

The first choice for me was to try Twitter Bootstrap, as I've had quite a good experience using it for UI at work, so I hoped it would work quite well for Weblate too. The first steps went quite nicely, so I could share the first screenshots on Twitter and continue working on it.

After a few days, I'm quite happy with the basic parts of the interface, though the most important things (e.g. the page for translating) are still missing. But I think it's a good time to ask for initial feedback.

The main motivation was to unify the two-tab layout used on the main pages, which turned out to be quite confusing, as most users did not really get to the bottom part of the page and thus did not find the important functions available there. Now all functions are accessible from the top page navigation, either directly or from a menu.

I've also decided to use colors a bit more to indicate the important things. So the progress bars are more visible now (and the same progress bar now indicates the status of translation per words). The quality checks also got severities, which in turn are used to highlight the most critical ones. The theme will probably change a bit (so far it's using the default theme, as I did not care much about changing that).

So let's take a look at following screenshot and let me know your thoughts:

Number of applications over time

You can also try that yourself, everything is developed in the bootstrap branch in our Git repository.

Filed under: English phpMyAdmin SUSE Weblate | 4 comments | Flattr this!

Michal Čihař: Weblate 1.9

16:00 UTC


Weblate 1.9 has been released today. It comes with a lot of improvements and bug fixes, and with an experimental Zen mode for editing translations.

Full list of changes for 1.9:

  • Django 1.6 compatibility.
  • No longer maintained compatibility with Django 1.4.
  • Management commands for locking/unlocking translations.
  • Improved support for Qt TS files.
  • Users can now delete their account.
  • Avatars can be disabled.
  • Merged first and last name attributes.
  • Avatars are now fetched and cached server side.
  • Added support for shields.io badge.

You can find more information about Weblate on http://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Ready-to-run appliances will soon be available in the SUSE Studio Gallery.

Weblate is also being used at https://l10n.cihar.com/ as the official translation service for phpMyAdmin, Gammu, Weblate itself and others.

If you run a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!



Same as in past years, I'm attending FOSDEM 2014. This is the best opportunity to meet the free software world in Europe and get in touch with people you otherwise know only from mailing lists.

If you want to meet me in person and discuss anything, just get in touch with me and we'll arrange it.


Michal Čihař: Weblate 1.8

16:00 UTC


Weblate 1.8 has been released today. It comes with a lot of improvements, especially in the registration process, where you can now use many third party services.

Full list of changes for 1.8:

  • Please check manual for upgrade instructions.
  • Nicer listing of project summary.
  • Better visible options for sharing.
  • More control over anonymous users privileges.
  • Supports login using third party services, check manual for more details.
  • Users can login by email instead of username.
  • Documentation improvements.
  • Improved source strings review.
  • Searching across all units.
  • Better tracking of source strings.
  • Captcha protection for registration.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Ready-to-run appliances will soon be available in the SUSE Studio Gallery.

Weblate is also being used at https://l10n.cihar.com/ as the official translation service for phpMyAdmin, Gammu, Weblate itself and others.

If you run a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!



Thanks to the great amount of changes I was able to do in Weblate during Hackweek, the 1.8 release is quite close.

All the features I wanted are implemented, and it has already been running for some time on my production servers, where it looks quite stable. The only thing which still needs some improvement is the translations. So that's your chance to contribute.

Translation status

If there is no blocking issue, Weblate 1.8 will be released during the next week.



I noticed lots of spam in my system logs:

20141120-05:15:01.9 systemd[1]: Starting user-30.slice.
20141120-05:15:01.9 systemd[1]: Created slice user-30.slice.
20141120-05:15:01.9 systemd[1]: Starting User Manager for UID 30...
20141120-05:15:01.9 systemd[1]: Starting Session 1817 of user root.
20141120-05:15:01.9 systemd[1]: Started Session 1817 of user root.
20141120-05:15:01.9 systemd[1]: Starting Session 1816 of user wwwrun.
20141120-05:15:01.9 systemd[1]: Started Session 1816 of user wwwrun.
20141120-05:15:01.9 systemd[22292]: Starting Paths.
20141120-05:15:02.2 systemd[22292]: Reached target Paths.
20141120-05:15:02.2 systemd[22292]: Starting Timers.
20141120-05:15:02.2 systemd[22292]: Reached target Timers.
20141120-05:15:02.2 systemd[22292]: Starting Sockets.
20141120-05:15:02.2 systemd[22292]: Reached target Sockets.
20141120-05:15:02.2 systemd[22292]: Starting Basic System.
20141120-05:15:02.2 systemd[22292]: Reached target Basic System.
20141120-05:15:02.2 systemd[22292]: Starting Default.
20141120-05:15:02.2 systemd[22292]: Reached target Default.
20141120-05:15:02.2 systemd[22292]: Startup finished in 21ms.
20141120-05:15:02.2 systemd[1]: Started User Manager for UID 30.
20141120-05:15:02.2 CRON[22305]: (wwwrun) CMD (/usr/bin/php -f /srv/www/htdocs/owncloud/cron.php)
20141120-05:15:02.4 systemd[1]: Stopping User Manager for UID 30...
20141120-05:15:02.4 systemd[22292]: Stopping Default.
20141120-05:15:02.4 systemd[22292]: Stopped target Default.
20141120-05:15:02.4 systemd[22292]: Stopping Basic System.
20141120-05:15:02.4 systemd[22292]: Stopped target Basic System.
20141120-05:15:02.4 systemd[22292]: Stopping Paths.
20141120-05:15:02.4 systemd[22292]: Stopped target Paths.
20141120-05:15:02.4 systemd[22292]: Stopping Timers.
20141120-05:15:02.4 systemd[22292]: Stopped target Timers.
20141120-05:15:02.4 systemd[22292]: Stopping Sockets.
20141120-05:15:02.4 systemd[22292]: Stopped target Sockets.
20141120-05:15:02.4 systemd[22292]: Starting Shutdown.
20141120-05:15:02.4 systemd[22292]: Reached target Shutdown.
20141120-05:15:02.4 systemd[22292]: Starting Exit the Session...
20141120-05:15:02.4 systemd[22292]: Received SIGRTMIN+24 from PID 22347 (kill).
20141120-05:15:02.4 systemd[1]: Stopped User Manager for UID 30.
20141120-05:15:02.4 systemd[1]: Stopping user-30.slice.
20141120-05:15:02.4 systemd[1]: Removed slice user-30.slice.

This is a server-only system, so I investigated what was starting and tearing down a systemd user instance for every cron job, every user login etc.
After some searching, I found that pam_systemd was to blame: it seems to be enabled by default. Looking into the man page of pam_systemd, I could not find anything that would be useful on a server system, so I disabled it, and pam_gnome_keyring as well while I was at it:
pam-config --delete --gnome_keyring --systemd
...and silence returned to my logfiles again.

19 November, 2014

Michael Meeks: 2014-11-19: Wednesday

21:00 UTC

  • Into Cambridge taking a machine to return to the server room. Quarterly Productivity Mgmt meetings, frantically prepared slides, crunched numbers etc. Meetings much of the day with a partner meeting in the middle.

18 November, 2014

Michael Meeks: 2014-11-18: Tuesday

21:00 UTC

  • Chewed away at master, GL rendering, texture and context lifecycle, chased misc. rendering problems. Mail catchup. Worked late.


Weblate started as a translation system tightly bound to the Git version control system. This was by no means a design decision; rather, it was simply the version control system I used. But this has shown not to be sufficient, and other systems were requested as well. Mercurial is the first of them to be supported.

Weblate 2.0 already had a separated VCS layer, and adding another system to it is quite easy if you know the VCS you're adding. Unfortunately this wasn't the case for me with Mercurial, as I've never used it for anything more serious than cloning a repository, committing fixes and pushing them back. Weblate needs a bit more than that, especially in regard to remote branches. But nevertheless I've figured out all the operations, and the implementation is ready in our Git repository.

In case somebody is interested in adding support for another version control system, patches are always welcome!


17 November, 2014

Michael Meeks: 2014-11-17: Monday

21:00 UTC

  • Whiled away the very late night / early morning prototyping some Alpha recovery approach to try to drag legacy gtk2 theming into something we can use for GL rendering without losing more hair; render the captive widget first to a white background, then to a black one - and do some math to extract the alpha & original pixels; what could go wrong.
  • Slept an hour on the coach, arrived home eventually at 8am, got a few hours of shut-eye, team meeting(s), paperwork. Dinner with the babes, read various stories.
  • Back to the hacking; discovered that calling wglMakeCurrent on an already active GL Context is unfeasibly slow - whereas checking via wglGetCurrentContext first to avoid re-setting it gave a two to three orders of magnitude performance improvement: hmm.


gcc tried to help me with figuring out pulseaudio-module-cmtspeech-n9xx compilation... It says:

/lib/x86_64-linux-gnu/libz.so.1: error adding symbols: DSO missing from command line

To decrypt it, you should understand that a "DSO" is a library. So it wants you to add /lib/x86_64-linux-gnu/libz.so.1 to the command line you are using to link. It took me a while to figure out...
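In other words, the fix is to name the library explicitly at the end of the link command; assuming the unresolved symbols come from zlib, something like this (the object file names are illustrative):

```sh
# Before (fails with "DSO missing from command line"):
#   gcc -o cmtspeech module.o helper.o
# After: explicitly link the library whose symbols are needed
gcc -o cmtspeech module.o helper.o -lz
```

The error shows up with newer toolchains that no longer let transitively loaded libraries satisfy symbols, so each needed library has to appear on the link line itself.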


Mozilla announced a few days ago a new flavour of the Firefox browser called Firefox Developer Edition. This new Firefox edition in fact replaces the previous Aurora version. Just to add some background on how it was structured until a few days ago:

  • Nightly – nightly builds of mozilla-central which is basically Mozilla’s HEAD
  • Aurora – regular builds published for openSUSE under mozilla:alpha
  • Beta – weekly builds for openSUSE under mozilla:beta
  • Release – full stable public releases as shipped as end user product for openSUSE under mozilla and in Factory/Tumbleweed

There is a 6 weeks cycle where the codebase goes from Nightly to Aurora to Beta to Release while it stabilizes.

Now that Aurora is replaced by Firefox Developer Edition, I am also changing the way those are delivered to openSUSE users:

  • Nightly – there are no RPMs provided. People who want to run it can/should grab an upstream tarball
  • Firefox Developer Edition – now available as package firefox-dev-edition from the mozilla repository
  • Beta – no changes, available (as long time permits) from mozilla:beta
  • Release – no changes, available in mozilla and submitted to openSUSE Factory / Tumbleweed

A few more notes on the Firefox Developer Edition RPMs for openSUSE:

  • it’s brand new so please send me feedback about packaging issues
  • it can be installed in parallel to Release or Beta and is started as firefox-dev and is using a different profile unless you change that default; therefore it can even run in parallel with regular Firefox
  • it carries most of the openSUSE specific patches (including KDE integration)
  • it currently has NO branding packages and therefore does not use exactly the same preferences as the openSUSE provided Firefox so it behaves like Firefox when installed with MozillaFirefox-branding-upstream
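In practice, installing it should boil down to something like this (the repository path is my assumption for openSUSE 13.2; adjust it for your distro version):

```sh
# Add the mozilla repository and install Firefox Developer Edition
zypper ar -f http://download.opensuse.org/repositories/mozilla/openSUSE_13.2/mozilla.repo
zypper in firefox-dev-edition
# Started as its own binary, with its own profile:
firefox-dev
```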


16 November, 2014

Michael Meeks: 2014-11-16: Sunday

21:00 UTC

  • Off to the venue, more meetings, hacking on misc OpenGL-ness with Markus; plugged away at various irritating bugs, helped to explain the recent exciting revival in the LibreOffice UX team - worth giving it another try. Out for a swift beer, and on to the airport with Markus. Flights, missed bus; waited at Heathrow until 5am for a coach.

This is the third post on Service Design Patterns in Rails. The first two were:

In this series of posts, I am trying to map the patterns described in the Service Design Patterns book by Robert Daigneau to a typical Rails application.

So far, I've shown that Rails already implements some of the patterns described in the book, and thus, by only using the Rails framework, you are already following those patterns. This means you already follow certain good practices for implementing services without actually knowing about them.

However, we've also seen that some patterns are not implemented by default. The common thread among them is that they are related to the response: Response Mapper, Response Data Transfer Object Mapper, and Linked Service.

The chapter about Web Service Implementation Styles covers five patterns:

  • Transaction script
  • Datasource adapter
  • Operation script
  • Command invoker
  • Workflow connector

The Transaction Script pattern means writing a service that accesses the database directly, without any helper or model. This is not the Rails way, since in Rails we access the database through a domain model object. Personally, I think it is good that this pattern is not in the Rails framework; I was surprised to find this pattern in the book.

Datasource Adapter means having a class that gets the data from the database, making the controller unaware of, for example, whether the data is stored in a PostgreSQL or SQLite database, and managing metadata like the relationships between different entities (i.e. one-to-many, belongs-to, many-to-many). Does it sound familiar? Yes! This looks like ActiveRecord, doesn't it?

Operation Script means delegating work to a class that can be used by different web services. Translated to Rails, this means implementing features in the model so that different controllers can reuse them.
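As a plain-Ruby sketch of that idea (the Order class and its data are made up for illustration), the shared operation lives on the model and each controller just calls it:

```ruby
# Hypothetical sketch of the Operation Script pattern: the business logic
# is implemented once on the domain model, so any controller/web service
# can reuse it instead of duplicating it.
class Order
  def initialize(items)
    @items = items
  end

  # The shared operation: both a JSON API controller and an HTML admin
  # controller would call this same method.
  def total_price
    @items.sum { |item| item[:price] * item[:quantity] }
  end
end

order = Order.new([{ price: 10, quantity: 2 }, { price: 5, quantity: 1 }])
puts order.total_price  # prints 25
```

In a real Rails application, Order would inherit from ActiveRecord::Base and the controllers would simply render the result of the shared method.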

Command Invoker means having a class that can be either called from a web service or put onto a queue to be run later. For example, a class with a perform method can be run by calling the perform method directly, or queued onto a Delayed Job queue. This may not be the preferred way of using Delayed Job, but it certainly can be done (see Custom Jobs). This class will then call domain model methods.
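A minimal sketch of such a command object (the class name and behavior are made up; Delayed Job calls #perform on dequeued objects):

```ruby
# Hypothetical Command Invoker: the same object can be run synchronously
# by a controller, or pushed onto a queue to be run later.
class ArchiveOrderJob
  def initialize(order_id)
    @order_id = order_id
  end

  # Entry point for both the web service and the queue worker; a real
  # implementation would call domain model methods here.
  def perform
    "archived order #{@order_id}"
  end
end

# Synchronous call, as a controller action would do:
puts ArchiveOrderJob.new(42).perform  # prints "archived order 42"
# Queued variant (requires the delayed_job gem):
#   Delayed::Job.enqueue(ArchiveOrderJob.new(42))
```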

Workflow Connector is something more complicated. It means designing a workflow where each step can be a web service. However, you need to maintain information about the state and be able to roll back (typically done by compensating, i.e. by calling a web service that performs the inverse operation).

I don't think you can implement a workflow in Rails with what the framework provides, nor with the typical gems. From what I know, you will need additional gems/tools.

So far, I've seen that Rails implements the patterns for a basic REST API, which is cool. However, when it comes to more complex workflows where the services you implement are part of a bigger services infrastructure, Rails does not implement patterns


A few days ago I told Andrew Wafaa I’d write up some notes for him and publish them here. I became hungry contemplating this work, so decided cooking was the first order of business:

Salt and Pepper Squid with Fresh Greens

It turned out reasonably well for a first attempt. Could’ve been crispier, and it was quite salty, but the pepper and chilli definitely worked (I’m pretty sure the chilli was dried bhut jolokia I harvested last summer). But this isn’t a post about food, it’s about some software I’ve packaged for managing Ceph clusters on openSUSE and SUSE Linux Enterprise Server.

Specifically, this post is about Calamari, which was originally delivered as a proprietary dashboard as part of Inktank Ceph Enterprise, but has since been open sourced. It’s a Django app, split into a backend REST API and a frontend GUI implemented in terms of that backend. The upstream build process uses Vagrant, and is fine for development environments, but (TL;DR) doesn’t work for building more generic distro packages inside OBS. So I’ve got a separate branch that unpicks the build a little bit, makes sure Calamari is installed to FHS paths instead of /opt/calamari, and relies on regular packages for all its dependencies rather than packing everything into a Python virtualenv. I posted some more details about this to the Calamari mailing list.

Getting Calamari running on openSUSE is pretty straightforward, assuming you’ve already got a Ceph cluster configured. In addition to your Ceph nodes you will need one more host (which can be a VM, if you like), on which Calamari will be installed. Let’s call that the admin node.

First, on every node (i.e. all Ceph nodes and your admin node), add the systemsmanagement:calamari repo (replace openSUSE_13.2 with your actual distro version):

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/calamari/openSUSE_13.2/systemsmanagement:calamari.repo

Next, on your admin node, install and initialize Calamari. The calamari-ctl command will prompt you to create an administrative user, which you will use later to log in to Calamari.

# zypper in calamari-clients
# calamari-ctl initialize

Third, on each of your Ceph nodes, install, configure and start salt-minion (replace CALAMARI-SERVER with the hostname/FQDN of your admin node):

# zypper in salt-minion
# echo "master: CALAMARI-SERVER" > /etc/salt/minion.d/calamari.conf
# systemctl enable salt-minion
# systemctl start salt-minion

Now log in to Calamari in your web browser (go to http://CALAMARI-SERVER/). Calamari will tell you your Ceph hosts are requesting they be managed by Calamari. Click the “Add” button to allow this.
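If you prefer the command line, the same authorization step should also be possible on the admin node with salt-key, since Calamari drives Salt underneath (a sketch, not part of the original instructions):

```shell
# List pending, accepted and rejected minion keys
salt-key -L
# Accept all pending minion keys (you will be asked to confirm)
salt-key -A
```

After accepting the keys, the hosts should show up as managed in the Calamari UI.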


Once that’s complete, click the “Dashboard” link at the top to view the cluster status. You should see something like this:


And you’re done. Go explore. You might like to put some load on your cluster and see what the performance graphs do.

Concerning ceph-deploy

The instructions above have you manually installing and configuring salt-minion on each node. This isn’t too much of a pain, but is even

15 November, 2014


After talking to colleagues about how easy it is to contribute to the Linux Kernel simply by reporting a bug, I was actually wondering why I was the first and apparently the only one to hit this bug.
So there are two possible reasons:

  • nobody is testing -rc kernels
  • nobody is using PPP anymore
To be honest, I'm also only using PPP for some obscure VPN, but I would have expected it to be in wider use due to UMTS/3G cards and the like. So is nobody testing -rc kernels? That would indeed be bad...

Michael Meeks: 2014-11-15: Saturday

21:00 UTC

  • Sleep; to the venue, some hackery, met with various interesting people in the French LibreOffice scene, great to see another Michael hacking away at easy-hacks, and lots of interest. Beer & nibbles event in the evening, out rather late at a sandwich on a block type place.


openSUSE 13.2 is out. This is the first version in which MATE is officially included. Unfortunately, there's no option to install it as a whole, but you can install it by following these instructions.

Initial Installation

1. First of all, download the NET installation ISO or the DVD (prefer the NET installation, because it will be faster to download).

2. Boot your computer from USB (NET installation) or DVD. You may have to check your BIOS settings for that. Click Installation and wait.



3. Accept the Licence. Choose English (NOT your native language).


4. The next step is system probing.

System Probing

5. A new part of the installation procedure is this question: do you want to add extra repositories? Do you want to use add-on products?

Extra repositories

6. Here is the partitioning. The default filesystem for the root partition is Btrfs.


You can change anything you want by clicking on Expert Partitioner.


7. The next step is the time zone. Set the correct time zone.

Time zone

8. Here, choose Minimal X Window. This will install IceWM.

Install Minimal X Window

9. Enter the name, username and password you want to use.


If the password is weak, it'll warn you.

Weak password

10. All set. You're ready to install.

Here is an overview.
Install overview

Are you sure you want to install?

11. Wait until it's over.





12. Restart and wait for the login screen.



Login Screen

MATE Installation

The environment is pretty ugly. You can install MATE via ncurses YaST or from the terminal.

Here are some images from ncurses YaST.

YaST MATE Install


Or, better, use zypper. Here's what you have to install:

zypper in gnome-main-menu mate-backgrounds mate-control-center mate-dialogs caja mate-icon-theme mate-notification-daemon mate-polkit marco mate-session-manager mate-settings-daemon mate-desktop mate-panel caja-image-converter caja-open-terminal caja-sendto caja-share dconf-editor mate-dictionary mate-disk-usage-analyzer mate-icon-theme-faenza mate-netspeed mate-screenshot mate-search-tool mate-sensors-applet mate-system-log mate-user-share mozo python-caja atril engrampa eom gucharmap mate-applets mate-calc mate-power-manager mate-media mate-screensaver mate-system-monitor mate-terminal mate-themes mate-menus atril-caja caja-engrampa marco-themes mate-common mate-icon-theme-faenza-dark mate-icon-theme-faenza-gray mate-indicator-applet patterns-openSUSE-mate_basis pluma

According to Benjamin, it's easier to install it using a pattern. There wasn't a pattern when I was installing MATE, but it will be available soon. So use the following command to install the basic system (you should install some extra applications on top):

zypper in -t pattern mate_basis

Set up the Window Manager:

In YaST, go to System-->/etc/sysconfig Editor.

System /etc/sysconfig Editor

Go to Desktop-->Window Manager-->DEFAULT_WM and set it to mate-session.


Personally, I like LightDM as display manager. So install it:

zypper in lightdm

Then set it up in YaST. Go to Desktop-->Display Manager-->DISPLAYMANAGER and write lightdm.
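If you prefer the shell over YaST, the same two sysconfig changes can be sketched with sed (assuming the stock openSUSE sysconfig layout; take a backup of the files first):

```shell
# Set the default window manager (same as DEFAULT_WM in the YaST sysconfig editor)
sed -i 's/^DEFAULT_WM=.*/DEFAULT_WM="mate-session"/' /etc/sysconfig/windowmanager
# Set the display manager (same as DISPLAYMANAGER in the YaST sysconfig editor)
sed -i 's/^DISPLAYMANAGER=.*/DISPLAYMANAGER="lightdm"/' /etc/sysconfig/displaymanager
```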


Now restart and you'll see the following login screen. Don't forget to change the session to MATE (as you see here).


The final result is promising. If the Sonar theme isn't the default, you can easily change to it.

openSUSE MATE with sonar theme

Now you have the full MATE Desktop Environment. For the rest of the programs (Firefox, LibreOffice, etc.), go ahead and install them.

For more information, check our portal: https://en.opensuse.org/Portal:MATE and contact us with any questions you have.

Check the video:


We've seen the first tutorial on how to install MATE on openSUSE 13.2, and then a second one about installing MATE using the NET installation.

Although there's now a pattern to install, you can also install MATE using the following script, which will set up the display manager as well. In the next release, I guess it will be an option during installation on the DVD (or NET install).

1. Perform the installation as described here from 1-12.

2. Open your terminal and download the script mate.sh

3. Make the file mate.sh executable.

chmod +x mate.sh

4. Run mate.sh


Answer y (yes) to every question.

Here's what's inside the script.

1. Install the necessary programs.

sudo zypper in gnome-main-menu mate-backgrounds mate-control-center mate-dialogs caja mate-icon-theme mate-notification-daemon mate-polkit marco mate-session-manager mate-settings-daemon mate-desktop mate-panel caja-image-converter caja-open-terminal caja-sendto caja-share dconf-editor mate-dictionary mate-disk-usage-analyzer mate-icon-theme-faenza mate-netspeed mate-screenshot mate-search-tool mate-sensors-applet mate-system-log mate-user-share mozo python-caja atril engrampa eom gucharmap mate-applets mate-calc mate-power-manager mate-media mate-screensaver mate-system-monitor mate-terminal mate-themes mate-menus atril-caja caja-engrampa marco-themes mate-common mate-icon-theme-faenza-dark mate-icon-theme-faenza-gray mate-indicator-applet patterns-openSUSE-mate_basis pluma lightdm git

2. It will delete /etc/sysconfig/displaymanager and download my working displaymanager file (with lightdm as default) from my git repository.

git clone https://github.com/iosifidis/13.2.git

It will copy the file to /etc/sysconfig.

3. After that, it'll delete all downloaded files.

4. Reboot. If everything went OK, you'll see the following screen. Change the session to MATE and you're all set.
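The steps above can be sketched as a script like this (a simplified reconstruction, not the actual mate.sh: the package list is shortened to the pattern, and the file name inside the git repository is assumed):

```shell
#!/bin/sh
# Simplified sketch of what mate.sh does -- not the actual script.
set -e

# 1. Install MATE, LightDM and git (the real script lists all packages explicitly)
sudo zypper in -t pattern mate_basis
sudo zypper in lightdm git

# 2. Replace the displaymanager sysconfig file with the one from the git repo
sudo rm -f /etc/sysconfig/displaymanager
git clone https://github.com/iosifidis/13.2.git
sudo cp 13.2/displaymanager /etc/sysconfig/   # file name inside the repo is assumed

# 3. Clean up the downloaded files
rm -rf 13.2

echo "Done. Reboot and choose the MATE session."
```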

Login Screen


I had looked into rsyslog years ago, when it became the default in openSUSE, and for some reason I do not remember anymore, I did not really like it. So I stayed with syslog-ng.
Many people will not care what is taking care of their syslog messages, but since I had made a few customizations to my syslog-ng configuration, I needed to adapt those to rsyslog.

Now with Bug 899653 - "syslog-ng does not get all messages from journald, journal and syslog-ng not playing together nicely" which made it into openSUSE 13.2, I had to reconsider my choice of syslog daemon.

Basically, my customizations to syslog-ng config are pretty small:

  • log everything from VDR in a separate file "/var/log/vdr"
  • log everything from dnsmasq-dhcp in a separate file "/var/log/dnsmasq-dhcp"
  • log stuff from machines on my network (actually usually only a VOIP telephone, but sometimes some embedded boxes will send messages via syslog to my server) in "/var/log/netlog"
So I installed rsyslog -- which due to package conflicts removes syslog-ng -- and started configuring it to do the same as my old syslog-ng config had done. Important note: after changing the syslog service on your box, reboot it before doing anything else. Otherwise you might be chasing strange problems, and just rebooting is faster.

Now to the config: I did not really like the default time format of rsyslog:
2014-11-10T13:30:15.425354+01:00 susi rsyslogd: ...
Yes, I know that this is a "good" format: easy to parse, unambiguous, clear. But it is usually me reading the logs, and I still hate it, because I do not need microsecond precision, I do know which timezone I'm in, and it uses half of a standard terminal width if I don't scroll to the right.
So the first thing I changed was to create /etc/rsyslog.d/myformat.conf with the following content:
$template myFormat,"%timegenerated:1:4:date-rfc3339%%timegenerated:6:7:date-rfc3339%%timegenerated:9:10:date-rfc3339%-%timegenerated:12:21:date-rfc3339% %syslogtag%%msg%\n"
$ActionFileDefaultTemplate myFormat
This changes the log format to:
20141110-13:54:23.0 rsyslogd: ...
Which means the time is shorter, can still be parsed and has sub-second precision; the hostname is gone (which might be bad for the netlog file, but I don't care) and it's 12 characters shorter.
It might well be possible to do this in an easier fashion; I'm not an rsyslog wizard at all (yet) ;)

For /var/log/vdr and /var/log/dnsmasq-dhcp, I created the config file /etc/rsyslog.d/myprogs.conf, containing:
if $programname == 'dnsmasq-dhcp' then {
    action(type="omfile" file="/var/log/dnsmasq-dhcp")
    stop
}
if $programname == 'vdr' then {
    action(type="omfile" file="/var/log/vdr")
    stop
}
That's it! It's really straightforward, I really can't understand why I hated rsyslog years ago :)

The last thing missing was the netlog file, handled by /etc/rsyslog.d/mynet.conf:
$ModLoad imudp.so # provides UDP syslog reception
$UDPServerRun 514 # start syslog server, port 514
if $fromhost-ip startswith '192.168.' then {
    action(type="omfile" file="/var/log/netlog")
    stop
}
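A quick way to check the program-based rules after restarting rsyslog is to inject a tagged test message with logger and see whether it lands in the expected file (a sketch; run as root on the configured box):

```shell
# Send a test message tagged as coming from vdr, then check that it
# ended up in the dedicated log file rather than the default one
logger -t vdr "rsyslog filter test"
tail -n 1 /var/log/vdr
```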

14 November, 2014

Michael Meeks: 2014-11-14: Friday

21:00 UTC

  • Breakfast, piano / sight-reading practice with H. Early train to Gatwick - hacked away at OpenGL rendering. Fixed 4bit / palette bitmap load / rendering. Significantly expanded vcldemo Bitmap rendering tests. Arrived, met up with Markus & Bjoern at the hotel, then with Arnaud - to the venue & out for a later meal with lots of cheery conference types.

Older blog entries ->