
the avatar of Stephen Shaw

Strengths Finder 2.0

I was at an Agile Roundtable not too long ago and someone was talking up Strengths Finder 2.0, so I decided to pick up the book from Amazon.

The book has a short introduction, which is a quick read, details about each of the strengths, and a code to take the test on their website. The test is timed and probably takes about 35 minutes. After the series of questions is answered, it calculates your strengths and gives you your top five. Once you have your five strengths, the site and the book give you an explanation of each strength as well as “Ideas for Action” for that strength.

According to the test, these are my top five strengths:

  1. Adaptability
  2. Input
  3. Learner
  4. Communication
  5. Achiever

 

What are yours?


Magic: It is now possible to use MS Silverlight based websites via pipelight

It has long been a challenge to use MS Silverlight-based websites on Linux systems. Especially in the Netherlands this is a big hurdle, as many (>80%) of the secondary school websites that pupils must use to communicate with their school (for homework, marks, etc.) require Silverlight. Yes, really… 🙁

Fortunately, at the end of August 2013 I discovered pipelight, a very smart way to use MS Silverlight-based websites natively on Linux. The problem, however, was finding a working pipelight package for openSUSE. As there was none, I decided to build one myself using the incredible openSUSE Build Service. It was quite a quest to obtain a working package, but thanks to very good cooperation with the pipelight developers, I’m now able to present a working pipelight package to the openSUSE community. Oh, and while working on the package I reported a bug via the bug report system that was solved and published as an RPM package within one hour of my report (and that was outside office hours). Indeed, within one hour of reporting the problem it was accepted, investigated, analysed, fixed, tested, handed over to me, packaged, tested and published! The amazing world of Open Source Software!

Pipelight works okay for the following sites (among many others): arte, LOVEFiLM, Netflix, Magister-based Dutch school websites, WATCHEVER, etc. View the complete list on the pipelight website.

The installation instructions are on the pipelight website. Be aware, though, that pipelight requires the wine package provided via the home:rbos:pipelight repository; with any other wine package, pipelight will (very likely) not work. If you rely on your currently installed wine package and MS applications, and are unsure whether the wine package from the home:rbos:pipelight repository will leave them untouched, don’t install pipelight (or only after making very good backups). You can always start by installing pipelight in a virtual machine.
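
For openSUSE the whole procedure boils down to adding that repository and installing the package. A rough sketch, assuming an openSUSE 12.3 system and the usual OBS repository URL layout (double-check the URL and package name against the instructions on the pipelight website):

zypper ar -f http://download.opensuse.org/repositories/home:/rbos:/pipelight/openSUSE_12.3/ pipelight
zypper refresh
zypper install pipelight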

Have fun with pipelight.


KDE in openSUSE: repository and maintainership changes!

Summer is ending soon (at least for those living in the northern hemisphere) and while cleaning is usually done during spring, the KDE team decided to do what I’d call… autumn cleaning of repositories.

You may know that the KDE presence in openSUSE, aside from being the default desktop, goes back quite a long way. In past years, different repositories were created by the members of the openSUSE KDE team (at the time mostly made up of KDE people hired by Novell) in order to review and test packages: newer Qt versions, KDE software, and so on. Fast forward to the present: nowadays the members of the KDE team come almost completely from the openSUSE community, and quite a number of changes have happened to the repositories as well. For example, newer releases of KDE software are submitted as maintenance updates for the latest available version of the distribution, and there are the KDE:Release:xy repositories for those who want the latest and greatest KDE software.

That also meant that a lot of repositories were unused and left bitrotting (and consuming the OBS’s precious build power). But no more! Recently, thanks to input from Raymond (tittiatcoke on IRC), a rather large cleanup of repositories is taking place.

The following repositories are going to be deleted:

  • KDE:Qt45

  • KDE:Qt46

  • KDE:Qt47

  • KDE:Qt:Stable

  • KDE:Netbook

  • KDE:Qt50

In the (unlikely) case you are using them, you should remove them ASAP.
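
If you are not sure whether any of them are still configured on your system, a quick check and removal from the command line looks roughly like this (the aliases below are just examples; use whatever zypper lr actually reports on your machine):

zypper lr -u | grep -E 'KDE:(Qt|Netbook)'
zypper rr KDE:Qt47        # repeat for each old repository that shows up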

This repository instead will be **moved** (thanks to kdepepo for reminding me):

  • KDE:Frameworks → KDE:Unstable:Frameworks

Along with the repository cleaning, there was also a reorganization of maintainership, because a number of former maintainers have moved on. In practice this means that notifications and reports will get to the right people instead of just clogging the mailboxes of unrelated people. ;) Of course, the present KDE team stands on the shoulders of giants, and is extremely thankful for the work done by those people in the past.

That’s all, now we’re back to our regularly scheduled programs.

the avatar of Greg Kroah-Hartman

binary blobs to C structures

Sometimes you don’t have access to vim’s wonderful xxd tool, and you need to use it to generate some .c code based on a binary file. This happened to me recently when packaging up the EFI signing tools for Gentoo. Adding a build requirement of vim for a single autogenerated file was not an option for some users, so I created a perl version of the xxd -i command line tool.

This works because everyone has perl in their build systems, whether they like it or not. Instead of burying it in the efitools package, here’s a copy of it for others to use if they want/need it.

#!/usr/bin/env perl
#
# xxdi.pl - perl implementation of 'xxd -i' mode
#
# Copyright 2013 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
# Copyright 2013 Linux Foundation
#
# Released under the GPLv2.
#
# Implements the "basic" functionality of 'xxd -i' in perl to keep build
# systems from having to build/install/rely on vim-core, which not all
# distros want to do.  But everyone has perl, so use it instead.

use strict;
use warnings;
use File::Slurp qw(slurp);

my $indata = slurp(@ARGV ? $ARGV[0] : \*STDIN);
my $len_data = length($indata);
my $num_digits_per_line = 12;
my $var_name;
my $outdata;

# Use the variable name of the file we read from, converting '/' and '.'
# to '_', or, if this is stdin, just use "stdin" as the name.
if (@ARGV) {
        $var_name = $ARGV[0];
        $var_name =~ s/\//_/g;
        $var_name =~ s/\./_/g;
} else {
        $var_name = "stdin";
}

$outdata .= "unsigned char $var_name\[] = {";

# trailing ',' is acceptable, so instead of duplicating the logic for
# just the last character, live with the extra ','.
for (my $key= 0; $key < $len_data; $key++) {
        if ($key % $num_digits_per_line == 0) {
                $outdata .= "\n\t";
        }
        $outdata .= sprintf("0x%.2x, ", ord(substr($indata, $key, 1)));
}

$outdata .= "\n};\nunsigned int $var_name\_len = $len_data;\n";

binmode STDOUT;
print {*STDOUT} $outdata;
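
Usage is the same idea as xxd -i: feed it a binary file (or pipe one in on stdin) and redirect the output into a C file. A quick sketch, with the file name purely as an example:

perl xxdi.pl KEK.auth > KEK_auth.h
# The generated file then looks along the lines of:
#   unsigned char KEK_auth[] = { 0x.., 0x.., ... };
#   unsigned int KEK_auth_len = <size of the input in bytes>;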

Yes, I know I write perl code like a C programmer, that’s not an insult to me.


the avatar of Klaas Freitag

After the 1.4.0 ownCloud Client Release

You might have heard that ownCloud Client 1.4.0 was released last week. It is available from our sync clients page for all major desktop platforms; have a look at the Changelog.

Danimo’s Visual Guide has already outlined the new stuff in the release, so there is no need to repeat it here. You should install and try it; that seems to be the opinion of many people who have tried it.

Also, people who shared their critical views of the client very publicly in the past are much more pleased now with 1.4.0. One example is a recent blog post on BITBlokes, a blog about all kinds of topics around FOSS. I read it regularly and often share its opinions. The author concludes very positively about the 1.4.0 client.

It is good to see the positive feedback overall. That shows a couple of things from my engineering point of view: the concentrated work we continuously do on all parts of ownCloud pays off. That is obvious, of course, but still nice to see. And our (also obvious) efforts to improve code quality, such as the consistent use of continuous integration and code reviews, help as well.

“People are always excited if releases come with GUI changes!” I heard people saying. Well, maybe, but that’s not the whole truth. What it proves for me again is how important UI design and UX are. As a knee-deep developer I have an interesting relationship with all UX topics: I always have an opinion, often a strong opinion, but the results coming out of that have not always been, well, the most optimal. Fortunately, on the client we work together with our UX guy Jan, and the positive feedback also shows how good that is for the software.

But enough of release pride. There is more work to do: the bug tracker is still not empty and the list of feature ideas is long. We will continue to focus on correctness, stability and robustness of syncing, on performance and on useful features, and work on a version 1.5 for you.

These are a couple of concrete points we’re focussing on for 1.5:

  1. We have already merged the client code onto the new upstream sync version in git.
  2. Performance improvements through further reduction of the number of requests and more efficient database operations on the client.
  3. We are working on a new propagator component that allows us to make the changes mentioned in 2 more easily.
  4. File manager integration, which means having icons in Explorer, Dolphin and friends.

A more detailed list can be found on GitHub.

Thank you for all your help and support. It’s big fun!


Installing openSUSE Factory on a laptop

First let me explain why openSUSE Factory and not in fact Gentoo.

Factory points:

  • + Much faster install/update
  • + Better out-of-the-box experience – no need to fiddle with everything to get it running
  • + I can throw stuff at OBS and get it back quickly as a binary when I am on battery
  • – It has more issues than Gentoo stable in some areas
  • – Unable to remove things like pulseaudio easily

Gentoo points:

  • + More stable on stable – this will change a lot in the near future if everything goes right for openSUSE
  • + Faster system (a matter of a few percent)
  • + Possibility to remove stuff you really don’t want
  • – Compilation of everything – I update/install on battery often
  • – Smaller chance to fix anything while I am traveling, as I have to compile it on the box

So how should I install it? (Break your machine in 3 easy steps)

We will need an ISO, which is easy to grab from our build service.
Sometimes, when we do fancy(TM) stuff on the YaST side, the ISO might not work, YAY, but fear not: you can still grab the latest released version and start from there.

Installation

You really don’t expect me to give you a guide on how to install it, right? Just get it to the state where it reboots for the first login.

Actual migration

Check the repositories and verify that they are pointing at the Factory target; this depends on the media we installed from, but it is not hard to change.
Just remove the repositories the installer set up during installation and put the following ones in (or edit the existing ones if you want) in YaST:

mosquito:~ # zypper lr -u
# | Alias              | Name                     | Enabled | Refresh | URI                                                    
--+--------------------+--------------------------+---------+---------+--------------------------------------------------------
1 | repo-debug         | openSUSE-Factory-Debug   | No      | Yes     | http://download.opensuse.org/factory/repo/debug/       
2 | repo-non-oss       | openSUSE-Factory-Non-Oss | Yes     | Yes     | http://download.opensuse.org/factory/repo/non-oss/     
3 | repo-oss           | openSUSE-Factory-Oss     | Yes     | Yes     | http://download.opensuse.org/factory/repo/oss/         
4 | repo-source        | openSUSE-Factory-Source  | No      | Yes     | http://download.opensuse.org/factory/repo/src-oss/
Q: How do I add them with zypper from the CLI?
A: zypper ar -f -n openSUSE-Factory-Oss http://download.opensuse.org/factory/repo/oss/
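
If you want to recreate the full set from the listing above, the remaining repositories can be added the same way. A sketch using the names and URIs shown above (debug and source are added disabled, matching the listing):

zypper ar -f -n openSUSE-Factory-Non-Oss http://download.opensuse.org/factory/repo/non-oss/ repo-non-oss
zypper ar -d -f -n openSUSE-Factory-Debug http://download.opensuse.org/factory/repo/debug/ repo-debug
zypper ar -d -f -n openSUSE-Factory-Source http://download.opensuse.org/factory/repo/src-oss/ repo-source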

After this change we just migrate to Factory by updating the whole distribution (even if nothing changed, as the ISOs sometimes won’t build for a while).
I recommend running this from the CLI and not having anything else running at the time; even if it wouldn’t hurt much, it is just safer :-)

zypper dup

As a note, keep in mind that you should always run zypper dup on Factory, as packages sometimes even get downgraded and so on, so forget about zypper up there.
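
If you want to see what the switch is going to do before committing to it, a dry run is a cheap sanity check (recent zypper versions support this option):

zypper dup --dry-run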

I did it. Now what?

Well mate, now you have a rolling binary distribution that is quite usable and working, even despite all the openSUSE contributors trying to break it every day ;-)

If you find any problems during usage, just drop by the respective development IRC channel (#opensuse-kde/…) for the package that might be causing the trouble and ask the members whether they already have a fix and, if so, whether it is on its way to Factory. Then sit back for a while and in ~1 day enjoy your updated/fixed package.

If there is no fix, or nobody else is willing to fix your issue (that can happen, as we all see our priorities differently), just file a bug or, even better, hack on it and fix it yourself and get some cool stuff like an IRC cloak and an email alias for being a contributor (that does not come with the first fix, obviously :P).

What are your current issues with the Factory?

Actually it is running quite well, and if I find an issue I can annoy somebody or fix it pretty fast, but let me list some things I am currently indifferent to or that are not annoying enough to fix:

  • The Czech translation of SUSE internals is sometimes funny
  • The disk decryption password dialog shows the disk name in a way that overflows the screen; not a biggie
  • Artwork is, for obvious reasons, a moving target, so sometimes it says 12.3, sometimes it changes every day and so on; that is not a bug, that is expected
  • Akonadi is a PITA; not a problem of Factory at all, but it is the biggest issue I have on all my machines
  • The kernel updated too often – that was my bad, as I approved 3.11 too early and then had to put in the fixes so you guys do not suffer :-)

As an endnote, I have to mention that I updated our wiki a bit to explain who we are and what we do. It is more of a work in progress, but it should give at least some explanation of how we put Factory together and what issues we face. We are always looking for contributors who want to work on fixing the current Factory state or improving our tools. If you are bored, help us out so 13.1 is an awesome release for everyone…

Mandatory screenshot

Excuse the Nexus 4 camera, I was too lazy to reach for a normal one :-)


YastTeam@Freenode

After moving all the code to GitHub and translating YCP to Ruby, we, the Yast team, have decided we would like to open up Yast development even more. We realized that we should be easily reachable by the community online. That's why we decided to go where the community already is and move our communication to IRC on Freenode.

Find us at irc://irc.freenode.net channel #yast

Come and see the real life of Yast developers! Share your thoughts with us! Test and try Yast's sharpest edge! See you there! :)


Intel & Mir: The point-of-view of a graphics driver developing bystander.

Only a few days ago did I write about how open source software is not about "code or design or doing The Right Thing". "Open source software is about power, politics, corporate affiliation, and loads and loads of noise." I would like to thank Intel for so succinctly underlining that now with their current action.

Before I go any further: this seems not to be Chris Wilson's decision or his preferred solution. Chris wrote the patch, which he was told, by an unnamed party or parties at Intel, to back out. Also, I personally do not condone the actions taken by Canonical, but, as a graphics driver developer, I find Intel's actions far worse. I rather doubt that Intel thought this one through properly.

What's the problem?

As a graphics driver developer, I fail to see the big problem with Mir.

So what if Canonical has decided to reinvent Wayland? Apart from the weird contribution agreement (which will only limit contributions), Mir is fully free software isn't it? Who are they hurting apart from their own resources and their own users? It's not that I am applauding Canonical for their decision, but I really don't see the massive problem here.

Why is Canonical not allowed to do this?

Reinvention galore

I personally really hate things being reinvented all the time. It is the disease that plagues open source software, and it is what makes sure that we don't have a growing linux market.

How often have we heard that something is outdated and broken and doesn't fit modern demands anymore? We are then invariably being told that something new is being built afresh, from the lessons learned of what was done "wrong" before, and that in a few months time, everything is going to be fantastic. Sadly, such timeframes never pan out, and while the known "errors" are fixed, everything else gets broken, which then has to be reinvented or ported as well (or which simply remains broken). And then several years down the line, things are still not perfect, and then someone else (or sometimes even the same person) goes off and implements the next great thing from scratch, again.

We never have something that just works, we just go from broken state to broken state. And nobody learns from this, nobody apparently ever states "Hang on, isn't that pretty much the same story we heard 3 years ago?"

To me, as a stupid shortsighted driver developer, Wayland seems like X reinvented: a server/client display architecture with the new lessons learned implemented, but with everything else broken. We've been waiting for all those little niggles to get worked out ever since 2009, and at one point networking was added to Wayland, making it even more of an X replacement.

So then Mir was announced... And suddenly the world was ablaze. Huge flamewars broke out everywhere and effigies of Mark Shuttleworth were getting burned in the forums. I found the Mir move quite ironic, at first, and thought that the outrage was quite out of proportion, but then I read this article. It is a who's who of reinventers, complaining about Canonical reinventing Wayland. I was appalled.

What exactly gives these people the sole monopoly on reinvention?

What is Intel afraid of?

How could Mir possibly threaten Wayland?

Intel is a pretty big company, and it probably has the largest contingent of open source developers devoted to graphics. It employs some of the brightest and most influential people in the business. On top of that, Wayland was there first, has had more time to mature, has had more applications and toolkits ported, and has a much larger mindshare. Most people would think that Wayland's future is pretty secure.

So what could possibly be so much better about Mir that it poses such a big threat to Wayland that Intel's graphics driver developers have to be told not to support XMir at all? Honestly, given the above constellation, how vastly superior technically does Mir have to be to justify such an action? If Intel really feels that it has to react like this, well, then it might as well just throw in the towel and go Mir immediately, as Wayland clearly must be completely useless.

What a way to expose your own insecurity.

Software Fascism

Intel finds it necessary to play games with their X.org graphics driver, instead of having Wayland battle it out directly with Mir.

This kind of power play is quite insidious, and far more damaging than most people would expect. It completely skews the ability of software to compete on fair and equal grounds, and it hurts us all, as it is mostly applied by those who are not able to compete properly, or those who feel as if they shouldn't need to bother to compete properly. It tends to favour the least technically advanced and the least morally acceptable.

The best example which I have come across so far is the RadeonHD versus Radeon battle. RadeonHD beat ATI by actually providing a solid open source driver in September 2007, and we at SuSE had a stated goal of being able to ship a solid open source driver on enterprise desktop rollouts. 3 months later, Radeon came around with support for the same hardware. It was technically inferior, and "borrowed" much of the hard work of RadeonHD plus some noise added on top. What was worse was how the so-called X.org community used software fascism to artificially boost the Radeon driver. This started out with the refusal of a mailing list at the usual place, hit a low point with RadeonHD being dropped from the build script for the X server, and sank to whole new levels when, two years after the obvious death of the RadeonHD driver, the RadeonHD repository got vandalized (and the whistleblower got tarred and feathered while the perpetrators were commended for their "quick" confession).

So who won?

Well, it definitely was not RadeonHD, as that died in early 2009 when Novell laid off a large portion of the SuSE developers in Nuremberg. As luck would have it, at the same time AMD experienced serious financial difficulties and did not continue the RadeonHD project with SuSE. But although Radeon did survive, it did not win either. ATI won, AMD (which wanted a proper open source driver, whereas ATI seriously didn't) lost, and we all lost with it. Fglrx still rules supreme today, but now it does not get as much flak as it did before, as the fig-leaf driver provides some sort of an alternative for those who are unhappy with fglrx. But it goes beyond that: the radeon driver consistently applies, or applied, the solutions ATI's fglrx developers recommended, instead of the empirical solutions we at RadeonHD usually chose, and the radeon driver is not as good as it could be.

Software fascism goes further than just badly skewing competition, and it always is a negative influence on software. Who knows what other bad decisions will make their way into the Intel driver now?

The responsibility of a graphics driver

The main responsibility of a graphics driver is to support the users of your graphics hardware. If you are actually employed by the vendor, your users are those who bought your hardware and who will buy your hardware again if they are satisfied. This is the business case for providing optimal support for your hardware for a given operating system or infrastructure. On top of that, in open source software, the users are more than just the customers, they are also the testers.

Canonical's plan and marketing seem to have worked out quite well over the years, to the extent that half the planet thinks that linux equals ubuntu, and ubuntu probably has the larger part of the linux desktop market. This means that ubuntu users are a sizable portion of Intel's user base, and as a hardware vendor (and only secondarily a maker of display servers), Intel simply cannot afford to refuse to support or even alienate these users. Canonical has decided that Mir will be the primary display server on future Ubuntu releases, and this in turn means that Intel has an obligation to support Mir.

The XMir patch to the Intel graphics driver seems rather minimal and not very invasive. There also seems, or seemed, as the case may be now, to be direct communication between Intel's graphics driver developers and Ubuntu's developers. As Mir will ship on the next Ubuntu versions, there will be a large number of users who will test the XMir code in the Intel graphics driver. There is no chance that the XMir code will bitrot for the foreseeable future, and Intel's own investment in this code will be minimal.

The real art of writing good drivers is to provide for quick and painless debugging. Graphics hardware is complex, the drivers for this hardware are also complex, and neither is ever perfect, so one has to work hard to maximize the chance for bug resolution. This means easy communication with users, and giving the user an easy route to test changes so that proper feedback can be provided quickly. If you fail to make it easy enough for users, you will simply not get your bugs fixed, and the higher the resolution threshold becomes, the worse your driver will become.

By not carrying this patch, Intel forces Ubuntu users to report bugs only to Ubuntu, which then means that only a few bug reports will filter through to the actual driver developers. At the same time, Ubuntu users cannot simply test upstream code which contains extra debugging or potential fixes. Even worse, if this madness continues, you can imagine Intel telling its customers that it refuses to fix bugs which only appear under Mir, even though there is a very, very high chance of these bugs being real driver bugs which are just exposed by Mir.

The reality of the matter is, Intel is hurting its own graphics driver more than it could potentially hurt Mir or Canonical.

The androidization of linux

The biggest installed base of Linux is android, and it is bigger by many orders of magnitude. Sadly the linux which we call android is little more than the linux kernel and some new-ish (mostly) open source infrastructure on top. While this, to some extent, is quite the boon for open source software, it also holds a major threat. If we are not careful, we get fully locked hardware. We are only sporadically able to enforce the GPL on the kernel, and we have no chance at all to get open source userspace drivers. This limits the usefulness of the now ubiquitous linux hardware out there, and with the way the desktop and mobile are evolving, this will soon limit the availability of hardware for which more-or-less complete open source support is available. On top of that, all those electronics companies that are churning out hardware at an amazing rate are either unable to see the advantages of actively contributing to open source, or are having a very hard time learning how to do so.

This is exactly why I created the lima driver, and why some other brave souls created their respective GPU reverse engineering projects. We recognized this danger, and are sacrificing a large portion of our lives trying to prevent catastrophe. And even though things are not going as fast or as smooth as we expected, we have come a very long way.

Things took a wrong turn a while back, though. In an effort to create a stopgap solution, Jolla developer Munk created libhybris, a wrapper library which allows the use of android drivers on top of glibc, and thus on a normal linux installation. I find this hack pretty dangerous, as it makes all vendors complacent, it cements the android way of working, and it makes binary drivers the default. Our biggest open source hopes for mobile (Sailfish, Firefox-OS and Ubuntu-Phone's Mir) readily embraced this way of working.

I have, so far, not seen anything from either Jolla, The Mozilla Foundation or Canonical, along the lines of active support of the route we have chosen with open ARM GPU drivers, and we've been at it for quite some time now. Those companies are more dependent on open source software than your average android vendor, and know how to do things the open source way, but they have fully embraced the binary drivers built for android only, with no signs of them wanting to change this.

The only reason why I favour Wayland over Mir is that Canonical immediately chose the libhybris route with Mir. Wayland currently has patches for libhybris, so soon Wayland sadly will have sunk to Mir's level as well, from a graphics driver point of view.

Intel employs a small army for their open source software, and specifically for their open source graphics driver. But Intel also has other teams working on graphics drivers, and while I am not certain, I do think that Intel ships binary only drivers on their android devices.

Canonical is happy with using libhybris, but currently would prefer to use a proper graphics driver for their future products. That preference has now been significantly reduced. Intel has now potentially driven one of the last big users of open source graphics drivers to exclusively using android binaries as well, seriously reducing the relevance of its own OSTC driver developer team in the process.

The low road

Up until now, Intel had the moral high ground in the Wayland versus Mir situation. With the simple decision to revert the XMir patch, that situation has now been reversed.

Well done.

Really basic intro to encrypted filesystems in openSUSE

When I was in high school in the 1980s, I was a total computer geek. (We called geeks "computer geeks" back then, because there used to be such a thing as a non-computer geek.) After high school, though, my life took a different direction, and for 20 years I was only a casual computer user. Only in 2009 did I start getting serious about getting back into the field. During those 20 years a lot of stuff happened in the IT field that I simply missed, and am now catching up on. One of those things is encrypted filesystems. Here's a really basic introduction to setting up and using one of these in openSUSE:

Read more »