GNOME.Asia Summit 2014 - Call for T-shirt Design Contest Entries
The T-shirt Design Contest
Prizes
- Winner: a special gift from the local team and two t-shirts with your winning design
4.13 Beta 1 Workspaces, Platform and Applications for openSUSE: start the testing engines!
Yesterday KDE released the first beta of the upcoming 4.13 version of their Workspaces, Applications and Development Platform. As usual with major KDE releases, it’s packed with a lot of “good stuff”. Listing all the improvements would be daunting, but some key points stand out:
- Searching: KDE’s next-generation semantic search is a prominent feature of this release. It’s several orders of magnitude faster, much leaner on memory and generally a great improvement over the previous situation (this writer has been testing it for the past months and is absolutely delighted with it).
- PIM: Alongside tight integration with the new search feature, KMail gained a new quick filter bar and search, many fixes in IMAP support (also thanks to the recent PIM sprint) and a brand new sieve editor.
- Okular has a lot of new features (tabs, media handling and a magnifier).
- A lot more ;)
Given all of this, could the openSUSE KDE team stay still? Of course not! Packages are available in the KDE:Distro:Factory repository (for openSUSE 13.1 and openSUSE Factory), as there are a lot of changes that need more testing. The final release will also be provided in the KDE:Release413 repository (which will be created at that time).
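For the impatient, adding the repository on openSUSE 13.1 typically looks like this (a sketch: the URL follows the usual OBS download layout and the “KDF” alias is arbitrary):
# Add the KDE:Distro:Factory repository and refresh the metadata:
zypper ar -f http://download.opensuse.org/repositories/KDE:/Distro:/Factory/openSUSE_13.1/ KDF
zypper ref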
Some notes on the search changes: you will need to migrate existing data from Nepomuk (should you want to: it’s optional). You can do that by running “nepomukbaloomigrator” while Nepomuk is running: it will automatically migrate your data and switch off the old system (virtuoso-t included). Also bear in mind that, since the old Nepomuk support is considered “legacy” (but still provided), programs that have not yet been ported to the new architecture have their Nepomuk integration disabled. One significant regression is file-activity linking, which will not work.
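In practice the migration is a single command; a sketch, using the migrator name given above:
# With the Nepomuk services still active, migrate the data and disable the old system:
nepomukbaloomigrator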
As usual, this is an unstable release and it is only meant for testing. Don’t use this in production environments! If you encounter a bug, use Novell’s Bugzilla if it is packaging related; otherwise head to bugs.kde.org. Also, before reporting anything, please check the Beta section of the KDE Community Forums first.
That’s all, enjoy this new release!
Short report from Installfest 2014 Prague
This week, Michal and Tomáš write about their visit to Installfest 2014 in Prague.
Installfest Prague is a local conference that, despite its name, has talk tracks and sometimes even quite technical topics. We, Michal Hrušecký and Tomáš Chvátal, attended this event and spread the Geeko News around…
Linux for beginners workshop
Tomáš worked with local community member Amy Winston to show that our lovely openSUSE 13.1 is great for day-to-day usage, even (or especially) for people migrating from Windows. We demonstrated KDE’s Plasma Desktop, showed how to use YaST to achieve common tasks and then asked the audience for specific requests about things to demonstrate. We can tell you the participants liked Steam a lot! Apart from talking about and demonstrating openSUSE, we also explained that SUSE sells SLE, which people can buy in the store if they want a rock-solid stable OS.
The workshop setup was a bit tricky because the machines didn’t have optical drives and we didn’t have enough USB sticks to work around the issue. Our solution was to boot the ISO directly from a PXE server and force NetworkManager not to start during boot, because it would happily overwrite the network configuration, thus losing the ISO it was booting from.
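For the curious, keeping NetworkManager down can be done in a couple of ways; a minimal sketch, assuming systemd as on openSUSE 13.1 (not necessarily the exact commands we used):
# Prevent NetworkManager from ever being started on the booted system:
systemctl mask NetworkManager.service
# Or, one-off and without modifying anything on disk, via the kernel command line:
#   systemd.mask=NetworkManager.service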
Factory workflow presentation
Over the last months, our team has been working on improving the way we develop openSUSE: the Factory Workflow. You can read up on it in earlier blogs here. During the Installfest, we showcased our new workflow, which uses openQA and staging projects. We discussed what we are trying to achieve and what we can already do right now with the osc/obs/openQA combo. People were quite enthusiastic and two people in the audience were already using Factory!
As always, we mentioned that we have Easy Hacks and that we really, really want people to work on them.
Overall SUSE presence
The openSUSE/SUSE presence was, as usual, strong at this event, so we tried our best to let people know that we are cool as a project and great as a company. We shared our openSUSE/SLE DVDs, posters, stickers and more with everyone. During the talks/workshops we also gave out SUSE swag as presents to people who answered some tricky questions.
Job offerings
We let everyone know that we are looking for plenty of new people in various departments, namely QA/QAM; a few people got interested and took the printed-out prospectuses. So hopefully our teams will grow.
Dynamic iptables port-forwarding for NAT-ed libvirt networks
Libvirt is particularly awesome when it comes to managing virtual machines, their underlying storage and networks. However, if you happen to use NAT-ed networking and want to allow external access to services offered by your VMs, you’ve got to do some manual work. The simplest way to get access is to set up some iptables …
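To give a taste of what the full post covers, a static forward for a single guest might look like this (illustrative addresses: a guest at 192.168.122.10 on libvirt's default NAT network, exposing SSH on host port 2222):
# Rewrite the destination of traffic arriving on host port 2222 to the guest:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.10:22
# And let the forwarded packets through the FORWARD chain:
iptables -I FORWARD -p tcp -d 192.168.122.10 --dport 22 -j ACCEPT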
The Raspberry Pi, en route to becoming a proper open platform.
Given that I was the one to poke a rather big hole in the Raspberry Pi image a year and a half ago, and since I have been playing a bit of a role in freeing ARM GPUs these last few years, I thought I'd say a few words on how I see things, why I said what I said back then, and why I didn't say other things.
It has become a bit of a long read, but it's an eventful story that spans about two years.
Getting acquainted.
I talked about the lima driver at LinuxTag in May 2012. As my contract work wasn't going anywhere real, I spent two weeks of Codethink time (well, it's not as if I ever did spend just 8h per day working on lima) on getting the next step working with lima, so I had something nice to demo at LinuxTag. During that bringup, I got asked to form a plan of attack for freeing the Raspberry Pi GPU, for a big potential customer. Just after my lima talk at the conference, I downloaded some code and images through the LinuxTag wifi, and set about finding the shader compiler on the train on the way back.
I am not one for big micro-managed teams where constant communication is required. I am very big on structure and on finding natural boundaries (which is why you have the term modesetting and the structured approach to developing display drivers today). I tend to split up a massive task along those natural boundaries, and then have clued folks take full control and responsibility of those big tough subtasks. This is why I split the massive task of REing a GPU between the command stream side and the shader compiler. This is the plan of attack that I had formed in the first half of 2011 for Mali (which we are still sticking to today), and I was of course looking to repeat that for the RPi.
On the train, I disassembled glCompileShader from the VideoCore userspace binary, and found the code to be doing surprisingly little. It quickly went down to some routines which were used by pretty much all gl and egl calls. And those lower routines were basically taking in some ids and some data, and then passing it to the kernel. A look at the kernel code then told me that there was nothing but passing data through happening at that side...
There was nothing real going on, as all the nifty bits were running somewhere else.
In the next few days, I talked to some advanced RPi users on IRC, and we figured out that even the bootloader runs on the VideoCore, and that it is the VideoCore that brings up the ARM core. A look at the 3 different binary blobs, for 3 different memory divisions between the ARM and the VideoCore, revealed that they were raw blobs, running directly on the VideoCore. Strings were very clearly visible, and it was immediately clear that the 3 blobs differed mostly in addresses. Apart from the addresses, nothing was immediately apparent, except the fact that even display code was handled by the VideoCore. I confirmed this with Ben Brewer, who was then working on the shaders for Lima, and who has a processor design background.
The news therefore wasn't good. The Raspberry Pi is a completely closed system with a massive black box pulling all the strings. But at least it is consistently closed, and there is only limited scope for userspace and 'The Blob' getting out of sync, unlike the PowerVR, where userspace, kernel and microcode are almost impossible to keep in sync.
So I wrote up the proposal for the potential customer (you know who you are): basically a few man-months of busywork (my optimistic estimate doubled, and then doubled again, as I never do manage to keep my own optimistic timelines) to free the userspace shim, and a potential month for Ben to poke at the VideoCore blobs. Of course, I never heard back from Codethink sales, which was expected with news this bad.
I decided to stay silent about the Raspberry Pi being such a closed system, at least for a few weeks. For once, this customer was trying to do The Right Thing by talking to the right people, instead of just making big statements, no matter what damage they cause or what morals they breach. I ended up being quite busy in the next few months and I kind of forgot about making some noise. This came back to haunt me later on.
The noise.
So on October 24th of 2012, the big announcement came. Here is the excerpt that I find striking:
"[...] the Raspberry Pi is the first ARM-based multimedia SoC with fully-functional, vendor-provided (as opposed to partial, reverse engineered) fully open-source drivers, and that Broadcom is the first vendor to open their mobile GPU drivers up in this way. We at the Raspberry Pi Foundation hope to see others follow."
While the text of the announcement judiciously talks about "userland" and "VideoCore driver code which runs on the ARM", it all goes massively downhill with the above excerpt. Together with the massive cheering coming from the seemingly very religious RPi community, the fact that the RPi's SoC was a massively closed system was completely drowned out.
Sure, this release was a very good step in the right direction. It allowed people to port and properly maintain the RPi userspace drivers, and took away a lot of the overhead for integration. But the above statement very deliberately brushed the truth about the BCM2835 SoC under the carpet, and then went on to brashly call for other vendors to follow suit, even though those others had chosen very different paths in their system design, and did not depend on a massive all-controlling black box.
I was appalled. Why was this message so deliberately crafted in what should be a very open project? How could people be so shortsighted and not see the truth, even with the sourcecode available?
What was even worse was the secondary message here. To me, it stated "we are perfectly happy with the big binary blob pulling the strings in the background". And with their call to other vendors, they pretty much told them: "keep the real work closed, but create an artificial shim to aid porting!". Not good.
So I went and talked to the other guys on the #lima channel on IRC. The feeling amongst the folks in our channel was a pretty uniform one of disgust. While at first glance this post looked like a clear and loud message in support of open drivers, it only looked like that on the absolute surface. Anyone who dug slightly deeper would soon see the truth, and the message would turn into "Keep the blob, guys!".
It was clear that this message from the RPi foundation was not helping the cause of freeing ARM GPUs at all. So I decided to react to this announcement.
The shouting
So I decided to go post in the Broadcom forums, and this is what I wrote:
"Erm… Isn’t all the magic in the videocore blob? Isn’t the userland code you just made available the simple shim that passes messages and data back and forth to the huge blob running on the highly proprietary videocore dsp?" – the developer of the lima driver.
Given the specific way in which the original RPi foundation announcement was written, I had expected everyone from the RPi foundation itself to know quite clearly that this was indeed just the shim driver. The reminder that I am the lima driver developer should've taken away any doubt about me actually knowing what I was talking about. But sadly, these assumptions were proven quite wrong when Liz Upton replied:
"No. There’s some microcode in the Videocore – not something you should confuse with an ARM-side blob, which could actually prevent you from understanding or modifying anything that your computer does. That microcode is effectively firmware."
She really had no idea to what extent the VideoCore was running the show, and then happily went and argued with someone whom most people would label an industry expert. Quite astounding.
Things of course went further downhill from there, with several RPI users showing themselves from their best side. Although I did like the comment about ATI and ATOMBIOS, some people at least hadn't forgotten that nasty story.
All-in-all, this was truly astounding stuff, and it didn't help my feeling that the RPI foundation was not really intending to do The Right Thing, but instead was more about making noise, under the guise of marketing. None of this was helping ARM GPUs or ARM userspace blobs in any way.
More shouting
What happened next was also bad. Instead of sticking to the facts, Dave Airlie went and posted this blog entry, reducing himself to the level of Miss Upton. A few excerpts:
"Why is this not like other firmware (AMD/NVIDIA etc)? The firmware we ship on AMD and nvidia via nouveau isn't directly controlling the GPU shader cores. It mostly does ancillary tasks like power management and CPU offloading. There are some firmwares for video decoding that would start to fall into the same category as this stuff. So if you want to add a new 3D feature to the AMD gpu driver you can just write code to do it, not so with the rPI driver stack."
"Will this mean the broadcom kernel driver will get merged? No. This is like Ethernet cards with TCP offload, where the full TCP/IP stack is run on the Ethernet card firmware. These cards seem like a good plan until you find bugs in their firmware stack or find out their TCP/IP implementation isn't actually any good. The same problem will occur with this. I would take bets the GLES implementation sucks, because they all do, but the whole point of open sourcing something is to allow other to improve it something that can't be done in this case."Amazing stuff.
First of all, with the VideoCore being responsible for booting the BCM2835, and the BCM2835 already being a supported platform in the Linux kernel (as of version 3.7), all the damage had been done already. Refusing yet another tiny message-passing driver, one with a massive user base that will see a lot of testing, is then rather short-sighted. Especially since this would have made it much easier for the massive RPi userbase to use the upstream kernel.
Then there is the fact that this supposed kernel driver would not be a drm driver. It's simply not up to the self-appointed drm maintainer to either accept or refuse this.
And to finish off: a few years ago, Dave forked the RadeonHD driver code to create a driver that did things the ATI way (read: much less free, much more dependent on BIOS bytecode, and with many other corners implemented the broken ATI way), wasted loads of AMD money, stopped register documentation from coming out, and generally made sure that fglrx still rules supreme today. And all of this for just "marketing reasons", namely the ability to trump SuSE with noise. With a history like that, you just cannot go and claim the moral high ground on anything anymore.
Like I've stated before, open source software is about making noise, and Dave's blog entry was nothing more and nothing less. Noise breeds more noise in our parts. This is why I decided not to provide a full and proper analysis of the situation back then: we had lowered ourselves to the level that the RPi foundation had chosen, and nothing I could have said would've improved on that.
Redemption
I received an email from Eben Upton last Friday, where he pointed out the big news. I replied that I would have to take a thorough look first, but then also stated that I didn't expect Eben to email me if this weren't the real thing. I then congratulated him extensively.
And congratulations really are in order here.
The Raspberry Pi is a global sales phenomenon. It created a whole new market, a space that devices like the Cubieboard and Olimex boards now also occupy. It made ARM development boards accessible and affordable, and put Linux on ARM in the hands of 2.5 million people. It is so big that the original goal of the Raspberry Pi, namely education, is kind of drowned out. But just the fact that so many of these devices have been sold already will have a bigger educational impact than any direct action towards that goal. This gives the RPi foundation a massive amount of power, so much so that even the most closed SoC around is being opened now.
But in October of 2012, my understanding was that the RPi foundation wasn't interested in wielding that power to reach those goals which I and many others hold dear. And I was really surprised when it was now revealed that they did, as I had personally given up on this ever happening, and I had assumed that the RPi foundation was not intending to go all the way.
What's there?
After the email from Eben, I joined the #raspberrypi-internals channel on freenode to ask the guys who are REing the VideoCore, to get an expert opinion from those actually working on the hardware.
So what has appeared:
* A user manual for the 3D core.
* Some sample code for a different SoC, but code which runs on the ARM core and not on the VideoCore.
* This sample code includes some interesting headers which help the guys who are REing the VideoCore.
* There's a big promise of releasing more about the VideoCore and at least providing a properly free bootloader.
These GPU docs and code not only mean that Broadcom has beaten ARM, Qualcomm, Nvidia, Vivante and Imagination Technologies to the post, they also make Broadcom second only to Intel (yes, ahead of AMD, or should I say ATI, as ATI has run most of the show since 2008).
Then there is the extra info on the VideoCore and the promise of a fully free bootloader. Yes, there still is the massive dependence on the VideoCore for doing all sorts of important SoC things, but releasing this information takes time, especially since this was not part of the original SoC release process and has to be made up for afterwards. The future is looking bright for this SoC though, and we really might get to a point where the heavily VideoCore-based BCM2835 becomes one of the most open SoCs around.
Today though, the BCM2835 still doesn't hold a candle to the likes of the Allwinner A10 with regard to openness. But the mid-to-long-term difference is that most of Allwinner's openness so far was accidental (they are only now starting to directly cooperate with the linux-sunxi community), and Allwinner doesn't own all the IP used in their SoC (ARM owns the GPU, for instance, and is being a big old embedded dinosaur about it too). The RPi SoC not only has very active backing from the SoC vendor; that SoC vendor also happens to be the full owner of the IP.
The RPi future looks really bright!
Whining.
I do not like the 10k bounty on getting Q3A going with the freshly released code. I do not think this is the best way to get a solid GPU driver out, as it mostly encourages a single person to make a quick and dirty port of the freshly released code. I would've preferred to see someone from the RPi foundation create a kanban with clearly defined goals, and then let the community work off the different work items, after which those work items are graded and rewarded. But then, it's the RPi foundation's money, and this bounty is a positive action overall.
I am more worried about something else. With the bounty and with the promise of a free bootloader, the crazy guys who are REing the VideoCore get demotivated twice. The 10k bounty rules them out, as it calls for someone to do a quick and dirty hack, and not for long hard labour towards The Right Thing. The promise of a free bootloader might make their current work superfluous, and has the potential to halt REing work altogether. These guys are working on something which is, or at least was, an order of magnitude harder than what the normal GPU REing projects have or had to deal with. I truly hope that they get some direct encouragement and support from the RPi foundation, and that both parties work together on freeing the rest of the VideoCore.
All in all, this Broadcom release is the best news I have heard since I started on the open ARM GPU path; it actually is the best news I've had since AMD accepted our proposal to free the ATI Radeon (back in June 2007). This release fully makes up for the bad communication of October 2012, and has opened the door to making the BCM2835 the most free SoC out there.
So to Eben and the rest of the guys at the RPi Foundation: well done! You made us all very happy and proud.
Monitor large file transfers with iostat
# iostat -mx 1
It will show you extended I/O statistics (-x) for your box in megabytes per second (-m), updated every second (the trailing 1 is the refresh interval in seconds).
LibreOffice GSoC user interface tasks
Working on a GSoC task has a real impact - for example, the new LibreOffice 4.2 Start Center is the result of a GSoC task. Yes, the very face of the new LibreOffice version is here thanks to the Summer of Code.
I am going to mentor again this year, mostly user interface ideas. Let me summarize them here:
- Improved Color Selection: Something that is really badly needed in LibreOffice. Selecting a color that is not pre-defined is a painful task for users - help us to finally improve the experience there!
- Further improvements in the Template Manager: The Template Manager is itself the result of an older GSoC task. We would like to integrate it into the new Start Center, to give the user a seamless experience.
- Improve usability of Personas: This is a task to make selecting themes for your LibreOffice more pleasant, with better integration with Firefox Themes.
- Revamp the Gallery tool: This task will enable the user to browse an online gallery of shapes, based on older work that made this somewhat possible.
If you want to apply for any of these, don't forget that we require you to solve a programming Easy Hack first; the more of them you do, the better your chances that we will select you.
And if you have any questions, don't hesitate to ask :-)
About openQA and authentication

As you might have read in our previous post, our most important milestone in improving openQA is to make it possible to run the new version in public and accessible to everyone. To achieve this we need to add support for some form of authentication. This blog is about what we chose to support and why.
So what do we need?
openQA has several components that communicate with each other. The central point is the openQA server. It coordinates execution, stores the results and provides both the web interface for human beings and a REST-like API for the less-human beings. The REST API is needed for the so-called ‘workers’, which run the tests and send the results back to the server, but also for client command line utilities. Since this API can be used to schedule new tests, control job execution or upload results to the server, some kind of authentication is needed. We don’t want to write a whole article about REST authentication, but maybe some overview of the general problem could be useful to explain the chosen solution.
Authentication vs authorization
Let’s first clarify two related concepts: authentication vs authorization. Web authentication is easy when there is no API involved. You simply have a user and a password for accessing the service. You provide both to the server through your web browser, so the server knows that you are the legitimate owner of the account; that’s all. That means a (hopefully different) combination of user and password for each service, which is annoying and insecure. This is where openID comes into play. It allows any website to delegate authentication to a third party (an openID provider), so you can log in to that website using your Google, Github or openSUSE account. In any case, it doesn’t matter if you are using a specific user, an openID user or another alternative system – authentication is always about proving to the server that you are the legitimate owner of the account, so you can use it for whatever purpose on the website.
But what if you also provide an API meant to be consumed by applications instead of humans using a web browser? That’s where authorization comes into play. The owner of the account issues an authorization that can be used by an application to access a given subset of the information or the functionality. This authorization has nothing to do with the system used for the real authentication. The application has no access to the user credentials (username or password) and is not impersonating the user in any way. It is just acting on his or her behalf for a limited subset of actions. The user can decide to revoke this authorization at any time (they usually simply expire after a certain period of time). Of course, a new problem arises: we need to verify that every request comes from a legitimately authorized entity.
Technologies we picked
We need a way to authenticate users, a way to authorize the client applications (like the workers) to act on their behalf, and a way to check the validity of the authorizations. We could rely on the openSUSE login infrastructure (powered by Access Manager), but we want openQA to be usable and secure for everybody willing to install it, whether inside or outside the openSUSE umbrella.
openID
There’s not much to say about user authentication; openID was the obvious choice. Open, highly standardized, well supported and compatible with the openSUSE system and with some popular providers like Google or Github. And even more important: a great way to save ourselves the work and the risk of managing user credentials.
Authorization – harder
In terms of popularity and standardization, the equivalent of openID for authorization would be oAuth, but a closer look into it revealed some drawbacks, complexity being the first. oAuth is also not free of controversy, having resulted in three alternative specifications: 1.0, 1.0a and 2.0. Despite what the numbers suggest, the different specifications are not just versions of the same thing, but different approaches to the problem, each with its own supporters and detractors. Using oAuth would mean enabling the openQA server to act as an oAuth provider and the different client applications to act as oAuth consumers. Aside from the complexity and the various versions of the standard, one of the steps of the oAuth authorization flow (used when adding a new script or tool) is redirecting the user to the oAuth provider web page so that he or she can explicitly authorize the application. That means a browser is needed – not nice at all when the consumer is a command line tool (usually executed remotely). We want to keep it simple and we don’t want to run a browser to configure the workers or any other client tool. So let’s look for less popular alternatives.
There are two sides to the authorization coin: identifying who is sending the request (and hence the privileges) and verifying that the request is legitimate and the server is not being cheated about that identity. The first part is usually achieved with an API key, a useful solution for open APIs in order to keep service volume under control, but not enough for a full authorization system. To achieve the second part, there is a strategy which is both easy to implement and safe enough for most cases: HMAC with a “shared secret”.
In short, every authorization includes two automatically generated keys: the already mentioned API key and an additional secret key which is only known to the server and to the application using that authorization. Several headers are added to each request: at least one with the timestamp of the request, another with the API key and a third one which is the result of applying a hash algorithm (such as SHA-1) to the request itself (including the timestamp and the parameters), keyed with the shared secret. On the server, queries with old timestamps are discarded. For fresh requests, the API key header is used to determine the associated secret key, the same algorithm is applied to the query, and the result is checked against the corresponding header to verify both the integrity of the request and the legitimacy of the sender.
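As an illustration of the scheme (a sketch only: the header names, URL and the exact string being hashed are made up for the example, not openQA's actual wire format):
API_KEY="1234ABCD"                 # public, identifies the authorization
API_SECRET="s3cr3t"                # shared secret, never sent over the wire
TIMESTAMP=$(date +%s)
REQUEST="/api/jobs?state=running"
# HMAC-SHA1 over the request plus the timestamp, keyed with the shared secret:
HASH=$(printf '%s%s' "$REQUEST" "$TIMESTAMP" | openssl dgst -sha1 -hmac "$API_SECRET" | awk '{print $NF}')
curl -H "X-API-Key: $API_KEY" -H "X-API-Timestamp: $TIMESTAMP" -H "X-API-Hash: $HASH" "http://openqa.example.com$REQUEST"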
We decided to go for this as it is both simple and secure enough for our use case. Of course, the secret key still needs to be stored both server- and client-side, but it’s not the user’s password. It’s just a random shared secret with no relation to the real user credentials or identity. It can also be revoked at any point and can only be used for actions like uploading a test result or cancelling a job. Secure enough for a system like openQA, with no personal information stored and no critical functionality exposed.
Conclusion
So, we’ve implemented openID and are working on a lightweight HMAC-based authorization system. Next week we’ll update you on the other things we are working on, including our staging projects progress!
Goodbye EC2 Tools, Long Live AWS Tools
For quite some time we had a package named ec2-api-tools in the Cloud:EC2 project, and I suspect many who work with EC2 had found the package and were using the ec2-* commands to manage stuff in EC2. Along with the ec2-api-tools, Amazon maintained separate ec2-* tool sets for various services. Keeping up with the armada of Amazon developers is not easy, and thus the other ec2-* tool sets never got packaged.
Now a new integrated set of tools is available, invoked with the “aws” command and provided by the aws-cli package. The package is available from the Cloud:Tools project and a submit request to Factory is pending. The new package does not obsolete the ec2-api-tools package, as there is no issue with having both packages installed. However, I did take the liberty of removing the ec2-api-tools package from the Cloud:EC2 project, as it would no longer receive updates considering that we have a nice new tool that unifies all Amazon services. The documentation for the new command can be found online.
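To give a flavour of the unified command (the subcommands below are just examples; credentials are assumed to be set up, e.g. via “aws configure”):
# One tool for many services, instead of one tool set per service:
aws ec2 describe-instances --region us-east-1
aws s3 ls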
The aws code is hosted on GitHub, so contributing fixes is easy, which is another big plus over the ec2-* tool sets.
Yes, and of course we need to get openSUSE 13.1 into EC2; I am working on that, stay tuned…
