Standing for re-election: openSUSE Board Election Jan. 2015
My name is Robert Schweikert, IRC handle robjo, and I am standing for re-election in the upcoming openSUSE Board election in January of 2015.
With the end of 2014 my first term on the openSUSE board is already coming to an end; time flies. During my first term we collectively have seen many changes to our project. Many of these changes were difficult, and I would say we had a rough ride for a good chunk of the last two years. I think, and hope others agree, that I was able to help smooth some of the rough spots and help the project move into what could now be considered calmer waters. It was not easy, but I am glad I was able to contribute.
Since 2009 I have worked for SUSE in the ISV Engineering team. When I started, I primarily worked with IBM on joint projects. I also worked with other ISVs, helping them with questions regarding their applications on top of SUSE Linux. In recent years my role has transitioned, and I am now focused on Public Cloud work, working with our partners.
I have been using SUSE Linux, now known as the openSUSE distribution, since the beginning; I still remember when SUSE Linux 10 was released. I have been contributing to the project for many years (not from the get-go, it took me some time to move from user to contributor) by maintaining packages, more recently also by maintaining and publishing openSUSE images in the public cloud, and by helping with the organization of events. For the past two years I also had the privilege to contribute to the project as a board member. I would very much enjoy being able to continue my contribution on the board for another two years.
Looking forward, I see the need for an effort to re-invigorate our project. As a whole, the distribution "business" has lost some of its appeal and shine, something that is certainly to be expected. Nevertheless, even in a world that is getting more and more dominated by cloud services, containers, and whatever else, distributions are a necessity, and the openSUSE distribution always stands out as one of the top-notch community distributions. We have also proven that there is still plenty of innovation potential with the recent merge of Tumbleweed and Factory, turning what was previously a pure development stream into a usable rolling release. The credit for this of course goes to the Factory team, the release team and the many others that contributed to the new tools and backend infrastructure that make all this possible.

Re-invigoration for me not only means being proud and excited about such major technical accomplishments, but also means we need to be better organized when it comes to the representation of our project at FOSS events. Although the new booth box material is great, we have had a difficult time getting things organized and helping those that want to represent the project at events. I want to continue to push on this part and help make the distribution of material better. There is plenty of work to be done at the board level, and I am asking for your vote in the upcoming election to allow me to continue what is already in the works and to help start new initiatives to re-invigorate our project.
Testing Fedora with openQA
I've been working a lot with openQA lately, openSUSE's great operating system testing tool, and I think some of what I've been getting up to is kind of interesting.
This is likely to be the first of a series of blog posts about my crazy adventures with this awesome tool.
What is openQA?
openQA is a fully featured operating system testing tool.
It takes an operating system ISO or Disk Image, and using its Domain Specific Language (DSL) conducts a series of tests consisting of keyboard and mouse inputs while analysing the screen output for specific matches (called 'needles') to ensure the operating system is behaving as intended.
A typical, simple example of an openQA test looks like this:
# Header, doesn't normally need to be changed
use base "basetest";
use strict;
use testapi;

# run() subroutine, where the test code goes
sub run() {
    # test functions
    assert_screen "inst-bootmenu", 6;
    send_key 'ret';
}

1;
This test is used to check a boot loader. The 'assert_screen' function in this case is looking for a matching 'needle' (screenshot) which has a tag named 'inst-bootmenu'. If it doesn't find a matching needle within the defined timeout (6 seconds) the test fails. If it finds a matching needle within 6 seconds, the test continues, and presses the return key with the 'send_key' function.
The results of these tests are provided in the form of logs, screenshots, and video recordings delivered over the openQA web interface. You can have a look at openSUSE's production instance at http://openqa.opensuse.org and an example of a single set of test results HERE.
Screenshots? Isn't it a pain in the arse maintaining the Needles?
Not at all. The openQA webUI contains a needle editor (example HERE) which lets testers see both the reference screenshots and the accompanying JSON file, and edit both with a WYSIWYG editor.
Select the areas of the UI you're interested in, click save, done. It takes seconds to update your needles to accept intended changes to a UI.
As needles are only looking for specific UI elements on the screen, well-crafted needles can often allow regular UI changes to pass unmolested. If you're not interested in confirming the version number drawn on the screen, don't include the version number in your needles, and you'll have a version-neutral needle - like in the example HERE.
The openQA web interface can automatically commit and (if you want) push your needles to a git repository (openSUSE's are HERE) making it really easy for testers/test developers to work together on keeping your needle collection up to date.
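For reference, a needle is just a reference screenshot (a PNG) plus a small JSON file describing the match areas and the tags it answers to. A minimal sketch of such a JSON file, with illustrative coordinates and tag (check the openSUSE needles repository for the exact field names), looks roughly like this:
{
    "tags": [ "inst-bootmenu" ],
    "area": [
        {
            "xpos": 300,
            "ypos": 210,
            "width": 250,
            "height": 40,
            "type": "match"
        }
    ]
}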
Is openQA just for openSUSE?
NO! While it's mainly used and contributed to by the openSUSE Project for testing Tumbleweed and the openSUSE distribution, it's always been designed to be not only distribution-neutral (i.e. it can test any Linux distribution) but technically operating system-neutral - Linux, BSD, Windows, openQA doesn't care about what it's testing.
Yeah, right...prove it.
I haven't been working with openQA for very long. For a few weeks I've been working on new openQA tests for openSUSE and SUSE Linux Enterprise, but as we already have a huge collection of tests in our public git repository, a good number of my new tests borrow code and ideas from our existing test library.
So I got wondering: how hard is it to test a totally different operating system, with a totally different installation workflow?
So, after reading the recent Register review of Fedora 21 that had a subtitle mentioning 'install woes', I decided to pick on our friends at the Fedora Project and downloaded a copy of Fedora 21 Workstation.
'Designing' tests for openQA is easy. I fired up the ISO in a VM and kept notes on which screens would make good needles and which keys to press. Each of the screens became an 'assert_screen' call in my tests, and each keypress became a 'send_key'.
NOTE: I could have used openQA's 'assert_and_click' function to check the screen for a needled area and click in the middle of it - it's great for pressing buttons, but I decided to take a very keyboard-centric approach.
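To give an idea of what such a step looks like in practice, here is a sketch of a single test module in the same style as the example above; the needle tags and the key are made up for illustration and do not correspond to my actual Fedora tests:
use base "basetest";
use strict;
use testapi;

sub run() {
    # wait up to 90 seconds for the (hypothetical) anaconda language selection screen
    assert_screen "anaconda-select-language", 90;
    # keyboard-centric: accept the default and move to the next screen
    send_key 'alt-c';
    # the mouse-driven alternative would be a single assert_and_click:
    # assert_and_click "anaconda-continue-button";
}

1;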
In a few hours I had thrown together a basic set of tests, which you can get on GitHub.
These tests successfully complete a full installation of Fedora 21 Workstation, as you can see from the openQA-produced video I uploaded to YouTube.
It wasn't perfect though; no QA tool is any good if it doesn't find bugs - luckily even these really simple tests were able to reliably reproduce a known issue in anaconda which apparently had been causing problems since October and hasn't yet been fixed.
Wanting to prove the tests and needles were generic enough that they could be reused for future testing, I've also run these against a recent Rawhide build.
Everything looks really promising, but a bug in Rawhide with Keyboard Shortcuts needs to be fixed before my tests can complete a full run.
Not bad for a few hours' work if I say so myself, and I'd really like it if other people want to pick up where I've left off and extend these tests to provide more comprehensive coverage.
I'm interested, where can I find out more?
The main openQA website on GitHub is at http://os-autoinst.github.io/openQA/ and contains all the Documentation and Downloads you need to get started.
It takes about an hour to set up openQA on a basic openSUSE installation, and it doesn't have any crazy hardware requirements. If you can run KVM virtualisation, you can run openQA. All these Fedora tests have been run on a local instance of openQA on my X220 laptop running openSUSE Tumbleweed.
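For the impatient, on openSUSE the setup boils down to something roughly like the following; the repository and package names here are assumptions from memory, so follow the documentation for the authoritative steps:
# add the openQA development repository (URL pattern assumed) and install the web UI and a worker
zypper ar obs://devel:openQA/openSUSE_13.2 devel_openQA
zypper in openQA openQA-worker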
I'd especially recommend the Getting Started Guide which explains a lot more about the fundamentals of openQA
For more help, the best place is the #opensuse-factory IRC channel on irc.freenode.org (where I can be found as ilmehtar and/or sysrich) or the opensuse-factory@opensuse.org mailing list.
Have a lot of fun!
GNOME.Asia Summit 2015 to be hosted in Depok Indonesia
The ssh client has a config file?
A long time ago a coworker documented the ssh-forwarding-with-xinetd trick that I showed him. We use it heavily on our virtualisation cluster for openSUSE. Of course it gets annoying if you have to pass the port parameter to each invocation of ssh or scp, especially since ssh and scp are so nicely consistent about that. ;)
As the title already implies, ssh has a config file which we can use. It allows us to set basically every option that we could also pass via a command-line argument, and all of that per host.
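A minimal sketch of what that looks like in ~/.ssh/config (the host alias, host name and port are made up for this example):
Host myvm
    HostName virt-cluster.example.com
    Port 2222
    User root
With that in place, a plain ssh myvm or scp file myvm: picks up the port automatically, and the -p vs. -P inconsistency between ssh and scp stops mattering.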
more pam_systemd madness...
I quickly found out that the recent update of the systemd package had re-enabled pam_systemd in the PAM config.
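For reference, you can quickly check whether the module came back by grepping the PAM session config; the re-enabled entry looks roughly like this (file names as on openSUSE, exact flags from memory):
grep pam_systemd /etc/pam.d/common-session*
/etc/pam.d/common-session:session optional pam_systemd.so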
Now I'm fighting with the systemd package maintainer, in openSUSE bug 908798, about whether re-enabling this on every package update is a good idea. I certainly think it's not.
pam_systemd might have its merits on a desktop system, but I'd really like to know what it is supposed to be good for on a server. The man page shows me no feature that would be helpful there.
Let's see how many "RESOLVED INVALID" / "REOPENED" cycles this bug has to go through...
A course on working in the Linux command line, not only for MetaCentrum, 2015
Starting with the academic year 2015/16, the course is scheduled in SIS as MB120C17.
Dolphin Overlay Icons for ownCloud Sync Client
Our recent ownCloud Client 1.7.0 release contains the new feature of overlay icons in GNOME Nautilus, Mac OS X and Windows. That is nice, but it makes us old KDE guys sad, as Dolphin was missing from the list.
[Image: KDE's Dolphin with overlay icons for ownCloud's file sync]
That needed to change, and here we go: Olivier Goffart wrote a patch to do overlay icons in Dolphin as well, which was not straightforward because, in addition to a Dolphin plugin, a patch for libkonq was also required.
We prepared some test packages in our development repository isv:ownCloud:devel for those who wanna try and know their way around. Currently it only builds for a couple of openSUSE distros. You need to install kdebase4 and dolphin-plugins, and after installation it's easiest to restart KDE so that the plugin gets registered. But be warned: the two packages replace packages from your existing installation, so only do it if you really know what you're doing!
It would be great if at least the libkonq patch could make it upstream, and I would appreciate it if somebody who is a bit more fluent with recent KDE libs development could give me a hand with that. Otherwise, if distros wanna pick up the patches to make the overlays work, the patches are of course here: patch for libkonq and the ownCloud Dolphin plugin. The plugin will work with the released version 1.7.0 of ownCloud Client.
Workshop at CERN
Last week, Thomas, Christian and myself attended a workshop at CERN, the European Organization for Nuclear Research in Geneva, Switzerland.
CERN is a very inspiring place, attracting intelligent people from all over the world to get behind the secrets of our being. I felt honored to be at the place where, for example, the World Wide Web was invented.
The event was called Workshop on Cloud Services for File Synchronisation and Sharing and was hosted by the CERN IT department. There were around 100 attendees.
I was giving a talk called The File Sync Algorithm of the ownCloud Desktop Clients, which was very well received. If you happen to be interested in the sync algorithm we’re using, the slides are a nice starting point.
What amazed me most was the great atmosphere and the very positive attitude towards ownCloud. Many representatives of edu organizations using ownCloud whom I talked to were very happy with the product (even though there are problems here and there) from the technical point of view. A lot of interesting setups and environments were explained, which also showcased ownCloud's flexibility to integrate into existing structures.
What was also pointed out by the attendees of the workshop was the importance of the fact that ownCloud is open source. Non-free software does not have a chance at all in that market. That was the very clear statement in the final discussion session of the workshop.
The keynote was given by Prof. Benjamin Pierce from the University of Pennsylvania, with the title Principles of Synchronization. He is the lead author of Unison, another open-source sync project. Its sync engine is of very high quality, but it is not "up-to-date software" any more, as he said. I had the pleasure of spending quite some time with him discussing syncing in general and our sync algorithms in particular, amongst other interesting things.
[Image: ATLAS detectors]
As part of his work, he uses a tool called QuickCheck to do very advanced testing. One night we were sitting in the canteen there, hacking to adapt the testing to the ownCloud client and server. The first results were very promising: for example, we revealed a "problem" in our sync core that I knew of, which formally is a sync error, yet very, very unlikely to happen and thus accepted for the sake of a simpler algorithm. It was impressive how fast the testing method identified that problem. I would like to follow up on the testing method.
Furthermore, we met a whole variety of other interesting people: backend developers, operators of huge data sets (100 petabytes), the director of CERN IT, a maintainer of Scientific Linux, and others.
We also had the chance to visit the ATLAS experiment; it is 100 meters below the surface and huge. That is where the accelerated particles collide, and it was great to have the chance to visit it.
The trip was a great experience and very motivating for me, and I think it should be for all of us doing ownCloud. Frank really hit a nerve when he seeded the idea, and we have all made a nice product out of it so far.
Let's do more of this cool stuff!
PowerVR SGX code leaked.
I've been fed links from several sides now, and I cannot believe how short-sighted and irresponsible people are, including a few people who should know better.
STOP TELLING PEOPLE TO LOOK AT PROPRIETARY CODE.
Having gotten that out of the way, I am writing this blog post to put everyone straight and stop the nonsense, and to calmly explain why this leak is not a good thing.
Before I go any further: IANAL, but I clearly do seem to tread much more carefully on these issues than most. As always, feel free to debunk what I write here in the comments, especially you actual lawyers, and especially those lawyers in the EU.
LIBV and the PVR.
Let me just, once again, state my position towards the PowerVR. I have worked on the Nokia N9, primarily on the SGX kernel side (which is of course GPLed), but I also touched both the microcode and the userspace. So I have seen the code, worked with it, and am very much burned on it. Unless IMG itself gives me permission to do so, I am not allowed to contribute to any open source driver for the PowerVR. I personally also include the RGX, and not just the SGX, in that list, as I believe that some things do remain the same. The same is true for Rob Clark, who worked with the PowerVR when at Texas Instruments.
This is, however, not why I try to keep people from REing the PowerVR.
The reason why I tell people to stay away is the design of the PowerVR and its driver stack: the PVR is heavily microcode-driven, and this microcode is loaded through the kernel from userspace. The microcode communicates directly with the kernel through some shared structs, which change depending on build options. There are sometimes extensive changes to the microcode, kernel and userspace code depending on the revision of the SGX, the customer project and the build options, and sometimes the whole stack is affected, from microcode to userspace. This makes the PowerVR a very unstable platform: change one component, and the whole house of cards comes tumbling down. A nightmare for system integrators, but also bad news for people looking to provide a free driver for this platform. As if the murderous release cycle of mobile hardware wasn't bad enough of a moving target already.
The logic behind my attempts to keep people away from REing the PowerVR is, on the one hand, to focus the available decent developers on more rewarding GPUs and to keep people from burning out on something as shaky as the PowerVR. On the other hand, by getting everyone working on the other GPUs, we are slowly forcing the whole market open, singling out Imagination Technologies. At some point, IMG will be forced either to do this work itself, to directly support open sourcing themselves, or to remain the black sheep forever.
None of the above means that I am against an open source driver for the PVR, quite the opposite; I just find it more productive to work on the other GPUs and wait this one out.
Given their bad reputation with system integrators, their shaky driver/microcode design, and the fact that they are in cut-throat competition with ARM, Imagination Technologies actually has the most to gain from an open source driver. It would at least take some of the pain out of that shaky microcode/kernel/userspace combination, and make a lot of people's lives a lot easier.
This is not open source software.
Just because someone leaked this code, it has not magically become free software. It is still just as proprietary as before. You cannot use this code in any open source project, or at all; the license on it applies just as strongly as before. If you download it, distribute it, or take whatever other actions are forbidden in the license, you are just as accountable as the other parties in the chain.
So for all you kiddies who now think "Great, finally an open driver for PowerVR, let's go hack our way into celebrity", you couldn't be more wrong. At best, you just tainted yourself.
But the repercussions go further than that. The simple fact that this code has been leaked has cast a very dark shadow on any future open source project that might involve the PowerVR. So be glad that we have been pretty good at dissuading people from wasting their time on the PowerVR, and that this leak didn't end up spoiling many man-years of work.
Why? Well, let's say that there was an advanced and active PowerVR reverse engineering project. Naturally, the contributors would not be able to look at the leaked code. But it goes further than that. Say that you are the project maintainer of such a reverse-engineered driver: how do you deal with patches that come in from now on? Are you sure that they are not taken more or less directly from the leaked driver? How do you prove this?
Your fun project just turned from a relatively straightforward REing project to a project where patches absolutely need to be signed-off, and where you need to establish some severe trust into your contributors. That's going to slow you down massively.
But even if you can manage to keep your code clean, the stigma will remain. Even if lawyers do not get involved, you will spend a lot of time preparing yourself for such an eventuality. Not a fun position to be in.
The manpower issue.
I know that any clued and motivated individual can achieve anything. I also know that really clued people, who are dedicated and can work in a structured way, are extremely rare, and that their time is unbelievably valuable. With the exception of Rob, who is allowed to spend some of his Red Hat time on the freedreno driver, none of the people working on the open ARM GPU drivers have any support. Working on such a long-haul project without support either limits the amount of time available for it, or severely reduces the living standard of the person doing so, or anything between those extremes. If you then factor in that there are only a handful of people working on a handful of drivers, you get individuals spending several man-years mostly on their own, for themselves.
If you are wondering why ARM GPU drivers are not moving faster, then this is why. There are just a limited few clued individuals who are doing this, and they are on their own, and they have been at it for years by now. Think of that the next time you want to ask "Is it done yet?".
This is why I tried to keep people from REing the PowerVR: what little talent and stamina there is can be better put to use on more straightforward GPUs. We have a hard enough time as it is already.
Less work? More work!
If you think that this leaked driver takes away much of the hard work of reverse engineering and makes writing an open source driver easy, you couldn't be more wrong. This leak means that there is no other option left apart from doing a full clean room. And there need to be very visible and fully transparent processes in place in a case like this. Your one-man memory dumper/bit-poker/driver writer just became at least two persons. One of them gets to spend his time ogling bad code (which proprietary code usually ends up being), trying to make sense of it, and then trying to write extensive documentation about it (without being able to test his findings much). The other gets to write code from that documentation, but little more. Both sides are very much forbidden to go back and forth between those two positions.
As if we ARM GPU driver developers didn't have enough frustration to deal with, and the PVR stack isn't bad enough already, the whole situation just got much much worse.
So for all those who think that the floodgates are now open for the PowerVR: don't hold your breath. And to those who now suddenly want to create an open source driver for the PowerVR, I ask: you and what army?
For all those who are rinsing out their shoes: ask yourself how many unsupported man-years you will honestly be able to dedicate to this, and whether there will be enough individuals who can honestly claim the same. Then pick your boring task, and stick to it. Forever. And hope that the others also stick to their side of this bargain.
LOL, http://goo.gl/kbBEPX
What have we come to? The leaked source code of a proprietary graphics driver is not something you should be spreading amongst your friends for "lolz", especially not amongst your open source graphics driver developing friends.
I personally am not too bothered about the actual content of this one; the link names were clear about what it was, and I had seen it before. I was burned before, so I quickly delved in to verify that this was indeed SGX userspace. In some cases, with the links being posted publicly, I then quickly moved on to dissuade people from looking at it, for what limited success that could have had.
But what would I have done if this were Mali code, and the content was not clear from the link name? I got lucky here.
I am horrified about the lack of responsibility of a lot of people. These are not some cat pictures, or some nude celebrities. This is code that forbids people from writing graphics drivers.
But even if you haven't looked at this code yet, most of the damage has been done. A reverse-engineered driver for the PowerVR SGX will now probably never happen. Heck, I just got told that someone even went and posted the links to the PowerVR REing mailing list (which luckily has never seen much traffic). I wonder how that went:
Hi,
Are you the guys doing the open source driver for PowerVR SGX?
I have some proprietary code here that could help you speed things along.
Good luck!
So for the person who put this up on GitHub: thank you so much. I hope that you at least didn't use your real name. I cannot imagine that any employer would want to hire anyone who acts this irresponsibly. Your inability to read licenses means that you cannot be trusted with either proprietary code or open source code, as you seem unable to distinguish between them. Well done.
The real culprit is of course LG, for crazily sticking the GPL on this. But just because one party "accidentally" sticks a GPL on it, that doesn't make it GPL, and that doesn't suddenly give you the right to repeat the mistake.
Last month's ISA release.
And now for something slightly different... Just over a month ago, there was the announcement of Imagination Technologies' new SDK. Supposedly, at least according to the Phoronix article, Imagination Technologies made the ISA (instruction set architecture) of the RGX available in it.
This was not true.
What was released was the assembly language for the PowerVR shaders, which then needs to be assembled by the IMG RGX assembler to produce the actual shader binaries. This is definitely not the ISA, and I do not know whether it was Alexandru Voica (an Imagination marketing guy who suddenly became active on the Phoronix forums, and who I believe to be the originator of this story) or the author of the article on Phoronix who made this error. I do not think that this was bad intent though, just that something got lost in translation.
The release of the assembly language is very nice though. It makes it relatively straightforward to match the assembly to the machine code, and takes away most of the pain of ISA REing.
Despite the botched message, this was a big step forward for ARM GPU makers; Imagination delivered what its customers need (in this case, the ability to manually tune some shaders), and in the process it also made it easier for potential REers to create an open source driver.
Looking forward.
Between the leak, the assembly release, and the market position Imagination Technologies is in, things are looking up though. Whereas the leak made a credible open source reverse engineering project horribly impractical and very unlikely, it did remove some of the incentive for IMG not to support an open source project themselves. I doubt that IMG will now try to bullshit us with the inane patent excuse. The (not too credible) potential damage has already been done here.
With the assembly language release, a lot of the inner workings and the optimization of the RGX shaders was also made public. So there too the barrier has disappeared.
Given the structure of the IMG graphics driver stack, system integrators have a limited level of satisfaction with IMG. I really doubt that this has improved too much since my Nokia days. Going open source now, by actively supporting some clued open source developers and by providing extensive NDA-free documentation, should not pose much of a legal or political challenge anymore, and could massively improve the perception of Imagination Technologies, and their hardware.
So go for it, IMG. No-one else is going to do this for you, and you can only gain from it!
Speeding up openSUSE 13.2 boot
Investigating, I found out that in 13.2 the displaymanager.service is now a proper systemd service with all the correct dependencies instead of the old 13.1 xdm init script.
At home, I'm running NIS and autofs for a few NFS shares and an NTP server for the correct time.
The new displaymanager.service waits for time setting, the user account service and remote file systems, which takes a lot of time.
So I did:
systemctl disable ypbind.service autofs.service ntpd.service
In order to use them anyway, I created a short NetworkManager dispatcher script which starts / stops the services "manually" when an interface goes up or down.
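The script itself is nothing fancy; a minimal sketch (the file name is illustrative, the services are the ones disabled above), dropped into /etc/NetworkManager/dispatcher.d/ and made executable, looks like this:
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-local-services
# NetworkManager calls this with the interface name as $1 and the action as $2
case "$2" in
    up)
        systemctl start ypbind.service autofs.service ntpd.service
        ;;
    down)
        systemctl stop ypbind.service autofs.service ntpd.service
        ;;
esac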
This brings the startup time (until the lightdm login screen appears) down to less than 11 seconds.
The next thing I found was that the machine would not shut down if an NFS mount was active. This was due to the fact that the interfaces were already shut down before the autofs service was stopped or (later) the NFS mounts were unmounted.
It is totally possible that this is caused by the violation of proper ordering that I introduced with the above-mentioned hack, but I did not want to go back to slow booting. So I added another hack:
- create a small script /etc/init.d/before-halt.local which just does umount -a -t nfs -l (a lazy unmount)
- create a systemd service file /etc/systemd/system/before-halt-local.service which is basically copied from halt-local.service, then edited to have Before=shutdown.target instead of After=shutdown.target and to refer to the newly created before-halt.local script (see the sketch below). Of course I could have skipped the script, but I might later need to add other stuff, so this is more convenient.
- create the directory /etc/systemd/system/shutdown.target.wants and symlink ../before-halt-local.service into it.
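For illustration, the resulting unit might look roughly like this; this is a sketch derived from the steps above, and the stock halt-local.service on your system may carry a few more directives worth keeping:
# /etc/systemd/system/before-halt-local.service
[Unit]
Description=Run /etc/init.d/before-halt.local before shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/etc/init.d/before-halt.local

# enabled manually via the symlink:
#   ln -s ../before-halt-local.service /etc/systemd/system/shutdown.target.wants/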
