
Andrés G. Aragoneses

Modernizing blam's autotools (or shaving the yak to move out from GoogleReader...)

Before focusing my spare time completely on the GSoC* (as I have mentoring responsibilities this year \o/ ), I wanted to solve a problem that cannot wait past July...

Yes, I've been a victim of Google's cuts too... And I was wondering: where should I move? Feedly? ThingyBob? Well, I shouldn't make the same mistake twice, right?

Actually, some time ago I used a desktop app to avoid relying on software that I cannot control (yes, vendor lock-in, the most important thing that open source tries to solve, right?): Thunderbird. But somehow the convenience of a web app (which I can access from any computer) and the hassle of using my mail client for RSS reading made me move to the web.

I should be able to find a replacement that no company or individual can "take down", and which feels less clunky than Thunderbird for reading RSS. So enter blam (in the future I'll figure out how to sync its state between computers, maybe using SparkleShare?, to achieve the same convenience that a web app provides), that GNOME app that has strangely managed not to catch my eye until now...

Well, maybe because when I installed it from Debian sid and tried to import the very first RSS feed from my GoogleReader list, it didn't work? Apparently it is a bug that is already fixed upstream, thanks to Carlos, who modernized the way the program deals with XML and serialization.

Then I went ahead and tried to compile master myself... and guess what, the autogen.sh execution fails. Here the yak shaving begins; trying to fix the autotools stuff always leaves me feeling hopelessly out of my depth.


Fortunately, after some tinkering (and some copy&paste from Banshee's build scripts), I managed to fix the problem, and also modernized a few things (like using the ".ac" extension instead of ".in" for the configure script, or properly using the AC_INIT and AM_INIT_AUTOMAKE macros, ...).
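To give an idea, the top of a modernized configure.ac would look roughly like this; the version number and options here are placeholders for illustration, not blam's actual values:

AC_INIT([blam], [x.y.z])
AM_INIT_AUTOMAKE([foreign tar-ustar])
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT

The same content used to live in configure.in; renaming it to configure.ac is what autoconf has recommended for years, and autogen.sh/autoreconf picks it up automatically.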

Anyway, the real thing to highlight here is that while I was fixing this stuff and pushing to the repository...


... I saw some really good stuff committed by Carlos: using the new .NET 4.5 C# async patterns to get rid of those ugly callbacks! Kudos to him.

And if you're willing to help more with our autotools housekeeping, please do; I still feel this autogen.sh is way too long and needs some ironing.

* And if you're wondering what's up with GSoC (aka Google Summer of Code):

  • I had Nicholas Little lined up to work on Rygel+Banshee integration, but sadly he couldn't apply due to work commitments (hopefully he will still work with me on it in his spare time).
  • I had Rashid Khan lined up to work on Cydin+Banshee integration, but sadly there were not enough GSoC spots for him :( (fortunately he told me he still wanted to work on it with me in his spare time).
  • I had Tomasz Maczyński lined up to work on Banshee integration with more REST APIs, and fortunately he was selected! So expect some nice FanArt.TV and SongKick plugins soon!



Upcoming changes to openSUSE KDE repositories

Since KDE has released the first beta of Platform, Workspaces, and Applications 4.11, there will be some changes in the packages offered in the openSUSE repositories.

In short:

  • KDE:Distro:Factory will now start tracking 4.11 betas and RCs: packages are being worked on. Use this version to test packages and to report bugs upstream.
  • KDE:Release:410 has been decoupled from KDE:Distro:Factory. If you were using 4.10 packages from KDF, you’re highly encouraged to move to this repository (example commands after this list).
  • KDE:Unstable:SC will keep on carrying snapshots from KDE git repositories.
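If you were on KDF for 4.10 and want to follow the second point above, the repository switch should look roughly like this (the URL follows the usual OBS download pattern for openSUSE 12.3; adjust it to your distribution version):

zypper ar -f http://download.opensuse.org/repositories/KDE:/Release:/410/openSUSE_12.3/ KDE-Release-410
zypper dup --from KDE-Release-410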

If you test the 4.11 packages, report bugs in the packaging (or openSUSE-specific functionality) to Novell’s bugzilla, and bugs in the software to bugs.kde.org. Also, please use the dedicated area on the KDE Community Forums to discuss issues.

Let the testing commence!

Andrew Wafaa

Changed Blogging System Again

It’s been almost 18 months since I last made any change to this site, and I’ve been meaning to do it for a while. In all honesty, it’s been a while since I last blogged, so I thought it was as good a time as any. Thanks to the likes of Twitter and Google+, and my corporate blog, I’m finding myself blogging less and less. As my blog is getting less content, is it really worth running an SQL server etc.?


Installing the Realtek RTL8723AE driver on openSUSE 12.3

This was a problem I didn't think I would face while getting a new laptop. I got the Toshiba C580 laptop, which has quite decent specifications. It is a nifty machine, and openSUSE works more or less flawlessly. The only issue I faced was with the wireless card. It is a Realtek device, which is usually well supported, but this particular model, the RTL8723AE, is yet to be fully supported by the kernel. I had to dig around a lot to get this to work, but I received a lot of help from the openSUSE forums and managed to get it working. Thanks to lwfinger for writing the patch to get it to work properly.

1.) Use YaST to install the Kernel Development, C/C++ development and Base Development patterns

2.) Download the compat-wireless package from
http://linuxwireless.org/download/compat-wireless-2.6/compat-wireless-2012-10-03.tar.bz2

3.) Download the patch from
http://www.lwfinger.com/realtek_drivers/rtl8723ae_master_patch

4.) Run the following commands

tar jxvf compat-wireless-2012-10-03.tar.bz2
cd compat-wireless-2012-10-03/
patch -p1 < ../rtl8723ae_master_patch
make
sudo make install

5.) Check if the driver is working with
sudo modprobe -v rtl8723ae

This should get the driver working properly.
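If modprobe returns without errors and you want to double-check, the kernel log and the interface list are good places to look (generic commands, not part of the original forum instructions):

dmesg | grep -i rtl8723
ip link show

The new wireless interface (typically wlan0) should show up in the list.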

Source: http://forums.opensuse.org/english/get-technical-help-here/wireless/477285-rtl8723ae-realtek-wirless-driver-hell-3.html


Organizing oSC13 - 38 days before (a conference without history is a tree without roots)

Today I feel like talking about history: the history of all the openSUSE conferences in the past. If I did my research right, the first openSUSE Conference took place on 20-23 October 2010.


Here are some of the graphics that were used at that time. I really find them interesting, since I like history and I believe that a lot of lessons can be taught by it. I don't really know much about oSC10, since I was really new to the openSUSE project back then, and I hesitated about going. Looking back now, I think I made a mistake by not going. Of course, I found this out when I joined the second conference, which also took place in Nuremberg, from 11 to 14 September 2011. It was a lifetime experience. If you have ever joined an openSUSE conference, I am sure you remember your first time there. It was amazing: since I was a volunteer, I went to Nuremberg one day before the conference, and there I found Allan Clark making pins, Juergen Weigert setting up the lights, and a bunch of other top coders carrying stuff around in order to set up the venue. No rock stars there, just people devoted to openSUSE and to making the conference work, for all.

Then came oSC 2012. That one was different for me. The Greeks were very close to taking oSC12, but when Prague also claimed it, we took a step back, since we understood that we were not 100% ready for it. We needed more people from the Greek community to get practically involved, so that the next year we would have even more experience and feedback. You see, sometimes stepping back is better.
We joined oSC12 more actively than oSC11, tried to do the most we could, and learned a lot there. The idea, though, was the same: people hanging around and talking to each other. There was no need to know someone from before or to already be friends. After all, part of the point of a conference is to have people meet and talk face to face. I have to say that I made actual friends at all the oSCs, people whose news I am happy to hear beyond openSUSE matters. People from all over the planet, and that is something you can't do sitting in front of a keyboard.



Then we finally got oSC13 to Greece. Not many things stayed as originally planned, but that is only a good thing. We are preparing a conference that, for the first time, is organized exclusively by the community, and it is meant to be friendly to everybody and to produce a lot of work for the project. If you have any hesitation about coming to the conference because you don't know people, or you fear that you will be a stranger among strangers, forget it. Just track down someone you like from IRC or from wherever, and go say hello; there is a big possibility that a long-time friendship will start with that hello. Any problem you might have will be solved. Ask anyone who has ever joined an oSC. Come to share thoughts and learn from people. Come as you are, and don't hesitate to show your personality. There is enough room for everyone here.
Hope to see you all there.


KDE Platform, Workspaces, and Applications 4.10.4 for openSUSE

These posts kind of sound like a broken record, right? ;) Anyway, since KDE has released new versions of Platform, Workspaces and Applications as part of the stable release cycle, thanks to the OBS we have packages available for openSUSE 12.2 and 12.3. The 4.10.4 update will also be released as an official update for 12.3 in due time.

Where can you get the packages? Two places, as usual:

  • KDE:Distro:Factory in case you are interested in contributing to packaging for the next openSUSE release;
  • KDE:Release:410 (openSUSE 12.3 or openSUSE 12.2) in case you just want to upgrade to the latest and greatest version

What to look forward to in this release? More than 50 bugs fixed, including:

  • CSS compliance fixes in KHTML
  • Bug fixes in Gwenview (display after image rotation, duplicate entries in recent folders)
  • Assorted fixes in KMail: polishing of external editor support, CalDAV fixes, UI adjustments…

For more you can always turn to the [full list of fixed bugs](https://bugs.kde.org/buglist.cgi?query_format=advanced&bug_status=RESOLVED&bug_status=VERIFIED&bug_status=CLOSED&bugidtype=include&chfieldfrom=2013-01-01&chfieldto=Now&chfield=cf_versionfixedin&chfieldvalue=4.10.4&order=Bug Number&list_id=675254).

As with any good broken record, some more repetition: report bugs in packaging to Novell’s Bugzilla, and bugs in the software directly to KDE.

Have fun with 4.10.4!


YaST is being rewritten in Ruby; Geeko gets a nosejob


For those not intimately familiar with +SUSE and +openSUSE, it bears mentioning what YaST actually is. YaST is our administrative control panel, composed of numerous modules for Software Management, User Management, Partitioning, and a variety of other tasks. It has interfaces implemented with +GTK, +Qt Project, and a command line interface. The command line interface is particularly nice in case you are running a server without a graphical environment, or if for some reason your graphical environment is not working. YaST even powers our very advanced graphical installer, providing power and stability during the install process that I haven't seen any other distribution able to replicate. WebYaST brings the power of YaST to remote administration, allowing you to administer your machines from a comfortable web-based graphical interface.

For a couple of years now I've been hearing rumors about YaST being switched to +Ruby from the proprietary YCP language. However, until recently I hadn't stumbled across any substantiating evidence. The fact of the matter now, though, is that it is happening, and the next openSUSE release may even use the new Ruby-based YaST.

First though, why bother? After all, it does work, and quite well for that matter. There are numerous reasons why this transition is being made. Firstly, YCP is a language developed explicitly for YaST development, and thus the only people who know it are YaST developers. This cuts out many people who would otherwise be able to contribute to its continued evolution and maintenance. But why Ruby? Other similar (and, in my opinion, inferior) tools are usually written in +Python. Largely this is due to the simple fact that SUSE has many proficient Ruby developers. But Ruby in its own right is an excellent choice, due to its simplicity, flexibility, and the rapid development it enables. It also bears mentioning that WebYaST is based on Ruby, so this would enable tighter integration and remedy duplicated effort by letting the two implementations share more code.

The new Ruby implementation is being worked on by SUSE developers in Prague. It appears they are using a code translation scheme as the starting point, similar to what +Xamarin used when they rewrote the +Android OS to use Mono. The new code has already been used to successfully install and administer an experimental build of openSUSE, and the developers feel confident about having it ready for integration by Milestone 4 of our next openSUSE release, 13.1.

Personally, I think this is an excellent move, as it will allow more rapid development and innovation around YaST. It will also make YaST more accessible to other projects that might be interested in using or adapting parts of it for their own purposes. However, it should come as no surprise that, if it does make it into openSUSE 13.1, it may introduce some new bugs that could prove a pain during installation or for new users. Nonetheless, I feel that this is certainly the right direction, one that points us towards a promising future of innovation with YaST.


Old and new limare code, and management overhead...

I just pushed updated limare code and a fix to ioquake3.

In almost 160 patches, loads of things changed:

  • clean FOSDEM code supporting the Q3A timedemo on a limare ioquake3.

  • support for r3p2 kernel and binary userspace as found on the odroid-x series.

  • multiple PP support, allowing the full power of the Mali-400MP4 to be used.

  • fully threaded job handling, so new frames can be set up while the previous one is getting rendered.

  • multiple textures, in RGB888, RGBA8888 and RGB565, with mipmapping.

  • multiple programs.

  • attribute and elements buffer support.

  • loads of GL state is now also handled limare-style.

  • memory-access-optimized scan pattern (Hilbert) for the PP (fragment shader).

  • direct MBS (Mali binary shader) loading for pre-compiled shaders (and OGT shaders!!!).

  • support for UMP (ARM's in-kernel external memory handler).

  • Properly centered companion cube (now it is finally spinning in place :))

  • X11 EGL support for tests.

  • ...

Some of this code was already published to allow the immediate use of the OGT enabled ioquake3. But that branch is now going to be removed, as the new code replaces it fully.

As for performance, this is no better or worse than the FOSDEM code: 47fps in the timedemo on the Allwinner A10 at 1024x600. But now, on the Exynos 4, there are some new numbers... With the CPU clocked to 2GHz and the Mali clocked to 800MHz (!!!) we hit 145fps at 720p and 127fps at 1080p. But more on that a bit further down in this post.

Upcoming: Userspace memory management.


Shortly after FOSDEM, I blogged about the 2% performance advantage over the binary driver when running Q3A.

As you might remember, we are using ARM's kernel driver, and despite all the pain this causes us due to shifting IOCTL numbers (whoever at ARM decided that IOCTL numbers should be defined as enums should be laid off immediately), I still think this is a useful strategy. It allows us to immediately throw in the binary driver, compare lima to the binary, and either help hard reverse engineering or just make performance comparisons. Rewriting this kernel driver, or turning it into a fully fledged DRM driver, is currently more than just a waste of time; it is actually counterproductive right now.

But now, while bringing up a basic mesa driver, it became clear that I needed some form of memory management. Usually the DRM driver handles all of that (even for small allocations, I think; not that I have checked). We do not have a DRM driver, and I do not intend to write one in the very near future either; all I have is the big block mapping that the Mali kernel driver offers (which is not bad in itself).

So, on the train on the way back from LinuxTag this year, I wrote up a small binary allocator to divide up the 2GB of address space that the Mali MMU gives us. On top of that, I now have two types of memory, sequential and persistent (next to UMP and external, for mapping the destination buffer into Mali memory), and limare can now allocate and map blocks of either at will.

The sequential memory is meant for per-frame data, holding things like draws and varyings, stuff that gets thrown away after the frame has been rendered. It simply tracks the amount of memory used, adds the newly requested memory at the end, and returns an address and a pointer. No per-allocation tracking whatsoever. Very lightweight.

The persistent memory is the standard linked-list type, with the overhead that entails. But this is OK, as this memory is meant for shaders, textures, and attribute and element buffers. You do not create these _every_ draw, and you tend to reuse them, so it's acceptable if their management is a bit less optimized.
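To make that concrete, here is a minimal C sketch of the two schemes as described above; the names, the struct layout and the 64-byte alignment are my own illustration, not limare's actual code:

#include <stddef.h>

struct seq_pool {
	unsigned int mali_base;	/* base address in Mali address space */
	void *cpu_map;		/* CPU-side mapping of the same block */
	size_t size;		/* total pool size */
	size_t used;		/* bump pointer */
};

/* Sequential: bump allocation for per-frame data. Just append, and
 * hand back both the Mali address and the CPU pointer. */
static int seq_alloc(struct seq_pool *pool, size_t size,
		     unsigned int *mali_addr, void **cpu_addr)
{
	size = (size + 0x3F) & ~(size_t)0x3F; /* round up to 64 bytes */

	if (pool->used + size > pool->size)
		return -1; /* per-frame pool exhausted */

	*mali_addr = pool->mali_base + pool->used;
	*cpu_addr = (unsigned char *)pool->cpu_map + pool->used;
	pool->used += size;

	return 0;
}

/* Once the frame has been rendered, everything is thrown away at once. */
static void seq_reset(struct seq_pool *pool)
{
	pool->used = 0;
}

/* Persistent: one list node per allocation, for longer-lived objects
 * like shaders, textures and attribute/element buffers. */
struct persist_block {
	struct persist_block *next;
	unsigned int mali_addr;
	void *cpu_map;
	size_t size;
};

The sequential pool costs one comparison and one addition per allocation, and frees a whole frame with a single reset, which is exactly why it needs no per-allocation bookkeeping.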

Normally, more management makes things worse, but this memory tracking allowed me to sanitize away some frame-specific state tracking. Suddenly, Q3A at 720p, which originally ran at 145fps on the Exynos, ran at 176fps. A full 21% faster. Quite some difference.

I now have a board with a Samsung Exynos 4412 Prime. This device has its quad A9s clocked at 1.7GHz, 2GB of LP-DDR2 memory at 880MHz, and a quad-PP Mali-400MP4 at 440MHz. It is quite the powerhouse compared to the 1GHz single A8 and single-PP Mali-400 at 320MHz. On top of that, this particular Exynos chip actually clocks the A9s to 2GHz and the Mali to a whopping 800MHz (81% above the base clock). Simply insane.

The trouble with the Exynos device, though, is that there are only X11 binaries. This involves a copy of the rendered buffer to the framebuffer, which totally kills performance, so I cannot properly compare these X11 binaries with my limare code. So I took my new memory management code to the A10 again, and at 1024x600 it ran the timedemo at 49.5fps: about a 6% margin over the binary framebuffer driver, or triple my 2% lead at FOSDEM. Not too bad for increased management, right?

Anyway, with the overclocking headroom of the exynos, it was time for a proper round of benchmarking with limare on exynos.

Benchmark, with a pretty picture!


Limare Q3A benchmark results on exynos4412

The above picture, which I quickly threw together manually, maps it out nicely.

Remember, this is an Exynos 4412 Prime, with four A9s clocked from 1.7 to 2.0GHz, 2GB of LP-DDR2 at 880MHz, and a Mali-400MP4 that clocks from 440MHz to an insane 800MHz. The test is the Quake 3 Arena timedemo, running on top of limare. Quake 3 Arena is single-threaded, so apart from the limare job handling, the other three A9 cores simply sit idle. It is sadly the only good test I have; if someone wants to finish the work of porting Doom 3 to GLES, I am sure that many people will really appreciate it.

At 720p, we are fully CPU-limited. At some points in the timedemo (as not all scenes put the same load on the CPU and/or GPU), the higher Mali clock makes us slightly faster if the CPU can keep up, but this levels out slightly above 533MHz. Everything else simply scales linearly with the CPU clock: every change in CPU clock translates into about 80% of that change in framerate. We end up hitting 176.4fps.

At 1080p, it is a different story. 1080p is 2.25 times the screen real estate of 720p (if that number rings a bell: 2.25MB equals two banks of Tseng ET6x00 MDRAM :p), so 2.25 times the number of pixels need to be pushed out. Here the CPU is clearly not the limiting factor. Scaling linearly from the original 91fps at 440MHz is a bit pointless, as the Q3A benchmark does not stress the CPU and GPU equally over the whole run. I have drawn the continuation of the 440-533MHz increase, which would lead to 150fps, but instead we run into 135.1fps. I think we might be stressing the memory subsystem too much. At 135fps, we are pushing over 1GBps out to the framebuffer, and this while the display is refreshing at 60fps, so reading in half a gigabyte. And all of this before doing a single texture lookup (of which we have loads).
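To spell out those back-of-the-envelope numbers (assuming a 32-bit-per-pixel framebuffer, which is my assumption here, not something stated above):

1920 x 1080 / (1280 x 720) = 2.25
1920 x 1080 x 4 bytes = ~8.3MB per frame
~8.3MB x 135fps = ~1.12GB/s written out to the framebuffer
~8.3MB x 60Hz = ~0.50GB/s read back in for scanout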

It is interesting to see the CPU become measurably relevant towards 800MHz. There must be a few frames where the GPU load is such that the faster CPU makes a distinguishable difference. Maybe there is more going on than just memory overload... Maybe in the future I will get bored enough to properly implement the Mali profiling support of the kernel, so that we can get some actual GP and PP usage information, and not just the time we spend waiting for the kernel job to return.

ARM Management and the Lima driver


I have recently learned, from a very reliable source, that ARM management seriously dislikes the Lima driver project.

To put it nicely, they see no advantage in an open source driver for the Mali, and believe that the Lima driver is already revealing way too much of the internals of the Mali hardware. Plus, their stance is that if they really wanted an open source driver, they could simply open up their own codebase, and be done.

Really?

We can debate endlessly about not seeing an advantage in an open source driver for the Mali. In the end, ARM's direct customers will decide that one. I believe that there is already 'a slight bit of' traction for the general concept of open source software; I actually think that a large part of ARM's high-margin products depends on that concept right now, and this situation is not going to get any better with ARMv8. Silicon vendors and device makers are also becoming more and more aware of the pain of having to deal with badly integrated code and binary blobs. As Lima becomes more complete, ARM's customers will more and more demand support for the Lima driver from ARM, and ARM gets to repeat that mantra: "We simply do not see the advantage"...

About revealing the internals of the Mali: why would this be an issue? Or, let me rephrase that: what is ARM afraid of?

If they are afraid of IP issues, then the damage was done the second the Mali was poured into silicon and sold. Then the simple fact that ARM is that apprehensive should get IP trolls' mouths watering. Hey IP Trolls! ARM management believes that there are IP issues with the Mali! Here is the rainbow! Start searching for your pot of gold now!

Maybe they are afraid that what is being revealed by the Lima driver is going to help the competition. If that is the case, then it shows that ARM today has very little confidence in the strength of their Mali product or in their own market position. And even if Nvidia or Qualcomm could learn something today, they will only be able to make use of that two years or even further down the line. How exactly is that going to hurt the Mali in the market it is in, where 2 years is an eternity?

If ARM really believes in their Mali product, both in the Mali's competitiveness and in the originality of its implementation, then they have no tangible reason to be afraid of revealing anything about its internals.

Then there is the view that ARM could just open source their own driver. Perhaps they could; it really could be that they have had very strict agreements with their partners, and that ARM is free to do what they want with the current Mali codebases. I personally think it is rather unlikely that everything is as watertight as ARM management imagines. And even then, given that they are afraid of IP issues: how certain are ARM's lawyers that nothing contentious slipped into the code over the years? How long will it take ARM's legal department to fully review this code and assess that risk?

The only really feasible solution tends to be a freshly written driver, with a full development history available publicly. And if ARM wants to occupy their legal department, then they could try to match Intel (AMD started so well, but ATI threw in the towel so quickly; luckily the AMD GPGPU guys continued part of it) and provide the Technical Reference Manual and other documents for the Mali. That would be much more productive, especially as it will already be more legal overhead than ARM management would be willing to spare, when they do finally end up seeing the light.

So. ARM management hates us. But guess what: apart from telling us to change our name (there was apparently the "fear" of a trademark issue with us using Remali, so we ended up calling it Lima instead), there was nothing that they could do to stop us a year and a half ago. And there is even less that ARM can do to stop us today :)

A full 6.0%...


You can't win an argument against conspiracy theorists

Whatever you say, or don't say, will be taken as proof. Whether something is done or not done, both cases are proof of the conspiracy.

An example from the open source world: mentioning a competitor project from which some of your code comes will be seen as proof that you unfairly want their fabulous reputation to rub off on your project. On the other hand, not mentioning it will be seen as proof that you don't want people to know where your good stuff comes from.

Agustin Benito Bethencourt

Insights about the new openSUSE Team at SUSE blog

Yesterday we created a new blog [1] to communicate to our community and readers what we do as a team within the openSUSE project. It is a collective effort, and it will be part of our duties to create and publish content for it.

Its main goal is to talk about what we do so that, in the long run, it becomes a reference channel for following the activity of the group of people that SUSE employs to work full time on openSUSE.

One of the discussion topics we faced was how to make this blog, which has a corporate nature, compatible with our personal blogs, our work in the community, and the motivations of our team members. It is a very interesting topic, since in practice it is not trivial to separate the information you handle as an employee from your work within a community, in a case like ours: employees who work for a community.

The easy approach is to say that, by default, everything we do is public and treated as "community" work, or the opposite, but there are many corner cases, some of them relevant, that are not covered by these general approaches. Usually this topic gets no attention until something happens, usually something bad. We need to understand that, as employees, we hold a responsibility that, if not properly managed, can potentially harm our company. It can also hurt us as professionals or affect our community. Often you are not perceived as a regular community member only, but as a company advocate/representative too.

As a manager, I have to reconcile the community's interest, the personal interests of the people under my responsibility, and the company's interest. In a company like SUSE, especially when talking about openSUSE, conflicts in this regard are small. We are a very open company in general, and in my area in particular. I am glad to be part of a company that understands Free Software.

But in certain situations, nobody is free of risk. I even face these limitations as the KDE Treasurer. Being open is one thing; publishing all the information is another. Obvious in theory, but... where are the limits in practice? Who should take care of ensuring those limits are respected? What measures should be taken to satisfy all parties' interests? When managing information, how do you stay fair to your community, yourself, and your company at the same time?

It is impossible to define clearly how to react in every case, what should be kept private and what shouldn't, or how to deal with the information you get as an employee but that should be published as part of your everyday activity within the community...

What is possible, though, is to create a clear field in which the people involved can move as freely as possible, making sure that some processes are in place to avoid harmful mistakes. But above all, you have to rely on people, train them, and be close to them, so that they understand the risks and the possible conflicts. Experience usually helps a lot in this field.

Behind this team blog there are some processes that try to answer some of the previous questions and concerns, reducing the risk for the company and the authors while, at the same time, keeping the spirit of our work and goals: being open. I will talk about the concrete measures in a couple of months, when we have some conclusions about their effectiveness.

I hope you will find the blog interesting.

[1] Link to the blog: https://lizards.opensuse.org/author/calumma/