

Technology Catchup

Coincidentally, three different people asked me in the last month to write about the new technologies they should know to make themselves more eligible for a job at a startup. All of them have been C/C++ programmers at big, established companies for about a decade now. Some of them have had only glimpses of modern technologies.

I have tried a little bit (with moderate success) to work in all layers of programming with most of the popular modern technologies, by writing little-more-than-trivial programs (long before I heard the fancy title "full stack developer"). So here I am writing a "technology catchup" post, hoping it may be useful for people who want to know what has happened in technology over the last decade or so.

Disclaimer 1: The opinions expressed here are entirely my own biases. You should work with the individual technologies to know their true merits.

Disclaimer 2: Instead of learning everything, I recommend picking whatever you feel connected to. I, for example, could not feel connected to Node.js even after toying with it for a while, but fell in love with Go. Tastes differ and nothing is inferior, so give everything a good try and make your choice. Also remember what Donald Knuth said: "There is a difference between knowing the name of something and knowing something". So learn deeply.

Disclaimer 3: From what I have observed, getting hired at a startup is more about being in the right circles than being a technology expert. A surprisingly large number of startups start with a familiar technology rather than the right technology, and then switch once the company is established.

Disclaimer 4: This is not a complete list of things one should know. These are just things that I have come across and experimented with at least a little. There are many more interesting things that I may have missed. If you feel something should have been on the list, please comment :-)

With those disclaimers out of the way, let us cut to the chase.


Version Control Systems

The most prominent change in the open source arena in the last decade or so is the invention of Git. It is a version control system initially designed to manage the Linux kernel sources, and it has since become the de-facto VCS for most modern companies and projects.

GitHub is a website that lets people host their open source projects. Startups often recruit people based on their GitHub profile. Even big companies like Microsoft, Google, Facebook, Twitter, Dropbox, etc. have their own GitHub accounts. In the last year, I personally have received more job queries through my GitHub projects than via my LinkedIn profile.

Bitbucket is another site that hosts code and also offers private repos. A lot of the startups that I know of use it, along with the JIRA project management software, which is your equivalent of MS Project in some sense.

I have observed that most startups founded by people from banking or finance companies use Subversion; Git is the choice for people from tech companies, though. Mercurial is another open source, distributed VCS, which has lost much of the limelight in recent times to Git. Fossil is yet another VCS, from the author of SQLite, Dr. Richard Hipp. If you can learn only one VCS for now, start with Git.

Programming Languages & Frameworks

JavaScript has evolved into a leading programming language of the last decade; it is even referred to as the x86 of the web. From its humble beginnings as a client-side scripting language used to validate whether the user typed a number or text, it has grown into a behemoth and entered even server-side programming through Node.js. For incorporating the Model-View-Controller pattern, JavaScript has gained the AngularJS framework. JS is dynamically typed; TypeScript brings in some of the goodness of statically typed languages, while CoffeeScript offers a terser syntax that compiles to JavaScript.

Python is another dynamically typed, interpreted programming language. Personally, I find it a lot more tasteful than JavaScript, and easier on the eyes too. It helps with rapid application development and is available by default on almost all Linux distros and on Macs. Django is a web framework built on Python to make it easy to develop web applications. In addition to being used in a lot of startups, it is used in big companies like Google and Dropbox. There are variants of the Python runtime that let you run it on the JVM (Jython) or on the .NET CLR (IronPython). I have personally found the language to be lacking in performance, though, which is elaborated in a subsequent section.

Ruby is an old programming language that shot to fame in recent years through the popular web application framework Ruby on Rails, often called just Rails. I learnt a lot of engineering philosophies, such as DRY (Don't Repeat Yourself) and Convention over Configuration, while learning RoR.

All the above languages and frameworks use a package manager, such as npm, Bower, pip, or gems, to install libraries easily.

Go is my personal favorite among the new languages to learn. I see Go becoming as vital and prominent a programming language as C, C++, or Java in the next decade. It was developed at Google for creating large-scale systems. It is a statically typed, garbage-collected language that compiles to native machine code and makes writing concurrent code easy.

Go has been my default language for any programming task in the last year or so. It is amazingly fast, even though (or just because?) it is still in the 1.x series. At my day job we did a prototype in both Go and Python, and for a highly concurrent workflow on the same hardware, Go trounced Python in performance (20 seconds vs. 5 minutes). I won't be surprised if a lot of Python and Ruby code gets converted to Go in the next round of rewrites. Personally, I have found the quality of Go libraries to be much higher than those of Ruby or Node.js as well, probably because not everyone has adopted the language yet. However, this could just be my personal bias.

If you like to get fancy with functional programming, you can learn Scala (on the JVM), F# (on .NET), Haskell, Erlang, etc. The last two are very old, by the way, but still in use today; most recently, WhatsApp was known to use Erlang. D is also seen in the news, mostly thanks to Facebook. Dart is another language from Google, but it is yet to receive any wide deployment afaik, even with Google's massive marketing machinery behind it. It has been compared to VBScript, and as of now it runs only in Chrome; Dart has drawn criticism from Mozilla, WebKit (the rendering engine that powers Safari, and earlier Chrome), and Microsoft IE as well. Dart is the work of Lars Bak et al., the people who gave us V8, Chrome's JavaScript engine.

Rust is another programming language aimed at high-performance concurrent systems, but I have not played around with it, as it does not yet maintain a stable API and has not reached 1.0. Julia is another language, aimed at high-performance numerical and distributed computing, about which I have heard a lot of praise, but it still remains an exotic language afaik. R is a language I have seen in a lot of corporate demos where the presenters wanted to show statistics and charts; learning it may be useful even if you are not a programmer but work with numbers (like a project manager).


There is the Swift programming language from Apple for writing iOS apps. I have not tried Swift yet, but from my experience of using Objective-C, it cannot be worse.

Bootstrap is a nice web framework from Twitter which provides various GUI elements that you can incorporate into your application to rapidly prototype beautiful applications that stay fluid even when viewed on mobile.

jQuery is a popular JavaScript library that is ubiquitous. Cascading Style Sheets (CSS for short) is a stylesheet language that configures the styling of web page UI elements; CSS has matured to the point of supporting animations too. You should ideally spend a few weeks learning HTML5 and CSS.

Text Editors

Sublime Text is what the cool kids use as their editor these days. I have found the tutorial on Tuts+ to be extraordinarily good at explaining Sublime. It is free to evaluate but proprietary, not open source.

Atom is a text editor from GitHub built using Node.js and Chromium. I did not find a Linux binary, so I did not bother to investigate it, but I have heard it is better suited to JavaScript programmers than to others, as the editor can be extended with JavaScript itself.

Brackets is another editor that I have heard good things about. Lime is an editor developed in Go, aimed at being an open-source replacement for Sublime Text.

Personally, after trying various text editors, I have always come back to Vim. A few good plugins have appeared for Vim in recent times: Vundle and Pathogen are nice plugin managers that ease the installation of plugins, YouCompleteMe is a nice plugin for auto-completion, and spf13-vim is a nice distro of Vim where various plugins and colorschemes come pre-packaged.

Distributed Computing

In the modern day of computing, most programs are driven by a Service-Oriented Architecture (SOA for short), and web services are the preferred way of communication among servers as well. While we are talking about services, please read this nice piece by Steve Yegge.

memcached is a distributed (across multiple machines) caching system which can be used in front of your database. It was initially developed by Brad Fitzpatrick while he was running LiveJournal; he is now (2014) a member of the Go team at Google. While at Google, he started groupcache, which, as the project page says, is a replacement for memcached in many cases.

GoogleFileSystem (GFS) is a seminal paper on how Google created a filesystem to suit its large data-processing needs. There is a database built on top of this filesystem, named BigTable, which powered Google's infrastructure. Apache Hadoop is an open source implementation of these concepts, originally started at Yahoo and now a top-level Apache project. HDFS is Hadoop's equivalent of GFS. Hive and Pig are technologies to query and analyze data stored in Hadoop.

As with the evolution of any software, GFS has evolved into the Colossus filesystem, and BigTable has evolved into the Spanner distributed database. I recommend reading these papers even if you are not going to do any distributed computing development.

Cassandra is another distributed database, started at Facebook but now used by many companies such as Netflix and Twitter. I have used Cassandra more than any other distributed project and actually like it a lot. It uses an SQL-like query language called CQL (Cassandra Query Language), and it is modelled after the Dynamo paper from Amazon. I am tempted to write an alternative to it in Go, just to get the experience of writing a large-scale distributed system instead of just using one as a client, but I have not yet found a good dataset or use case with which to test it.

MongoDB is a document-oriented database, which I tried using for a pet project of mine. I don't remember exactly, but there were some problems with respect to Unicode handling. The project was done before Go reached 1.0, so the problem could have been on either end.

Most of the new-age databases are called NoSQL databases, but what that really means is that the database skips a lot of features (such as datatype validation, stored procedures, etc.) and tries to grow by scaling out instead of scaling up.

Cloud

OpenStack is a suite of open source projects that help you create a private cloud. DeltaCloud is a project initially started by Red Hat, and now an Apache top-level project, that provides a single API layer which works across any cloud in the backend. The project is written in Ruby. I was initially interested in participating in its development, until I got introduced to Go and went off on a different tangent.

Starting a software company is a very easy task in today's world. Public clouds are becoming cheaper every day, and their capacity can be provisioned instantly.

Amazon Web Services provides an umbrella of public cloud offerings. I have used Amazon EC2, which is a way to create Linux (and Windows) VMs that run in Amazon's datacenters; the machines come in various sizes. Amazon S3 is an offering that provides a way to store data in buckets; it is used heavily by Dropbox for storing all your data. There are various other services too. In some of our prototyping, we found the performance of Amazon EC2 to be mostly consistent, even in the free tier.

Google is not lagging behind with its cloud offerings either. When Google Reader was shut down, I used Google's App Engine to deploy an alternative FOSS product, and I was blown away by the simplicity of creating applications on top of it. Google Compute Engine is the way to get VMs running on the Google Cloud. As with Amazon, there are plenty of other services too.

There are plenty of other players, like Microsoft Azure, Heroku, etc., but I do not have any experience with their offerings. While we are talking about the cloud, you should probably read about orchestration and know at least about ZooKeeper.

In-Process Databases

These are databases which you can embed into your application without needing a dedicated server; they run in your process space.

sqlite is among the world's most widely deployed software, and it competes with fopen to be the default way to store data for your desktop applications (if you are still writing them ;) ). A new branch is also coming with the latest rage in storage data structures, the log-structured merge tree.

leveldb is a database written by the eminent Googlers (and trendsetters of technology in the last decade or so) Jeff Dean and Sanjay Ghemawat, who gave us MapReduce, GFS, etc. Facebook has forked it into RocksDB as well.

KyotoCabinet and LMDB are other projects in this space.

Linux Filesystems

Since we covered GFS, HDFS, etc. earlier, here we will look at other popular filesystems.

btrfs is a copy-on-write filesystem for Linux. It is intended to be the de-facto Linux filesystem in the future, possibly obsoleting the ext series in the longer run.

XFS is a filesystem that originally came to Linux from SGI. It is my personal favorite, and I have been using it on all my Linux machines. In addition to good performance, it offers robustness and comes with a load of features that are useful to me, like defragmentation.

We also have the big daddy of filesystems, ZFS, on Linux too.

Ceph is another interesting distributed filesystem; it works in kernel space and has been merged into the Linux kernel sources for a long time now. GlusterFS is a distributed filesystem that works in userspace. Both of these filesystems focus on scaling out instead of scaling up.

Conclusion

Pick any of these technologies that you like and start writing a toy application with it, maybe as simple as a ToDo application, and learn through all the stages. This approach has helped me; it may help you too.

I have written this post from a Thinkpad T430 running openSUSE Factory and GNOME Shell with a bunch of KDE tools. I like this machine. However, in the past few months I have realized that, in today's world, if you are a developer, it is best to run Linux on your servers and a Mac on your laptop.


Netflix arrives on openSUSE without dirty tricks; yes, natively.

Naturally, if it were so simple, one would not need an article. There has been a lot of news floating around about +Netflix finally being available natively for +Linux. In case you are not aware, getting Netflix on Linux used to be a labored and complicated process requiring all sorts of WINE hacking or virtualization. +Microsoft had announced that its strategy would be moving away from Silverlight, which Netflix depended on for its DRM content delivery. Netflix then announced it would drop Silverlight in favor of +HTML5 once a DRM framework was developed so it could secure its licensed content. Naturally this announcement was greeted with excitement by Linux desktop users all over, excepting of course those who are absolutely opposed to DRM.



In the last couple of days, there has been a flurry of articles and tutorials on how to get Netflix to work natively. Most of these, of course, claim that it is +Ubuntu only, though this is absolutely false. The new HTML5 DRM video delivery is enabled by Network Security Services (NSS), which has been around for a long time but has only recently acquired the Encrypted Media Extensions needed for the sort of secured DRM Netflix requires. While +Android and Chrome OS had Netflix, this left people wondering why not desktop Linux, since those two operating systems use the Linux kernel too. On Chrome OS, Google developed a special plugin to provide the DRM that allows Netflix to work, while on Android this was handled by an app with the DRM built in.

So now we have working DRM thanks to Google, Mozilla, and many other parties. Firstly, you need NSS 3.16.2 or greater and the +Google Chrome browser version 37 or higher. You will need to go into your Netflix settings and tell it you'd prefer the HTML5 player. Until very recently you'd need to have your browser falsely identify itself as another browser to get it to work, but this is no longer necessary. At present, Chromium and Firefox cannot run Netflix. +Mozilla Firefox will be getting support as well, but given Mozilla's more conservative approach and greater focus on privacy and security, it will rely on a proprietary Content Decryption Module (CDM) from +Adobe. This module would most likely be delivered in the same fashion as the +Adobe Flash Player.


Ruby: Do not use += in loops

Hi,
My motivation for writing this blog post is to have one simple place where I can point everybody using += in a loop. I will describe why it can kill the performance of any application or library, and I will also show some measurements to demonstrate it.

Let’s start with a practical measurement. I created a simple benchmark script which looks like this:

require "benchmark"

N = 100000  # adjust per run; the results below are for 10,000 and 100,000
Benchmark.bm(15) do |x|
  x.report("Array#+=") do
    arr = []
    N.times { arr += [1] }
  end

  x.report("Array#concat") do
    arr = []
    N.times { arr.concat [1] }
  end

  x.report("String#+=") do
    s = ""
    N.times { s += "1" }
  end

  x.report("String#<<") do
    s = ""
    N.times { s << "1"  }
  end
end

And the result for N = 10,000 looks like this:

                      user     system      total        real
Array#+=          0.280000   0.020000   0.300000 (  0.302993)
Array#concat      0.000000   0.000000   0.000000 (  0.002291)
String#+=         0.040000   0.000000   0.040000 (  0.041442)
String#<<         0.000000   0.000000   0.000000 (  0.002437)

and for N = 100,000 it looks like this:

                      user     system      total        real
Array#+=         31.410000   6.940000  38.350000 ( 38.635201)
Array#concat      0.030000   0.000000   0.030000 (  0.028933)
String#+=         3.590000   0.020000   3.610000 (  3.635690)
String#<<         0.030000   0.000000   0.030000 (  0.029524)

As can be seen, the time does not rise linearly. Now it is time for some theory to explain why.

When you push elements onto the same object, the complexity class of the loop is O(n), since the work grows linearly. With +=, however, a copy of the array or string must be created first, and only then is the new part appended. The copy is itself O(n), as it depends on the container's size (the approximation would be O(n/2), which still means O(n), because constant factors are dropped). Copy optimization mechanisms can help, but the complexity class remains the same. The result is that you perform an O(n) copy operation n times, so the complexity class of += in a loop is O(n^2). That explains why the times rise quadratically rather than linearly.
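Object identity makes the difference easy to see. This small sketch (my own, not part of the benchmark) shows that << mutates the receiver in place, while += builds a brand-new object and rebinds the variable to it on every iteration:

```ruby
# << appends in place: the variable keeps pointing at the same object.
# += allocates a copy and rebinds the variable to the new object.
s = "a"
original_id = s.object_id

s << "b"                                   # in-place append
shovel_kept_object = (s.object_id == original_id)

s += "c"                                   # copy + append + rebind
plus_kept_object = (s.object_id == original_id)

puts "<< kept the same object: #{shovel_kept_object}"   # prints true
puts "+= kept the same object: #{plus_kept_object}"     # prints false
```

That per-iteration allocation and copy is exactly the O(n) cost that turns the whole loop into O(n^2).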

Therefore, += should never, ever be used in loops. If you need a new object, then create an empty Array or String before the loop and push new elements into it; it is far more efficient. Sure, Ruby is not your tool if you are looking to shave every millisecond, but there is no excuse for using the worse algorithm, as it can make an application unusable performance-wise.
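Concretely, the advice boils down to this minimal sketch (my example, not from the benchmark above): allocate the container once before the loop and append into it with << or concat.

```ruby
# Good: one Array allocated up front; each iteration appends in place. O(n).
squares = []
(1..5).each { |i| squares << i * i }

# Same idea for strings: use << (or concat), never += inside the loop.
out = ""
%w[never use plus-equals].each { |w| out << w << " " }

puts squares.inspect   # => [1, 4, 9, 16, 25]
puts out.strip         # => "never use plus-equals"
```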


Sneak peek at openSUSE 13.2; hands on with beta 1

I've been running beta 1 of 13.2 for a few days now, and there are lots of interesting and welcome changes. Overall it is surprisingly bug free, and I anticipate it will be a very smooth release (at least on +GNOME, since I don't use KDE). If you want to know what versions of packages are included, you may take a look at this post. I, on the other hand, want to introduce you to the things that you may NOT know, and that are particularly interesting.

We have long heard that btrfs would eventually replace EXT4 as the default file-system in +openSUSE and many other distributions. Generally it is ready, and it will eventually outstrip EXT4 and other file-systems in speed, on top of its many other compelling features. As of yet, however, it still suffers from being a bit too slow. Thus, if you use a separate /home partition, you'll notice XFS is being proposed as the default. For some of you this makes sense, but if you are like me it came as quite a surprise. Last I knew, XFS was recommended for ridiculously huge volumes and suffered performance issues that made it impractical for domestic use. Naturally I wanted to get to the bottom of this. +Greg Freemyer offered the following explanation:
XFS was designed for high-end systems including supercomputers. The design is 20 years old, so many of the features it incorporates work well on current multi-core laptops and PCs.

During the decade from 2000 to 2010, XFS had a well-deserved reputation for working very inefficiently with small files. In the 2010/2011 timeframe, XFS received major improvements related to metadata handling, which had a huge positive impact on how well XFS works with small files. The key concept is that the journal is now maintained initially in RAM, and prior to streaming a large chunk of journal information to the on-disk journal, it is elevator-sorted.

That means that when the actual on-disk updates are done by applying the journal, the disk head follows a series of seeks all in the same direction. This drastically cuts down on long disk head seeks while applying the journal. The end result was drastically faster speeds when working with small files on rotating media.

ext4, on the other hand, was designed for previous-generation computers. Although it can scale to the sizes needed today, it simply was not designed to handle the heavy workloads and massive scaling that modern laptops and desktops can demand. As such, ext4 is rapidly approaching end of life.

The envisioned replacement for ext4 for the last several years has been btrfs. Unfortunately, btrfs has not yet achieved the performance levels needed to take on heavy workloads.
Though EXT4 is still being developed, it is too old to really cope with modern use cases. In benchmarks against XFS, newer iterations of EXT4 were nearly able to catch up to the speed of modern XFS, but with one caveat: the journal had to be disabled, which, as you probably know, is a horrible idea that leads to corruption and fragmentation of your data. For further reading I suggest this article from +SUSE.

Now, as you may know, YaST was recently rewritten in Ruby. A big reason for this is that only about two people at SUSE knew the language it was written in before, YCP (the YaST Control Language). This of course made it difficult to maintain, and even harder to get community contributions for. Thankfully the switch has worked out, and we've been seeing lots of work on YaST: cleaning up code, modernizing it, and adding stability and speed improvements across the entire suite.

Our installer is itself a YaST module and has seen improvements thanks to the switch to Ruby as well. The improvements are obvious, with quick and responsive action across all of YaST; the installer is quicker, smoother, and more responsive than ever across all of its screens. A new screen has been added allowing network configuration (no idea if it works with wifi at all, since it isn't NetworkManager), which among other things allows you to set the hostname for your computer. The interface itself has seen a nice facelift and is now a cleaner, more readable experience. In the partitioning screen there is a simple modifier dialog that lets you set non-default file-systems, as well as a check box for expanding SWAP to allow suspend. GRUB2 is now not only the default but the only supported bootloader, and has seen bugfixes and obvious speed improvements.

As far as installation is concerned, it has become immensely faster, completing before I can even finish a cigarette. This improvement comes from streamlining the installation process: in the past it would make a large number of mkinitrd calls, whereas now it should make only one call at the very end of the process.

Overall this is turning out to be a very exciting release, and with it looking this good in an early beta, I anticipate raving reviews. Please consider helping to test this release, and file your bugs at our own Bugzilla.



openSUSE – installing the proprietary AMD Catalyst 14.9 graphics driver as an RPM

AMD Catalyst 14.9 (fglrx 14.301) has been released and supports graphics cards from the Radeon HD 5000 series and up. The script makerpm-amd-14.9.sh is now available for download and supports openSUSE 11.4, 12.1, 12.2, 12.3, and 13.1, as well as kernels up to 3.15. The packaging script has been updated and contains a compatibility patch for kernel 3.17 (thanks to the AMD maintainers from Arch Linux for the patch). ;-)

Due to lack of time, I have unfortunately not been able to answer all of your comments, mails, and messages of the last few weeks. So as not to do anyone an injustice, I will try in the near future to answer your questions and provide help on this topic. Please be patient.

On openSUSE 13.2 Beta 1:
Stefan Dirsch of SUSE Linux GmbH kindly informed me that a long-standing (SUSE) modification of the X server has now been removed again, because it caused more problems than it was worth. The change basically affected all proprietary graphics drivers: up to openSUSE 13.1, X-server-specific drivers were installed into the directory /usr/lib64/xorg/modules/updates. The structure of that directory is identical to its parent directory and was actually meant to make switching drivers easier; for that reason, all X-server-specific drivers are organized into further subdirectories and linked via update-alternatives with a symbolic link. A newer packaging script for the AMD driver is ready but has not been activated, since the driver does not yet run with the newer X server. An inquiry to AMD is still pending.

The future of the packaging script:
The next AMD driver will use the new, reworked packaging script, which, as mentioned above, contains important changes for the upcoming openSUSE 13.2. In addition, the fglrx package will be split into several packages.

  • core (main package)
  • graphics (for the X server and multimedia; requires core)
  • amdcccle (AMD Catalyst Control Center; requires core, graphics)
  • opencl (for OpenCL; requires core)
  • xpic (the old package name becomes a meta package; requires all of the above packages)

On a personal note:
This server has gone down now and then recently. No, openSUSE on the server was not responsible; the problem was a defective memory module. I had assumed that, given the server's age (almost 7 years) and its load, the power supply would be the first thing to fail. For now I have to throttle the server considerably, and it is currently under observation, so response times are somewhat longer than usual. Until the new server arrives, I ask for your patience and understanding. Thank you.

AMD's release notes for AMD Catalyst 14.9 follow:

New features:

  • AMD Radeon™ R9 285
  • Ubuntu 14.04 support
  • RHEL 7.0 support
  • Install improvements
    • Package and distribution generation options; recommend options set by default
      • Help user install generated distribution package once created
      • Pop-up messages to help guide users through the install process
        • Identifying and installation of pre-requisites

The following issues have been fixed in this driver:

  • Witcher 2 random lock-up seen when launching the application
  • Screen corruption when connecting an external monitor to some PowerXpress AMD GPU + Intel CPU platforms
  • Intermittent X crash when the user does a rotation with Tear Free Desktop enabled
  • Failure on exit of OpenGL programs
  • Error message being displayed when a user does run clinfo in console mode
  • Blank screen when hot plugging an HDMI monitor from a MST hub
  • System hang after resume from S3/S4 in High Performance mode on PowerXpress AMD GPU + Intel CPU platforms
  • Corruption or artifacting on the bottom right corner of the screen before booting into login UI during restart
  • Occasional segmentation fault when running ETQW
  • xscreensavers test failing with multi-GPU Crossfire™ configurations
  • Motion Builder severe flickering while toggling full screen
  • Intermittent crashing and corruption observed while running X-Plane
  • Some piglit and Khronos OpenGL conformance test failures
  • Displays occasionally going black when startx is run on Ubuntu 14.04 after switching to integrated GPU on PowerXpress AMD GPU + Intel Haswell CPU system platforms
  • A connected external display getting disabled when unplugging AC power from laptop platforms
  • An auto log out when double clicking the picture under desktop server times on PowerXpress AMD GPU + Intel CPU platforms

Open issues:

  • [404829]: Horizontal flashing lines on second screen in a clone mode with V-Sync on using AMD Mobility Graphics with Switchable Intel Graphics
  • [404508]: Display takes a long time to redraw the screen after an S4 cycle

Link: AMD Catalyst™ 14.9 Linux Release Notes

I have tested the following Steam games, which run with this driver:

One small request: if any problems show up with the driver, don't hesitate to report them to me (I gladly accept bug reports in German and English). ;-) I will try, as far as I am able, to reproduce the reported bug. Together with the necessary system information, I will then turn directly to the right people at AMD to have the bug fixed in the next driver version. Thank you. :-D

Users of older AMD graphics cards (Radeon HD 2000 to 4000 series) are strongly advised against installing this driver. openSUSE already ships the free Radeon driver for older graphics cards. To receive regular improvements to the free Radeon driver, installing a new kernel is unavoidable.

Downloads:

Installation guide (German):
http://de.opensuse.org/SDB:AMD/ATI-Grafiktreiber#Installation_via_makerpm-amd-Skript

Installation guide (English):
http://en.opensuse.org/SDB:AMD_fglrx#Building_the_rpm_yourself

About the makerpm-amd script

The script makerpm-amd-14.9.sh is very powerful, robust, and runs fully automatically. The AMD installer is downloaded automatically if it is not already in the directory. The script also checks whether the graphics card is supported by the driver. On request, the fglrx driver is installed after the RPM package has been built.

The following arguments can be passed to the script:

-b Only build the RPM package (default)
-c <type> Only configure the X server. Monitor type: single = 1 monitor, dual = 2 monitors (Important: only run this if there are problems with the default X server configuration)
-d Only download the AMD installer
-i Build the RPM package and install or update it
-kms <yes|no> Enable or disable kernel mode setting (KMS)
-nohw Explicitly disable hardware detection (e.g. when building in a VM)
-old2ddriver <yes|no> Enable or disable the old 2D driver
-r|--report Create a report and save it to a file named amd-report.txt
-u|--uninstall Remove AMD Catalyst completely from the system. First the fglrx package (if present) is uninstalled, then remaining AMD files and directories are removed. Note: if the rebuild script was installed, it is removed as well and the init script /etc/init.d/xdm is restored.
-ur|--uploadreport Like --report, but additionally uploads the report to the NoPaste service sprunge.us and, on success, prints the link
-h Show the help
-V Show the script version

Help, it doesn't work!

Please follow these rules:

  1. Check the commands you entered for possible typos.
  2. The solution to your problem may already be in the wiki.
  3. Read the comments to see whether a solution to your problem already exists.

If none of the above applies, you can come to me with your issue. So that I can help you, a little groundwork is needed first: please download the script makerpm-amd-14.9.sh and create a report of your system in the console:

su -c 'sh makerpm-amd-14.9.sh -ur'

The script uploads the report to sprunge.us and then prints a link. Post this link in your comment to me, together with a description of your problem. I will look at your report and give guidance on where the problem might lie.

Feedback is, as always, welcome. :-)


openSUSE factory :: dumpe2fs

# dumpe2fs 
dumpe2fs 1.42.12 (29-Aug-2014)
Segmentation fault

# echo $?
139

# dumpe2fs -h
dumpe2fs 1.42.12 (29-Aug-2014)
Segmentation fault

> rpm -qf `which dumpe2fs`
e2fsprogs-1.42.12-1.2.x86_64

> cat /etc/SuSE-release 
openSUSE 20140909 (x86_64)
VERSION = 20140909
CODENAME = Harlequin
# /etc/SuSE-release is deprecated and will be removed in the future,
use /etc/os-release instead

# ltrace dumpe2fs
__libc_start_main([ "dumpe2fs" ] 
setlocale(LC_MESSAGES, "")                        = "en_US.UTF-8"
setlocale(LC_CTYPE,"")                            = "en_US.UTF-8"
bindtextdomain("e2fsprogs", "/usr/share/locale")  = "/usr/share/locale"
textdomain("e2fsprogs")                           = "e2fsprogs"
set_com_err_gettext(0x401a00, 1, 1, 0x73676f72707366)                              = 0
add_error_table(0x605260, 1, 1, 0x73676f72707366)                                  = 0
__fprintf_chk(0x7f4fcb90f060, 1, 0x403b42, 0x403b3adumpe2fs 1.42.12 (29-Aug-2014)) = 31
getopt(1, 0x7fff9f754798, "bfhixVo:")                                              = -1
ext2fs_open(0, 0x29000, 0, 0 < no return ...>
--- SIGSEGV (Segmentation fault) ---
+++ killed by SIGSEGV +++

# strace dumpe2fs
execve("/sbin/dumpe2fs", ["dumpe2fs"], [/* 94 vars */]) = 0
brk(0)                                  = 0x15c1000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391daf000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=153158, ...}) = 0
mmap(NULL, 153158, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fc391d89000
close(3)                                = 0
open("/lib64/libext2fs.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1 \360"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=285064, ...}) = 0
mmap(NULL, 2380840, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc39194a000
mprotect(0x7fc39198d000, 2097152, PROT_NONE) = 0
mmap(0x7fc391b8d000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x43000) = 0x7fc391b8d000
close(3)                                = 0
open("/lib64/libcom_err.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1 \27"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=14712, ...}) = 0
mmap(NULL, 2109960, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc391746000
mprotect(0x7fc391749000, 2093056, PROT_NONE) = 0
mmap(0x7fc391948000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x2000) = 0x7fc391948000
close(3)                                = 0
open("/lib64/libe2p.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1`\""..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=32528, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391d88000
mmap(NULL, 2128304, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc39153e000
mprotect(0x7fc391545000, 2093056, PROT_NONE) = 0
mmap(0x7fc391744000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x6000) = 0x7fc391744000
close(3)                                = 0
open("/usr/lib64/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1\340\26"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=19048, ...}) = 0
mmap(NULL, 2113928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc391339000
mprotect(0x7fc39133c000, 2097152, PROT_NONE) = 0
mmap(0x7fc39153c000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x3000) = 0x7fc39153c000
close(3)                                = 0
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1\20\34\2"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1978611, ...}) = 0
mmap(NULL, 3832352, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc390f91000
mprotect(0x7fc39112f000, 2097152, PROT_NONE) = 0
mmap(0x7fc39132f000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x19e000) = 0x7fc39132f000
mmap(0x7fc391335000, 14880, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1,
0) = 0x7fc391335000
close(3)                                = 0
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3>\1\20o"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=137435, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391d87000
mmap(NULL, 2213008, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fc390d74000
mprotect(0x7fc390d8c000, 2093056, PROT_NONE) = 0
mmap(0x7fc390f8b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3,
0x17000) = 0x7fc390f8b000
mmap(0x7fc390f8d000, 13456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1,
0) = 0x7fc390f8d000
close(3)                                = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391d86000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391d84000
arch_prctl(ARCH_SET_FS, 0x7fc391d84780) = 0
mprotect(0x7fc39132f000, 16384, PROT_READ) = 0
mprotect(0x7fc390f8b000, 4096, PROT_READ) = 0
mprotect(0x7fc39153c000, 4096, PROT_READ) = 0
mprotect(0x7fc391744000, 4096, PROT_READ) = 0
mprotect(0x7fc391948000, 4096, PROT_READ) = 0
mprotect(0x7fc391b8d000, 4096, PROT_READ) = 0
mprotect(0x604000, 4096, PROT_READ)     = 0
mprotect(0x7fc391db0000, 4096, PROT_READ) = 0
munmap(0x7fc391d89000, 153158)          = 0
set_tid_address(0x7fc391d84a50)         = 4002
set_robust_list(0x7fc391d84a60, 24)     = 0
rt_sigaction(SIGRTMIN, {0x7fc390d7a9f0, [], SA_RESTORER|SA_SIGINFO, 0x7fc390d83890},
NULL, 8) = 0
rt_sigaction(SIGRT_1, {0x7fc390d7aa80, [], SA_RESTORER|SA_RESTART|SA_SIGINFO,
0x7fc390d83890}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
brk(0)                                  = 0x15c1000
brk(0x15e2000)                          = 0x15e2000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = -1 ENOENT
(No such file or directory)
open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=2434, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc391dae000
read(3, "# Locale name alias data base.\n#"..., 4096) = 2434
read(3, "", 4096)                       = 0
close(3)                                = 0
munmap(0x7fc391dae000, 4096)            = 0
open("/usr/lib/locale/en_US.UTF-8/LC_MESSAGES", O_RDONLY|O_CLOEXEC) = -1 ENOENT
(No such file or directory)
open("/usr/lib/locale/en_US.utf8/LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
close(3)                                = 0
open("/usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=57, ...}) = 0
mmap(NULL, 57, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fc391dae000
close(3)                                = 0
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26244, ...}) = 0
mmap(NULL, 26244, PROT_READ, MAP_SHARED, 3, 0) = 0x7fc391da7000
close(3)                                = 0
futex(0x7fc3913348f8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
open("/usr/lib/locale/en_US.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT
(No such file or directory)
open("/usr/lib/locale/en_US.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=256420, ...}) = 0
mmap(NULL, 256420, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fc391d45000
close(3)                                = 0
write(2, "dumpe2fs 1.42.12 (29-Aug-2014)\n", 31dumpe2fs 1.42.12 (29-Aug-2014)
) = 31
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x8} ---
+++ killed by SIGSEGV +++
Segmentation fault


openSUSE 11.4 has reached end of Evergreen support

I have posted the following to the openSUSE lists a few moments ago:

as you probably know our initial commitment to support 11.4 within
Evergreen was until July 2014.
http://en.opensuse.org/openSUSE:Evergreen

While I was hoping that we could keep 11.4 alive for some time longer, this
is currently not possible for what I would call the "core team", which
is Stefan and myself. We are too busy nowadays to be able to scan and
patch every issue we become aware of.

Thanks to you all who contributed to the success of Evergreen/11.4!

The important message is:

   openSUSE 11.4 Evergreen is not actively maintained anymore!

So if you can, we recommend switching to openSUSE 13.1 as soon as
possible, which is still planned to be an Evergreen release.


Some more details below:
The repository already got, and will most likely continue to get, updates
from package maintainers who care about 11.4 and/or their packages.
You noticed that with the recent releases of patches for the bash and NSS
issues, BUT this is _no_ guarantee that every security issue
will be fixed.
We will accept contributions from anyone, though, and I will try to take
care of "my" packages.

Some numbers from 11.4 Evergreen (as far as I was able to get them easily):

Evergreen lifetime: 21 months
11.4 overall lifetime (incl. Evergreen): 41 months / 3 years 5 months

Evergreen lifetime numbers (w/o official maintenance period):
Released update source packages: 804
Unique touched source packages:  177
rough number of patches (based on incident counter): 320

Let me also quote Marcus' numbers from official maintenance lifetime:

Total updates: 723
	Security:    416
	Recommended: 306
	Optional:      1


Be careful with Intel Turbo Boost! It can skew your benchmarks, and parallel programs can end up running slower!

Several months ago I wrote a little application called machPQ.py (I’ll open the code soon…) which calculates the active, reactive, and also the apparent power at a machine's terminals over the time domain, for electromagnetic transient analysis. The files this program has to crunch often have 1.E6 lines or more.



Fig. 1: All this work to generate this kind of image.


Due to those large files, this application was taking a long time (1h–3h) to finish its calculations, so I started to rewrite it in a parallel paradigm, also in Python.

The problem began when I tried to benchmark the parallel version and compare it with the single-threaded one. In some runs the single-threaded version was faster than the parallel version! That was driving me crazy! I didn't know why, but something told me I should take a look at the processor state (my laptop is a Dell XPS 15 L502x with an i7 processor).

And damn! I was right! With turbo boost enabled [3], the computer got hotter faster when running multiple threads and then lowered the clock speed, making the parallel version slower than the single-threaded one, or only slightly faster (depending on how hot the day was).

So, to disable the turbo boost, I used, from [1]:

# echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

And then the magic happened! With the parallel version now competing against the single-threaded version under fair conditions, the expected results came up.
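Before trusting a benchmark run, it can also help to verify the knob's state from the test harness itself. A minimal sketch in Python (the sysfs path assumes the intel_pstate driver; the path parameter is only there so the helper can be exercised against an ordinary file):

```python
def turbo_enabled(path="/sys/devices/system/cpu/intel_pstate/no_turbo"):
    """Return True if Intel Turbo Boost is enabled (no_turbo reads "0")."""
    with open(path) as f:
        return f.read().strip() == "0"
```

Checking turbo_enabled() before and after each simulation makes it easy to discard results accidentally taken under the wrong setting.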

I won't discuss the data at length, but to summarize:

  • With turbo boost enabled, the time needed to complete the simulation using the parallel version was only 4.13 % smaller than with the single-threaded version (Simulation 1);
  • With turbo boost disabled, the non-parallel version took 42.02 % more time than the parallel version – Oh, yeah! – (Simulation 3);
  • Running the code on the n-crap-vidia GPU, with optimus [2], again, we got a nice speed-up of 48.34 % (comparing the larger time to the smaller) (Simulation 4);
  • The parallel code running directly on the CPU (Simulation 3) took 2.61 % more time than on the n-crap-vidia (Simulation 4). However, this difference is so small, and I performed only a single simulation, that it is not possible to identify any trend here;
  • The single-threaded version with turbo boost enabled (Simulation 1) was 24.21 % faster than the single-threaded version with turbo boost disabled (Simulation 3);
  • The parallel version with turbo boost disabled (Simulation 3) was 8.77 % faster than the parallel version with turbo boost enabled (Simulation 1).
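The percentages above can be reproduced from the "Tempo de processamento" values printed in the runs below. A quick sanity check in Python (timings copied from the logs; two helpers are needed because the bullets mix "% faster" with "% more time"):

```python
# Processing times in seconds ("Tempo de processamento") from the logs
sim1_single, sim1_parallel = 186.784282, 179.066593  # turbo on,  CPU
sim3_single, sim3_parallel = 232.012078, 163.367201  # turbo off, CPU
sim4_single, sim4_parallel = 236.172199, 159.207409  # turbo off, GPU (optirun)

def more_time(slow, fast):
    """% more time the slow run took, relative to the fast run."""
    return (slow - fast) / fast * 100

def time_saved(slow, fast):
    """% reduction in time going from the slow run to the fast run."""
    return (slow - fast) / slow * 100

print(round(time_saved(sim1_single, sim1_parallel), 2))   # 4.13  (Simulation 1)
print(round(more_time(sim3_single, sim3_parallel), 2))    # 42.02 (Simulation 3)
print(round(more_time(sim4_single, sim4_parallel), 2))    # 48.34 (Simulation 4)
print(round(more_time(sim3_parallel, sim4_parallel), 2))  # 2.61  (CPU vs GPU)
print(round(more_time(sim3_single, sim1_single), 2))      # 24.21 (turbo off vs on)
print(round(time_saved(sim1_parallel, sim3_parallel), 2)) # 8.77  (parallel, off vs on)
```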

From the above analysis we can conclude:

  • This variable clock speed is a pain in the ass when doing benchmarks!!! Even with turbo boost disabled, the clock can also be reduced if the temperature gets too high;
  • As most programs are still single-threaded, leaving turbo boost enabled is a good idea;
  • For very demanding multi-process or multi-threaded programs, it's a good idea to disable turbo boost;
  • Using the GPU through Bumblebee seems interesting and deserves further tests.

All the data used to analyze the performance and the speed-up due to code parallelism can be seen by clicking here ->

Simulation 1: With turbo boost enabled

Fig. 2: Simulation 1 – Single-thread version.


Fig. 3: Simulation 1 – Parallel-threads version.


leonardo@AL:~/projects/machPQ$ time python machPQ_s_plot.py teste.adf 1.E-6 63. ; time python machPQ_teste_parallel.py teste.adf 1.E-6 63.
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 186.784282

real 3m7.005s
user 3m7.135s
sys 0m0.079s
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 179.066593

real 2m59.248s
user 5m14.349s
sys 1m11.704s


Simulation 2: With turbo boost enabled and running on the n-crap-vidia

leonardo@AL:~/projects/machPQ$ time optirun python machPQ_s_plot.py teste.adf 1.E-6 63. ; time optirun python machPQ_teste_parallel.py teste.adf 1.E-6 63.
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 203.956192

real 3m52.100s
user 3m24.476s
sys 0m0.228s
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 183.662801

real 3m20.575s
user 5m29.300s
sys 0m58.710s


Simulation 3: With turbo boost disabled

Fig. 4: Simulation 3 – Single-thread version.


Fig. 5: Simulation 3 – Parallel-threads version.


leonardo@AL:~/projects/machPQ$ time python machPQ_s_plot.py teste.adf 1.E-6 63. ; time python machPQ_teste_parallel.py teste.adf 1.E-6 63.
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 232.012078

real 3m52.313s
user 3m52.349s
sys 0m0.107s
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 163.367201

real 2m43.608s
user 4m36.375s
sys 1m22.991s


Simulation 4: With turbo boost disabled and running on the n-crap-vidia

leonardo@AL:~/projects/machPQ$ time optirun python machPQ_s_plot.py teste.adf 1.E-6 63. ; time optirun python machPQ_teste_parallel.py teste.adf 1.E-6 63.
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 236.172199

real 3m59.293s
user 3m56.594s
sys 0m0.125s
Abrindo arquivo de dados…
…OK!
Iniciando processamento…
…OK!
Gravando resultados em disco…
…OK!

Tempo de processamento: 159.207409

real 2m41.755s
user 4m35.806s
sys 1m18.623s


References:

[1] – http://luisjdominguezp.tumblr.com/post/19610447111/disabling-turbo-boost-in-linux
[2] – http://bumblebee-project.org/
[3] – http://en.wikipedia.org/wiki/Intel_Turbo_Boost

Acknowledgements

English revised by my love @anielampm =) Thank you!!


Yast Code Review Guide

The Yast team has finally published its official Code Review Guide.

All contributors and Yast team members are asked to use the very same simple rules for all code changes related to Yast, including the Installer, or simply all code handled by the Yast team, including, e.g., Linuxrc.

Please see the guide here. We are looking forward to your suggestions for improvements.

Building Yocto/Poky on openSUSE Factory

For a few weeks now, openSUSE Factory has no longer been labeled "openSUSE Project 13.2", but as:
seife@susi:~> lsb_release -ir
Distributor ID: openSUSE project
Release: 20140918
When trying to build the current Yocto poky release, you get the following warning:
WARNING: Host distribution "openSUSE-project-20140918" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
Now, I know these warnings and have ignored them before. The list of tested distributions is hard-coded in the build system configuration, and in general it would be a bad idea to add not-yet-released versions (such as 13.2) or rolling releases. And since the Factory release number changes every few days, it is clearly impossible to keep this up to date: once you have tested everything, the version has increased already. But apart from this purely cosmetic warning, there is a really annoying consequence of the version change: the configuration cache of bitbake (the build tool used by Yocto poky/OpenEmbedded) is rebuilt on every change of the host distribution release. Updating the cache takes about 2 minutes on my machine, so doing a simple configuration check on your already-built Yocto distribution once a week can get quite annoying. I looked for a solution and went for the "quick hack" route:
  • bitbake parses "lsb_release -ir"
  • I replace "lsb_release" with a script that emits filtered output and comes before the original lsb_release in $PATH
This is what I have put into ~/bin/lsb_release (the variable check is a bit of paranoia to let this only have an effect in a bitbake environment):

#!/bin/bash
if [ -z "$BB_ENV_EXTRAWHITE" -o "x$1" != "x-ir" ]; then
        exec lsb-release "$@"
fi
printf "Distributor ID:\topenSUSE project\nRelease:\t2014\n"

Then "chmod 755 ~/bin/lsb_release", and now the warning is:
WARNING: Host distribution "openSUSE-project-2014" has not been validated...
And more important: it stays the same after updating Factory to the next release. Mission accomplished.
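For context, here is a rough Python sketch of how output like the above plausibly becomes the distribution ID shown in the warning (an assumption about the parsing, not bitbake's actual code):

```python
def distro_id(lsb_output):
    """Build an ID like "openSUSE-project-20140918" from `lsb_release -ir` output."""
    fields = {}
    for line in lsb_output.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    # Joining the two fields and hyphenating spaces reproduces the warning string
    return f"{fields['Distributor ID']} {fields['Release']}".replace(" ", "-")

print(distro_id("Distributor ID:\topenSUSE project\nRelease:\t20140918"))
```

With the wrapper script in place, the same parsing yields the stable "openSUSE-project-2014", which is exactly why the cache stops being invalidated.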

UPDATE: Koen Kooi noted that "Yocto" is only the umbrella project and what I'm fixing here is actually the "poky" build system that's part of the project, so I edited this post for clarity. Thanks for the hint!