
Why are AppStream metainfo files XML data?

This is a question raised quite often, most recently in a blogpost by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.
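
To make this concrete, here is a minimal sketch (not taken from any AppStream code) of how a libxml2-based parser handles the extended element: an "old" parser never asks for the new attributes and simply ignores them, while a "new" parser reads them when present.

/* icon.c – minimal sketch; build with: cc icon.c $(pkg-config --cflags --libs libxml-2.0) */
#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/tree.h>

int main(void)
{
    const char *xml = "<icon type=\"cached\" width=\"128\" height=\"128\">foobar.png</icon>";
    xmlDocPtr doc = xmlReadMemory(xml, strlen(xml), "icon.xml", NULL, 0);
    xmlNodePtr icon = xmlDocGetRootElement(doc);

    /* What an old parser does: read the type and the value, nothing else. */
    xmlChar *type = xmlGetProp(icon, (const xmlChar *)"type");
    xmlChar *name = xmlNodeGetContent(icon);
    printf("icon (%s): %s\n", (const char *)type, (const char *)name);

    /* What a new parser can additionally do: the attribute is simply absent
     * (NULL) in old documents and present in new ones. */
    xmlChar *width = xmlGetProp(icon, (const xmlChar *)"width");
    if (width != NULL) {
        printf("width: %s\n", (const char *)width);
        xmlFree(width);
    }

    xmlFree(type);
    xmlFree(name);
    xmlFreeDoc(doc);
    return 0;
}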

This looks different with the YAML file. The “foobar.png” is a string, and parsers will expect a string as the value of the `cached` key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128

The change shown above will break existing parsers though. Of course, we could add a `cached2` key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents so that the risk of breaking compatibility later is minimized, keeping the format future-proof is far easier with XML than with YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily do transitions with thousands of independent upstream projects and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build-time, keeping a clean, minimal XML file, or before, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? But AppStream also covers long descriptions, where individual paragraphs can be translated by translators. In XML this looks like the following:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be:

  • Embed the HTML markup in the file (sketched after this list), and risk careless translators breaking the markup by e.g. not closing tags.
  • Use Markdown, and risk people not writing the markup correctly when translating a really long string in Gettext.
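
To illustrate the first option, the description would collapse into one big translatable string per language, roughly like this (a sketch, not an actual AppStream format):

Description:
  C: >-
    <p>Dolphin is a lightweight file manager. ...</p>
    <p>Features:</p>
    <ul><li>Navigation (or breadcrumb) bar for URLs, ...</li></ul>
  de: >-
    <p>Dolphin ist ein schlankes Programm zur Dateiverwaltung. ...</p>
    <p>Funktionen:</p>
    <ul><li>Navigationsleiste für Adressen ...</li></ul>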

In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in the YAML, translators would need to translate the whole block again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.
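
For XML, by contrast, the tooling already exists. At build time the merge step with a recent gettext can be as simple as this sketch (file and directory names are placeholders; itstool offers an equivalent workflow):

# Merge the compiled translations from po/ into the installed metainfo file
# (needs a sufficiently recent gettext for --xml support)
msgfmt --xml --template=org.example.app.metainfo.xml.in -d po -o org.example.app.metainfo.xml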

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: write the XML, store it in `/usr/share/metainfo`, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity and also has the problems mentioned above.

I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I don’t think it is worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don’t get me wrong, I love YAML, but for translated metadata which needs a guarantee of format stability it is not the ideal choice.

So yeah, XML isn’t fun to write by hand. But for this case, XML is a good choice.


A GNOME Software Hackfest report

Two weeks ago was the GNOME Software hackfest in London, and I was there! I only now found the time to blog about it, but better late than never 😉 .

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay due to the slow bus transfer from Stansted Airport. After finding the hotel, the next task was to get food at a place which accepted my credit card, which was surprisingly hard – in defence of London I must say that it was a Sunday at 7 p.m., and my card is somewhat special (in Canada it managed to crash some card readers, so they needed a hard reset). While searching for food, I also found, by accident, the Red Hat offices where the hackfest was starting the next day. My hotel, the office and Tower Bridge were really close together, which was awesome! The last time I had been to London was in 2008, and only for a day, so being that close to the city centre was great. The hackfest didn’t leave much time to visit the city, but being close to the centre, one could hardly avoid the “London experience” 😉 .

Cool people working on great stuff

That’s basically the summary of the hackfest 😉 . It was awesome to meet Richard Hughes again, since we hadn’t seen each other in person since 2011, even though we work on lots of stuff together. This was especially important, since we managed to resolve quite a few disagreements – Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something I was pretty much against supporting (it didn’t make it in yet, but I am no longer against the idea of having that – the remaining issues are solvable).

Meeting Iain Lane again (after FOSDEM) was also very nice, and seeing other people I had only worked with over IRC or bug reports (e.g. William, Kalev, …) was great too. Lots of “new” people were also there, like the folks from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. They do pretty cool stuff, you should check out their website! (They also build their distribution on top of Debian, which is even more awesome, and something I didn’t know before – because many Endless people I had met previously were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora 😛 .)

The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it’s adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GS already is! And for upstream GNOME, we’ve seen some pretty great mockups done by Endless too – I hope those will make it into production somehow.

Ironically, a "snapstore" was close to the office ;-)
Ironically, a “snapstore” was close to the office 😉

XdgApp and sandboxing of apps was also a big topic, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was also there to answer all the sandboxing and XdgApp questions.

I used the time to follow up on a conversation with Alexander that we started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in line on the sandboxing approach, the way software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (it doesn’t make for a good story where we can just tell developers “ship in this bundling format, and it will be supported everywhere”). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp – I will blog about that separately soon.

On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard’s bus-factor on the project a little 😉 .

I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet). Additionally, I managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu.

We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive Indian food (but that’s a different story 😉 ).

The most important thing for me, though, was to get together with people actually using AppStream metadata in software centers and in more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird but actually a wanted feature of Endless for their curated app store. So localized screenshots will be part of the next AppStream spec. There also seems to be a general need to ship curation information for software centers somehow (which apps are featured? How are they styled? Are there special banners for some featured apps, “app of the day” features, etc.?). This problem hasn’t been solved yet, since it’s highly implementation-specific, while AppStream should stay distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary…

It was a great event! Going to conferences and hackfests always makes me feel like it moves projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating via text messages or video chat) is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point 😀 ).

For me, as the only (and youngest ^^) developer at the hackfest not employed by any company in the FLOSS business, it was also motivating to continue investing spare time into these projects.

So, the only thing left to do is a huge shout-out of “THANK YOU” to the Ubuntu Community Fund – and therefore the Ubuntu community – for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsorship really quickly, so I didn’t get into trouble paying for my flights.

Laney and attente on the Millennium Bridge after we walked the distance between Red Hat and Canonical’s offices.

To worried KDE people: No, I didn’t leave the blue side – I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible 😉


Upgrading my home NAS server (HP ProLiant MicroServer Gen8 + FreeBSD 10.3 + ZFS + Jails)

I have been using FreeBSD+ZFS for my home NAS server since 2009; see my old post about it here: My ZFS Home NAS/HTPC Box Build

The setup has worked perfectly over the years, and even though I had a few hard drive failures I never lost any data. Replacing the hard drives was really easy and the system was back to normal in a matter of minutes. Over the last years I also upgraded the hardware a few times:

  • two years ago I replaced the mainboard and CPU, and added more RAM: ASUS P8H77-I + Intel Celeron G460 + 8GB RAM
  • about one year ago I replaced the CPU with an Intel Core i3-3250 and went up to 16GB RAM (the maximum supported)

And finally, a few weeks ago, I found a great deal on an HP ProLiant MicroServer Gen8 with an Intel G1610T CPU and 4GB of ECC RAM in its standard configuration, and I could not pass it up. I replaced the RAM with 2x8GB ECC. So that is my new, current home server (NAS), and I am really happy with it. I spent some time on its configuration because of some little issues with running FreeBSD on it, and I also decided to change my setup a bit (adding jails for services). I am not using this box as an HTPC anymore.

# dmesg
Copyright (c) 1992-2016 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
    The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.3-RELEASE #0 r297264: Fri Mar 25 02:10:02 UTC 2016
    root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
FreeBSD clang version 3.4.1 (tags/RELEASE_34/dot1-final 208032) 20140512
CPU: Intel(R) Celeron(R) CPU G1610T @ 2.30GHz (2294.84-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x306a9  Family=0x6  Model=0x3a  Stepping=9
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xd9ae3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,POPCNT,TSCDLT,XSAVE,OSXSAVE>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  Structured Extended Features=0x281<FSGSBASE,SMEP,ERMS>
  XSAVE Features=0x1<XSAVEOPT>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory  = 17179869184 (16384 MB)
avail memory = 16571678720 (15803 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <HP     ProLiant>
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
[...SKIP...]

Here is a short story of the issues I had: The HP ProLiant MicroServer Gen8 is a small, quiet, and stylishly designed server that is ideal as NAS storage. It has four bays for hard drives (the main storage) and one slot for an ODD (optical) drive, but no drive is supplied. Initially I decided to add an SSD on this SATA II port, to use it for the FreeBSD OS and also as storage for some virtual machines (more about my home virtual lab setup in future posts). The first issue was that I could not boot from that SSD, because the system can only boot from the HDD in the first bay or from USB. After some research I found that the system can boot from the SATA port intended for the ODD drive, but only if you create a RAID0 array containing just the single SSD. I tried that, but with the RAID card enabled in the BIOS there is no way to boot FreeBSD. It would not even boot from USB anymore; I always got a "BTX halted" error (there is a screenshot in the photo album below).

The solution was to simply disable the RAID card in the BIOS. Since I am using ZFS for my NAS, it is not recommended to put the ZFS storage behind the RAID card that comes with the server anyway, so it is better to just disable it and use AHCI mode for the hard drives.

Not being able to boot FreeBSD from my SSD, I had to change my plan a bit and boot from USB instead. I picked up a SanDisk Ultra 16GB USB stick, installed FreeBSD on it and put it in the internal USB port. After I was sure that everything was working as expected, I decided to move /usr and /var to the SSD and keep just the / partition on the USB stick. Importing the storage drives with their mirrored ZFS pools was really easy and everything is working perfectly now.
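
For the curious, bringing the existing mirrored pool up on the new machine is essentially just an import and a status check; a short sketch, where the pool name "storage" is an assumption inferred from the jail paths below:

# zpool import
# zpool import storage
# zpool status storage

The first command lists the pools found on the attached disks, the second imports the pool under its old name, and the third verifies that the mirrors are healthy.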

I mentioned that I also decided to change my NAS configuration a bit: instead of running all services (samba, afp, transmission, plex, monitoring, ...) on the host machine, I now prefer to run each of them in its own jail (a minimal jail configuration sketch follows the listing below). Here are the jails I am running right now:

# jls
   JID  IP Address      Hostname                   Path
     1  192.168.0.41    samba.home.local           /storage/jails/samba
     2  192.168.0.42    afs.home.local             /storage/jails/afs
     3  192.168.0.43    transmission.home.local    /storage/jails/transmission
     4  192.168.0.44    plex.home.local            /storage/jails/plex
     5  192.168.0.45    monitoring.home.local      /storage/jails/monitoring
     6  192.168.0.46    gitlab.home.local          /storage/jails/gitlab
     7  192.168.0.47    confluence.home.local      /storage/jails/confluence
     8  192.168.0.48    virt.home.local            /storage/jails/virtual
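
Each of these jails is defined by a small entry in /etc/jail.conf; a minimal sketch for the samba jail could look like the following (the network interface name em0 is just an example):

# /etc/jail.conf – global defaults plus one jail
exec.start  = "/bin/sh /etc/rc";
exec.stop   = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;

samba {
    host.hostname = "samba.home.local";
    path          = "/storage/jails/samba";
    ip4.addr      = "em0|192.168.0.41/24";
}

With jail_enable="YES" in /etc/rc.conf, such a jail can then be started with "service jail start samba".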

And here are some pictures: HP Proliant Microserver Gen 8 Home NAS Photos


Introducing AppStream-Generator

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata for the available software components in the distribution.

Getting rid of dep11-generator

Unfortunately, the old Python-based dep11-generator hit some hard limits pretty quickly. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. Also, the multiprocessing approach (as opposed to multithreading) made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumed quite some time. There were also some other issues (e.g. no unit tests) in the implementation, which made me think about rewriting the generator.

Adventures in Go / Rust / D

Since I didn’t want to write this new piece of software in C (or basically, writing it in C was my last option 😉 ), I explored Go and Rust for this purpose and also did a small prototype in the D programming language, when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go…), this is what happened. The strong points for D for this particular project were its close relation to C (and ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++ and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost.

What good to expect from appstream-generator?

So, what can the new appstream-generator do for you? Basically the same as the old dep11-generator: it extracts metadata from a distribution’s package archive, downloads and resizes screenshots, searches for icons and sizes them properly, and generates JSON and HTML reports of the found metadata and issues.

LibAppStream-based parsing, generation of YAML or XML, multi-distro support, …

As opposed to the old generator, the new one utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (for everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a distribution other than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all of the details of the metadata extraction are abstracted away (just two interfaces need to be implemented). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend/backend split is a much cleaner design than what was available in the previous code. It also allows the code to be unit-tested properly, without providing a Debian archive in the test suite.

Feature Flags, Optipng, …

The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn’t use. Features like this can be implemented as separate, disableable modules in the generator. We currently use this e.g. to allow descriptions from packages to be used as AppStream descriptions, or to run optipng on the generated PNG images and icons.

No more Contents file dependency

Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the entries in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software updates the Contents file only infrequently (while the generator might run multiple times a day), which has led to software being ignored in the metadata because icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we now read the contents of each package ourselves and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents data.

It can’t all be good, right?

That is true, there are also some known issues the new generator has:

Large amounts of RAM required

The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When initially processing data for 5 architectures on Debian, the amount of required RAM can exceed 4GB, with the OOM killer sometimes being quicker than the garbage collector… That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working on improving.

What are symbolic links?

To be faster, appstream-generator reads the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don’t exist for the new generator. This is a problem for software that symlinks icons or even .desktop files around, as e.g. LibreOffice does.

I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing the packages (making them move the files instead of symlinking them) might be a better approach than investing additional computing power into finding symlinks, or even switching back to parsing the Contents file. Input on this is welcome!

Deploying asgen

I finished the last pieces of appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software Hackfest in London last week (detailed blogposts about things that happened there will follow – many thanks once again to the Ubuntu community for sponsoring my attendance!).

As of today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this sooner rather than later, so we can get some good testing done before the Stretch release. Please report any issues you may find!


No CUDA after the update to openSUSE Leap 42.1

After I migrated my workstation from openSUSE 13.1 to openSUSE Leap 42.1, the graphics card could no longer be used for computation. Somehow an error had occurred during the installation of the new graphics driver. Reinstalling it solved the problem.

The problem

After the update, applications using CUDA could not find a graphics card. BOINC, for example, reported "No usable GPUs found". In principle, though, the graphics card was working: I had full 3D acceleration, and nvidia-smi also reported the expected data:

+------------------------------------------------------+
| NVIDIA-SMI 340.93     Driver Version: 340.93         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 580     Off  | 0000:01:00.0     N/A |                  N/A |
| 43%   49C   P12    N/A /  N/A |    494MiB /  1535MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
+-----------------------------------------------------------------------------+

The analysis

NVIDIA does offer CUDA downloads for some SUSE variants, but unfortunately not for Leap 42.1. There are packages for the underlying SLE 12, but they require a graphics driver that does not match the Leap 42.1 kernel. For the analysis I therefore fell back on OpenCL sample code, because for OpenCL you only need the headers, which you can simply put next to your code, and a C compiler.
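
The platform test program used below is only a few lines of C; a minimal sketch of such a platform lister (assuming the test program does roughly this) looks like:

/* platform.c – build with: cc platform.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint count = 0;

    /* This is the call that fails when no ICD/platform is usable. */
    cl_int err = clGetPlatformIDs(8, platforms, &count);
    if (err != CL_SUCCESS) {
        printf("Failed to get platforms: %d\n", err);
        return 1;
    }

    /* Print name, version, vendor and profile of each platform. */
    for (cl_uint i = 0; i < count; i++) {
        char name[128], version[128], vendor[128], profile[128];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_PROFILE, sizeof(profile), profile, NULL);
        printf("%s - %s - %s - %s\n", name, version, vendor, profile);
    }
    return 0;
}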

Already the first attempt with a sample program that merely initializes the OpenCL runtime shows the problem:

> sudo ./platform
modprobe: FATAL: Module nvidia-uvm not found.
Failed to get platforms: -1001

By the way, sudo is only prepended here to make it possible to load kernel modules on demand. Without sudo you get the same error, but without the essential hint about the kernel module.

I quickly found out that all the kernel modules are actually installed:

> rpm -ql nvidia-gfxG03-kmp-default-340.93_k4.1.12_1-36.7.x86_64 | grep \.ko
/lib/modules/4.1.12-1-default/updates/nvidia.ko

> rpm -ql nvidia-uvm-gfxG03-kmp-default-340.93_k4.1.12_1-36.7.x86_64 | grep \.ko
/lib/modules/4.1.12-1-default/updates/nvidia-uvm.ko

> ls /lib/modules/4.1.12-1-default/updates/
nvidia.ko  nvidia-uvm.ko

However, that was not the module directory of the currently running kernel:

> uname -a
Linux eddie 4.1.13-5-default #1 SMP PREEMPT Thu Nov 26 16:35:17 UTC 2015 (49475c3) x86_64 x86_64 x86_64 GNU/Linux

So where are its modules? /lib/modules/4.1.13-5-default/updates does not exist. The NVIDIA module is quickly found, though:

> ls /lib/modules/4.1.13-5-default/weak-updates/updates/
nvidia.ko

Only nvidia-uvm.ko is missing from this directory.

The solution

Reinstalling the corresponding package finally lets OpenCL initialize properly again:

> zypper in -f nvidia-uvm-gfxG03-kmp-default

> ls /lib/modules/4.1.13-5-default/weak-updates/updates/
nvidia.ko  nvidia-uvm.ko

> modprobe nvidia-uvm

> ./platform
Failed to get platforms: -1001

> sudo ./platform
NVIDIA CUDA - OpenCL 1.1 CUDA 6.5.51 - NVIDIA Corporation - FULL_PROFILE

> ./platform
NVIDIA CUDA - OpenCL 1.1 CUDA 6.5.51 - NVIDIA Corporation - FULL_PROFILE

To actually use the graphics card, however, a reboot was still necessary.


Volumio2 Release Candidate

Last night I finally found time to install the first release candidate of Volumio 2, my preferred audio player software. This is more exciting than it sounds: when I read the blogpost last summer announcing that Volumio was going to be completely rewritten, replacing its base technologies, I was a bit afraid that this would be one of the last things we heard from the project. Too many cool projects have died after famous last announcements like that.

But not Volumio.


After quite some development time the project released RC1. While there were a few small bugs in the beta, my impression of RC1 is really positive. Volumio 2 has a very nice and stylish GUI, a great improvement over Volumio 1. Album art is now nicely integrated into the playback pane and everything is more polished, even if the general concept is the same as in Volumio 1.

I like it because it is only a music player: very reduced, but also well thought through and focused on doing that job perfectly. I just want to find and play music from my collection, quickly and comfortably and with good sound quality. No movies, series, images. Just sound.

About speed: while scanning my not-too-big music collection on a NAS was a bit time-consuming in the past, it now feels much faster (maybe that’s only because of a faster network between the Raspberry Pi and the NAS?). Searching, browsing and everything else work quite fluidly on a Raspberry Pi 2. And with the Hifiberry DAC for output, the sound quality is more than OK.

This is a release candidate of the first release of the rewritten project, and the quality is already very good. Nevertheless I found a few things that did not work for me or could be improved. The volume control not working is probably due to the Hifiberry DAC driver; I remember there was something about that, but I haven’t investigated yet.

There are some things in the GUI that could be looked at again: for example, on the Browse page there is a search field that works very well. After entering the search term and pressing Enter, the result is displayed as a list of songs to select from. I wish the songs were additionally grouped by album, and that those albums could also be selected and pushed to the play queue.

It would also be great if the queue somehow indicated which entry is currently playing. I could not spot that.

But these are only minor findings, which can easily be addressed later once enhancement requests have been posted :-)

I think Volumio 2 is already a great success, even before its official release! You should not hesitate to try it if you love listening to music!

Thanks for the hard work Volumio-Team!


Studying in Prague? Join us at eClub Summer Camp!

With the kind support of the Medialab foundation and Jan Šedivý, we are looking hard for students in Prague to work with us on a summer internship! We actually have two options for you:

  • eClub Summer Camp (main option) – we have some ambitious projects and ideas for you to try out if you are excited by machine learning, big data and artificial intelligence. Exploratory, exciting, state-of-the-art research without requiring previous in-depth knowledge! (Just good basic math and programming.)
  • Summer Job (auxiliary option, full-time coder) – we need help polishing the rough edges of some of our projects and are seeking students who are skilled programmers.

We are mainly affiliated with FEL CVUT, but we also have students from MFF UK and we’ll welcome students from other Czech universities too. As long as you are a competent programmer, want to do something more than yet another Android game, and are willing to come in person three times a week – let’s do something groundbreaking together!


openSUSE and ownCloud at FOSSCOMM 2016, April 16-17 2016 @ University of Pireaus


This weekend (April 16-17), I'll be at FOSSCOMM (Free and Open Source Software Communities Meeting). FOSSCOMM is an annual Greek event where FOSS communities gather and present what's new.

I'll present "Why you should use openSUSE Tumbleweed", showing how this distribution is built, tested and released to end users.

Another presentation will be about ownCloud 9.0. I'll start with what the cloud is and why we use it. When using the cloud, we should think about our privacy – and when it comes to privacy, ownCloud is the best way to use cloud technology.

The conference will be streamed (according to the organizers), so you should check the site.

My presentations will be (I'll be glad to see you there):
Why You Should Use Tumbleweed: April 16 @ 11:00 - 12:00
ownCloud 9: April 16 @ 12:30 - 13:00

If you want to meet me, you can visit the openSUSE booth. Ask for Stathis or diamond_gr.
Have phun