Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.

20 August, 2014


Like everything in this life, or at least the way it should be, this blog wants to improve, and to do that it must always keep in mind the reason and motive for its existence: its readers. So the time has come to ask your opinion about one aspect of the blog: the sections. Some time ago KDE Blog […]

19 August, 2014


A little while ago we talked about an icon pack by the designer EepSetiawan; specifically Dalisha, a simple, colorful and complete icon collection. Today I present another creation by the same designer: Square-Beam KDE. The list of quality icon sets covered on the blog keeps growing. Off the top of my head, I remember that I have […]


A new gem

A new gem has arrived in the world. Youtube_dlhelper is not just another YouTube gem: it's a helper gem. What does it do? The Youtube_dlhelper gem downloads a YouTube video from a given URL, creates the needed directories and transcodes the file from *.m4a to *.mp3. Read the full README for how to use the gem.

Where is it?

You can find it here: https://github.com/saigkill/youtube_dlhelper (the link goes directly to the README).

How to use it?

Just run it with: youtube_dlhelper.rb YourUrl. The new file ends up inside $Musicfolder/Groupname/Youtube-Music or, if you have chosen an artist, in $Musicfolder/Surname_Firstname/Youtube-Videos.
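The directory rule described above can be sketched roughly like this (a Python illustration of the layout the post describes; the gem's actual Ruby code may differ, and the function and folder names here are only for the sketch):

```python
import os

def target_dir(music_folder, group=None, artist=None):
    # Mirrors the rule described in the post (illustrative sketch,
    # not the gem's real code): downloads for an artist go to
    # Surname_Firstname/Youtube-Videos, otherwise to Groupname/Youtube-Music.
    if artist is not None:
        surname, firstname = artist
        return os.path.join(music_folder, surname + "_" + firstname, "Youtube-Videos")
    return os.path.join(music_folder, group, "Youtube-Music")

print(target_dir("/home/me/Music", group="Some_Band"))
# -> /home/me/Music/Some_Band/Youtube-Music
print(target_dir("/home/me/Music", artist=("Doe", "John")))
# -> /home/me/Music/Doe_John/Youtube-Videos
```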

Have a lot of fun :-)

Sascha Manns: Welcome

07:00 UTC



Hello and welcome to my new Jekyll Bootstrap page. Because of some bandwidth problems I moved my blog to GitHub.

Sascha Manns: Jekyll Introduction

07:00 UTC


This Jekyll introduction will outline specifically what Jekyll is and why you would want to use it. Directly following the intro we’ll learn exactly how Jekyll does what it does.


What is Jekyll?

Jekyll is a parsing engine bundled as a ruby gem used to build static websites from dynamic components such as templates, partials, liquid code, markdown, etc. Jekyll is known as “a simple, blog aware, static site generator”.


This website is created with Jekyll. Other Jekyll websites.

What does Jekyll Do?

Jekyll is a Ruby gem you install on your local system. Once there, you can call jekyll --server on a directory and, provided that directory is set up in the way Jekyll expects, it will do magic stuff like parse markdown/textile files, compute categories, tags and permalinks, and construct your pages from layout templates and partials.

Once parsed, Jekyll stores the result in a self-contained static _site folder. The intention here is that you can serve all contents in this folder statically from a plain static web-server.

You can think of Jekyll as a normalish dynamic blog but rather than parsing content, templates, and tags on each request, Jekyll does this once beforehand and caches the entire website in a folder for serving statically.
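To make the parse-once-then-serve-statically idea concrete, here is a toy sketch in Python (this is not Jekyll's actual code; the front-matter format, file names and HTML are deliberately simplified for illustration):

```python
import os
import re

def build_site(posts, out_dir="_site"):
    """Toy 'build once' step: turn posts with a tiny front-matter
    header into static HTML files under _site/ (illustration only,
    not Jekyll's real pipeline)."""
    os.makedirs(out_dir, exist_ok=True)
    for name, text in posts.items():
        header, body = text.split("---\n", 1)  # split front matter from body
        title = re.search(r"title:\s*(.+)", header).group(1)
        html = "<html><body><h1>%s</h1><p>%s</p></body></html>" % (title, body.strip())
        with open(os.path.join(out_dir, name + ".html"), "w") as f:
            f.write(html)

posts = {"hello-world": "title: Hello World\n---\nMy first post."}
build_site(posts)
# The _site folder can now be served by any plain static web server.
print(open("_site/hello-world.html").read())
```

Everything expensive happens once, at build time; serving afterwards is just shipping files from disk.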

Jekyll is Not Blogging Software

Jekyll is a parsing engine.

Jekyll does not come with any content nor does it have any templates or design elements. This is a common source of confusion when getting started. Jekyll does not come with anything you actually use or see on your website - you have to make it.

Why Should I Care?

Jekyll is very minimalistic and very efficient. The most important thing to realize about Jekyll is that it creates a static representation of your website, requiring only a static web-server. Traditional dynamic blogs like WordPress require a database and server-side code. Heavily trafficked dynamic blogs must employ a caching layer that ultimately performs the same job Jekyll sets out to do: serve static content.

Therefore if you like to keep things simple and you prefer the command-line over an admin panel UI then give Jekyll a try.

Developers like Jekyll because we can write content like we write code:

  • Ability to write content in markdown or textile in your favorite text-editor.
  • Ability to write and preview your content via localhost.
  • No internet connection required.
  • Ability to publish via git.
  • Ability to host your blog on a static web-server.
  • Ability to host freely on GitHub Pages.
  • No database required.

How Jekyll Works

The following is a complete but concise outline of exactly how Jekyll works.

Be aware that core concepts are introduced in rapid succession without code examples. This information is not intended to specifically teach you how to do anything, rather it is intended to give you the full picture relative to what is going on in Jekyll-world.

Learning these core concepts should help you avoid common frustrations and ultimately help you better understand the code examples contained throughout Jekyll-Bootstrap.

Initial Setup

18 August, 2014

Jos Poortvliet: How else to help out

12:09 UTC

Yesterday I blogged about how to help testing. Today, let me share how you can facilitate development in other ways. First of all - you can enable testers!

Help testers

As I mentioned, openSUSE moved to a rolling release of Factory to facilitate testing. KDE software has development snapshots for a few distributions. ownCloud is actually looking for some help with packaging - if you're interested, ping dragotin or danimo on the owncloud-client-dev IRC channel on freenode (web interface for IRC here). Thanks to everybody helping developers with this!

KDE developers hacking in the mountains of Switzerland


Of course, there is code. Almost all projects I know have developer documentation. ownCloud has the developer manual and the KDE community is writing nothing less than a book about writing software for KDE!

Of course - if you want to get into coding ownCloud, you can join us at the ownCloud Contributor Conference in two weeks in Berlin, and KDE has Akademy coming just two weeks later!

And more

Not everybody has the skills to integrate zsync in ownCloud to make it only upload changes to files, or to juggle complicated APIs in search of better performance in Plasma, but there is plenty more you can do. Here is a KDE call for promo help, as well as KDE's generic get-involved page. ownCloud also features a list of what you can do to help, and so does openSUSE.

Or donate...

If you don't have the time to help, there is still something: donate to support development. KDE has a page asking for donations and spends the donations mostly on organizing developer events. For example, right now, planet KDE is full of posts about Randa. Your donation makes a difference!

You can support ownCloud feature development on bountysource, where you can even put money on a specific feature you want. This provides no guarantees - a feature can easily cost tens to hundreds of hours to implement, so multiple people will have to support a feature. But your support can help a developer spend time on this feature instead of working for a client and still be able to put food on the table at home.

So, there are plenty of ways in which you can help to get the features and improvements you want. Open Source software might be available for free, but its development still costs resources - and without your help, it won't happen.


Sometimes things are surprising. After more than six years of this blog's life, I have never answered the question: what is KDE? And although at this point it is not vital, I would like to give my personal view of the answer, or rather, of the answers. Quite some time has passed since […]


I’ve been talking about how code construction matters to your coding project. With JavaScript, obfuscation is easy, so making code shine is more important than ever. There will be many, many eyes looking at your code to see whether it’s worth anything. After getting friendly with js-beautify there is no excuse to keep your ugly-looking code lying around.


Js-beautify is available as a web service (like the JSHint tool last time). If you just want to drop your JavaScript code into http://jsbeautifier.org/ and see whether the tool works for you, go ahead. You can also use the site to work out your code-guideline parameters, because you can instantly test what each parameter does. The js-beautify GitHub repository can be found at https://github.com/beautify-web/js-beautify

How does it work?

Like last time, we torture the Leaflet 0.7.2 minified code. It's a pain to make it look good by hand (and why would one, when a good-looking source is available without touching the minified version), but with this kind of example one can see how the tool really works. After installation with the npm tool, or from my personal repo https://build.opensuse.org/project/show/home:illuusio:nodejs-fedora, you have a tool called js-beautify in your path. Running it is as simple as this:

js-beautify leaflet.js

After typing the command, the code is printed to STDOUT in a more readable form. So is this the only code style you can get? No, you can customize it until the bitter end of your life. If you prefer K&R-style formatting, you can get it with:

js-beautify --brace-style=expand --indent-size=4 -f leaflet.js

OK, so one has to remember the command-line parameters every time... nice. Actually, no. There can be a JSON-formatted config file, and the defaults look like this: https://github.com/beautify-web/js-beautify/blob/master/js/config/defaults.json. Using the command line like this, you can apply your own defaults:

js-beautify --config=yourjsonconfig.json leaflet.js
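For example, a personal config file might look like this (a minimal sketch; the key names follow the defaults.json file linked above, and the values are only an illustration):

```json
{
  "indent_size": 4,
  "indent_char": " ",
  "brace_style": "expand",
  "max_preserve_newlines": 2
}
```

Saved as yourjsonconfig.json, this is what the --config flag shown above would pick up.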

Wait, it’s not over yet!

This tool can also format CSS and HTML. You can use the same js-beautify tool or the shortcuts css-beautify and html-beautify; they are nice tools for formatting HTML or CSS. There is also a Python version of js-beautify available, but it's not the focus of this post; it has the same capabilities as the Node.js version. So if you have read this and your code still looks like the dog ate your homework, it's your own fault. Now that we have made code look very good, next time I'll make it ugly...

17 August, 2014

Cornelius Schumacher: The Book

21:02 UTC


When inviting people to the Randa 2014 meeting, Mario had the idea of writing a book about KDE Frameworks. Valorie picked up this idea and kicked off a small team to tackle the task. So in the middle of August, Valorie, Rohan, Mirko, Bruno, and I gathered in a small room under the roof of the Randa house and started to ponder how to accomplish writing a book in the week of the meeting. Three days later, with the help of many others, Valorie showed around the first version of the book on her Kindle at breakfast. Mission accomplished.

Mission accomplished is a bit of an exaggeration, as you might have suspected. While we had a first version of the book, there is of course still a lot to be added: more content, more structure, a more beautiful appearance. But we had quickly settled on the idea that the book shouldn't be a one-time effort but an ongoing project, one that grows over time and is continuously updated as the Frameworks develop and people find the time and energy to contribute content.

So in addition to writing initial content, we spent our thoughts and work on setting up an infrastructure that will support a sustained effort to develop and maintain the book. While more will come, having the book on the Kindle to show around was indeed the first part of our mission accomplished.

Content-wise we decided to target beginning and mildly experienced Qt developers, and present the book in some form of cook book, with sections about how to solve specific problems, for example writing file archives, storing configuration, spell-checking, concurrent execution of tasks, or starting to write a new application.

There already is a lot of good content in our API documentation and on techbase.kde.org, so the book is more a remix of existing documentation spiced with a bit of new content to keep the pieces together or to adapt it to the changes between kdelibs 4 and Frameworks 5.

The book lives in a git repository. We will move it to a more official location a bit later. It's a combination of articles written in markdown and compiling code, from which snippets are dragged into the text as examples. A little bit of tooling around pandoc gives us the toolchain and infrastructure to generate the book without much effort. We actually intend to automatically generate current versions with our continuous integration system, whenever something changes.

While some content now is in the book git repository, we intend to maintain the examples and their documentation as close as possible to the Frameworks they describe. So most of the text and code is supposed to live in the same repositories where the code itself is maintained. They are aggregated in the book repository via git submodules.

Comments and contributions are very welcome. If you are maintaining one of the Frameworks or you are otherwise familiar with them, please don't hesitate to let us know, send


Back online after running several weeks late, I've tried my best to resolve the case of the Factory rolling releases.

After some hacks on the latest Sebastian Siebert beta version (made in June), I've now been able to build BETA fglrx RPMs for several openSUSE versions.

One day AMD will release a stable version, or not. (On my side, I would prefer to see more effort put into the free radeon driver.)


This release concerns only owners of a Radeon HD5xxx or above. All owners of HD2xx and HD4xx series cards are strongly encouraged to use the free radeon driver (which received a lot of improvements in the 3.11+ kernels).

This is experimental & BETA software; it could fix issues you have encountered (such as FGLRX not working on openSUSE 13.1).

What happened to Sebastian

I would like to have some news about Sebastian Siebert; he is an essential key to future updates.
This time I was able (even if several weeks late) to adjust the script to create a build for openSUSE Factory.
But one day something will break in the kernel or somewhere else, and we will all need to find a way to fix it.

So if you’re in touch with Sebastian, could you drop me a comment or a private mail?

I would like to continue the good support we created 3.5 years ago, or at least know whether I'm orphaned :-(

Beta Repository

To make things clear about the status of the drivers, they will not be published under the normal stable repository http://geeko.ioda.net/mirror/amd-fglrx.
Some time ago I created a beta repository located at http://geeko.ioda.net/mirror/amd-fglrx-beta.
The FGLRX 14.20 beta1 RPMs are released for openSUSE versions 12.3, 13.1 (+Tumbleweed) and Factory.

Packages are signed with my generic builder GPG key at Ioda-Net (GPG key id 65BE584C).

For those interested in contributing, or in the patches applied to the last Sebastian version, the raw-src directory on the server contains all the material used.

Installing the new repository

Assuming you have the normal repository named FGLRX (use zypper lr -d to find the number or name you gave it), start by disabling it
so you can fall back to it quickly when a new stable version is published. Open a root console, or add sudo at your convenience, and issue the following command:

zypper mr -dR FGLRX


To add the new repository, issue the following command in the same root console; it will normally install the right repository for your distribution:

zypper ar -n FGLRX-BETA -cgf http://geeko.ioda.net/mirror/amd-fglrx-beta/openSUSE_`lsb-release -r | awk '{print $2}'` FGLRX-BETA

If you are using Tumbleweed use this one

zypper ar -n FGLRX-BETA -cgf http://geeko.ioda.net/mirror/amd-fglrx-beta/openSUSE_Tumbleweed FGLRX-BETA

Now the update/upgrade process:

zypper dup -r FGLRX-BETA

Let the system upgrade the packages, and enjoy the new beta.

Upgrading from the previous beta

Let the magic of zypper do the work


A few weeks ago, during SUSE Hack Week 10 and the Berlin Qt Dev Days 2013, I started to look at Qt-based libraries, set myself the goal of creating one place to collect all of them, and made some good progress. The idea had come up when a couple of KDE people got together in the Swiss mountains for some intensive hacking; that is where Inqlude, the Qt library archive, was born. We were thinking of something like CPAN for Qt back then. Since then there has been a little bit of progress here and there, but my goal for the Hack Week was to complete the data to cover all relevant Qt-based libraries out there.

The mission is accomplished so far. Thanks to the help of lots of people who contributed pointers, meta data, feedback, and help, we now have a pretty comprehensive list of Qt libraries. Some nuts and bolts are still missing in the infrastructure that are required to put everything on the web site, and I'm sure we'll discover some hidden gems of Qt libraries later, but what is there is useful and up to date. Where pieces are not yet, contributions are more than welcome.

Many thanks as well to the people at the Qt Dev Days, who gave me the opportunity to present the project to the awesome audience of the Qt user and developer community.


The first key component of the project is the format for describing a Qt-based library. It's a JSON format, which is quite straightforward. That makes it easy to be handled programmatically by tools and other software, but is also still quite friendly to the human eye and a text editor.

The schema describes the meta data of a library and its releases, like name, description, release date and version, links to documentation and packages, etc. The data for Inqlude is centrally collected in a git repository using this schema, and the tools and the web site make use of it to provide nice and easy access to users.
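As a rough illustration, a manifest in such a format could look like this (a hypothetical sketch only; the field names and values here are illustrative, so consult the actual Inqlude schema for the real ones):

```json
{
  "name": "example-widgets",
  "summary": "A hypothetical Qt widget library",
  "urls": {
    "homepage": "http://example.org/example-widgets",
    "download": "http://example.org/download"
  },
  "licenses": ["LGPLv2.1+"],
  "version": "1.0.0",
  "release_date": "2013-10-01"
}
```

A structured record like this is what lets the tooling validate entries and generate the web site automatically.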


The second key component is the tooling around the format. The big advantage of having a structured format to describe the data is that it makes it easy to write tools to deal with the data. We have a command line client, which currently is mostly used to validate and process the data, for example for generation of the web site, but is also meant to help users with installing and downloading libraries. It's not meant to replace a native package manager, but integrate with whatever your platform provides. This area needs some more work, though.

In the future it would be nice to have some more tools. I would like to see a graphical client for managing libraries, and integration with IDEs, such as Qt Creator or KDevelop would also be awesome.

Web site

The third key component is the web site. This is the central place for users


Three years ago, at Randa 2011, the idea and first implementation of Inqlude, the Qt library archive, was born. So I'm particularly happy today to announce the first alpha release of the Inqlude tool, live from Randa 2014.

Picture by Bruno Friedmann

I have been using the tool to create the web site for quite a while, and it works nicely for that. It can also create and check manifest files, which is handy when you are creating or updating them for publication on the web site.

The handling of downloading and installing packages of the libraries listed on Inqlude is not ready yet. There is a basic implementation, but the metadata needed for it is not there yet. This is something for a future release.

I put down the plan for the future in a roadmap. This release, 0.7, is the first alpha. The second alpha, 0.8, will mostly come with some more documentation about how to contribute. Then there will be a beta, 0.9, which marks the point where we will keep the schema for the manifest stable. Release 1.0 will then be the point where the Inqlude tool comes with support for managing local packages, so that it's useful as an end-user tool for developers writing Qt applications. This plan is not set in stone, but it should provide a good starting point. Longer term, I intend to make frequent releases to address the needs reported by users.

You will hear more in my lightning talk "Everything Qt" at Akademy.

Inqlude is one part of the story to make the libraries created by the KDE community more accessible to Qt developers. With the recent first stable release of Frameworks 5, we have made a huge step towards that goal, and we just released the first update. A lot of porting of applications is going on here at the meeting, and we are having discussion about various other aspects how to get there, such as a KDE SDK, how to address 3rd party developers, documentation of frameworks, and more. This will continue to be an interesting ride.

16 August, 2014

openSUSE.Asia Summit 2014 Travel Support Program notes - Part 1: Before the conference

The openSUSE.Asia Summit 2014 will be held in Beijing this year.

The Call for Papers and the Travel Support Program (hereafter TSP) deadline are imminent (18 Aug 2014),
so I wrote down the application process, hoping it will help my own memory later as well as other people. :-)

First of all, I submitted a talk to the openSUSE.Asia Summit 2014, which can be a highlight of a TSP application.
Of course you don't necessarily have to be a speaker; you can also be
  • an openSUSE Advocate (formerly openSUSE Ambassador) (list of ambassadors)
  • an organizer of an event / a participant who wants to take part
  • a contributor interested in openSUSE

The final decision of course rests with the Travel Support Program Committee, but contribute as much as you can and go ahead and apply!

Next, the Travel Support process

First, consult the official documentation

Step 1. (You used to have to mail travel-support@opensuse.org first; it looks like this step can be skipped now.)

Step 2.
Apply for a Travel Support Program request


Log in with your openSUSE account
(Note that in your connect account the full name must match the name in your passport exactly..., otherwise your bank may question you about the name mismatch....)

A list of conferences with an open Travel Support Program will be shown

Select the event you want to attend

螢幕快照 2014-08-16 下午8.33.33.png

Click Apply

螢幕快照 2014-08-16 下午8.35.24.png

On the New request page, enter the relevant information

For example, whether you need a visa letter

In the Description, enter the reasons you should be supported to attend the conference (you have submitted a talk...., you are going as a volunteer....)

Under subject, select the items you want reimbursed from the drop-down menu and enter the corresponding amounts

Click Create request

螢幕快照 2014-08-16 下午8.37.34.png

Remember to click Action Submit


You should then receive a confirmation mail from the system, which completes the application.



-- Enjoy it

What these video cards are and how they differ is described here. However, besides the single difference in operating frequency, there is also the fact that the GV-R785WF2-2GD is significantly neglected by the manufacturer: there are no BIOS updates for it. For the GV-R785OC-2GD, by contrast, the latest BIOS version available was released in May of this year.
That mine is in fact a GV-R785WF2-2GD I learned by accident, after launching the VGA@BIOS utility and trying to update the current version to the latest one available for the GV-R785OC-2GD. The utility solemnly reported "Flash BIOS failed! BIOS version not match!" and kept doing so no matter which version I offered it for flashing.
And indeed, after checking the current firmware version (the video card's current BIOS version can be found with programs such as AIDA64 or GPU-Z, as described, for example, here), I found that it did not match the first version available for the GV-R785OC-2GD. For the GV-R785OC-2GD the first version should be "", while mine was "" (I may be off on the number of zeros).
In the end I updated to that version (a.k.a. F4) with ATIFlash, using the command
atiflash.exe -p 0 -f R785O2GD.F4

, where 0 is the device identifier and "-f" is the switch that forces the flash.
The command does not have to be run from a specially booted DOS environment; in my case it ran successfully from the command line of Windows Vista x64.

Thus, the business-oriented GV-R785WF2-2GD can be turned into the gaming GV-R785OC-2GD. This gives:
  • A GPU frequency increase from 860 MHz to 975 MHz (a free 115 MHz upgrade, as it were).
  • Two years' worth of fixes, a short list of which can be seen on the firmware download page for the GV-R785OC-2GD.

15 August, 2014


#1: openSUSE Factory Rolling Release Distribution

Over the course of the last several months, a lot of changes were made to the development process for openSUSE Factory, meaning it's no longer a highly experimental testing dump but a viable rolling release distribution in its own right. You can read all about the details here. I installed openSUSE Factory in a virtual machine yesterday and it seems to run pretty well. Of course, to really judge a rolling release distribution you need to run it for a sustained period of time.

No rolling release distribution will ever be my preferred day-to-day operating system, but nevertheless I’m pretty excited about the “new” openSUSE Factory. I think the changes will enable version whores and bleeding edge explorers to finally have a truly symbiotic relationship with the users who value productivity and predictability in their PC operating system.


#2: KDE Frameworks 5 and Plasma 5

Since I was already testing openSUSE Factory, it was a great opportunity to finally get my feet wet with the new KDE Frameworks 5 and the Qt5-based KDE Plasma 5 workspace, initially released about a month ago. Obviously it's still lacking some features and polish, but it's already usable for forgiving users who know what they're doing, and it shows great promise.


#3: 4G on the Jolla

My provider enabled 4G on my subscription and offered to send me a new SIM card gratis. So now my Jolla is sporting 4G. Unfortunately, it only took about 5-10 minutes of speed testing (peaking at 12 MB/s, averaging about 10 MB/s) to use up all my available bandwidth for the month, so for the rest of August I've been throttled to 64 kbps, but hey, it's still 4G!


#4: Richard Stallman presenting with a slideshow

Who'd have ever thought they'd see the day when Stallman would do a presentation with accompanying slides? Well, it happened, and I think this use of slides helps him communicate more effectively. Watch the video and judge for yourselves (27 MB, 13 minutes).



openSUSE guide

15.1 Nvidia

Click the button corresponding to your Nvidia graphics card and install using the one-click install.

15.1.2 Recent Nvidia cards

This driver covers every Nvidia card from 2008 or newer. It includes
GeForce 8 and the GeForce 100, 200, 300, 400, 500, 600 and 700 series.

15.1.3 Legacy Nvidia cards

These drivers cover Nvidia cards from roughly 2007 or older,
including GeForce 6 and 7 as well as the GeForce FX5 series.

Restart the computer after installation.

15.1.4 Installing the Nvidia driver in a terminal

If you prefer, you can install the Nvidia driver in a terminal.
First, however, add the repository:
"zypper addrepo -f http://download.nvidia.com/opensuse/13.1/ nvidia"

Then install the package corresponding to your graphics card.

For GeForce 8 and later:
zypper install x11-video-nvidiaG03 (as root)

For GeForce 6 and later:
zypper install x11-video-nvidiaG02 (as root)

For FX5xxx:
zypper install x11-video-nvidiaG01 (as root)

Now restart.

15.2 ATI/AMD

coming soon

15.3 Intel

The 3D drivers for Intel cards are free, so they can be included in the boxed openSUSE version.
No further installation or configuration is needed.




Creating high-quality content takes time. A lot of people write nowadays, but very few are writers. In the software industry, most of those who write very well are on the marketing side, not the technical side.

The impact of high-quality content is very high over time. Engineers and other technology-related profiles tend to underestimate this fact. When approaching content creation, their first reaction is to think about the effort it takes, not the impact. Marketers take exactly the opposite view: they tend to focus on the short-term impact.

Successful organizations have something in common: they put a lot of effort and energy into reporting efficiently across the entire organization, not just vertically but horizontally, and not just internally but also externally. Knowing what others around you are doing, their goals, motivations and progress, is as important as communicating results.

One of the sentences I never stop repeating is that a good team gets further than a bunch of rock stars. I think a collective approach to content creation provides better results in general, in the mid term, than individual ones, especially if we consider how mainstream Free Software has become. There are so many people doing incredible things out there that it is becoming really hard to get attention....

Technology is everywhere, and everybody is interested in it. We all understand that it has a significant impact on our lives and will have even more in the future. That doesn't mean everybody understands it. For many of us who work in the software industry, speaking a language understandable to wider audiences does not come naturally or simply by practising. It requires learning and training.

Very often it is not enough to create something outstanding once in a while to be widely recognized. The dark work counts as much as the work that shines. The hows and whys are relevant. Reputation is not in direct relation with popularity and short-term successes. Being recognized for your work is an everyday task, a mid-term achievement. The good thing about reputation is that once you achieve it, the impact of your following actions multiplies.

We need to remember that code is meant to die, to disappear, to be replaced by better code, faster code, simpler code. A lot of the work we do ends up nowhere. Neither fact is restricted to software, and neither means that creating that code or project was not worth it. Creating good content helps extend the lifetime of our work, especially if we do not restrict it to results.

All of the above are some of the motivations that drive me to promote the creation of a team blog wherever I work. Sometimes I succeed and sometimes I don't, obviously.

What is a team blog for me? 

  • It is a team effort. Each post should be led by a person, an author, but created by the team.
  • It focuses on what the team/group does

14 August, 2014


Because it is still being reported that the ownCloud client's memory footprint keeps increasing when it runs for a long time, I am trying to monitor the QObject tree of the client. Valgrind does not report any memory problems with it, so my suspicion was that somewhere QObjects are created with valid parent pointers referencing a long-living object. These objects might accumulate unexpectedly over time and waste memory.

So I tried to investigate the app with Robert Knight's Qt Inspector. That's a great tool, but it does not yet completely do what I need, because it only shows QWidget-based objects. But Robert was kind enough to put me on the right track; thanks a lot for that!

I tried this naive approach:

In the client's main.cpp, I implemented these two callback functions:

 QSet<QObject*> mObjects;

 // Called by Qt whenever a QObject is constructed.
 extern "C" Q_DECL_EXPORT void qt_addObject(QObject *obj)
 {
     mObjects.insert(obj);
 }

 // Called by Qt whenever a QObject is destroyed.
 extern "C" Q_DECL_EXPORT void qt_removeObject(QObject *obj)
 {
     mObjects.remove(obj);
 }

Qt calls these callbacks whenever a QObject is created or deleted, respectively. When an object is created I add its pointer to the QSet mObjects, and when it is deleted it is removed from the QSet. My idea was that after the QApp::exec() call returns, I would see which QObjects are still in the mObjects QSet. After a longer run of the client, I hoped to see an abnormally large number of objects left over.

Well, what should I say… no success so far: after first tests, it seems that the number of left-over objects is pretty constant. Also, I don't see any objects that I would not more or less expect.

So this little experiment left more questions than answers: Is the suspicion correct that QObjects with a valid parent pointer can cause the memory growth? Is my test code, as it stands, able to detect that at all? Is it correct to do the analysis after the app.exec() call has returned?

If you have any hints for me, please let me know! How would you tackle the problem?


This is the link to my modified main.cpp:



During the past four months of this year's Google Summer of Code (GSoC), a global program that offers student developers stipends to write code for open source software projects, Christian Bruckmayer collaborated with other students and mentors to code a dashboard for the Open Source Event Manager (OSEM). In this series of posts Christian tells you about his project and what he has learned from this experience.

Google Summer of Code 2014 Logo

Christian Bruckmayer: Hey there, Christian here again. This is my last post in a series about my GSoC project. I have already explained the two big features I implemented: the dashboard and Conference Goals & Campaigns. I hope you enjoyed those articles; if you haven't read them, I recommend you head over and do so. Today I would like to tell you about the most important part of GSoC for me personally: what I have learned during this summer!

The Open Source Way

I can really say that I gained much experience, both technically and personally, during GSoC. Working together, the open source way, was a great experience. It goes like this: I discuss a feature with the OSEM team in GitHub issues, then I start to implement the feature and send a Pull Request to our repository. The mentors then review my code and give me their suggestions for improving it. After I have worked in the suggestions, the process starts again.

This feedback helped me a lot. We discussed code smells, bad design decisions or wrong assumptions, right there, next to the code on GitHub. And as four eyes see more than two, this process ensured that only good code gets into the repository!

Working together, but self-driven

On the one hand it was awesome to work together with experienced and very skilled developers. The constructive criticism that I got for my work helped me a lot to get better every day, and it still does. But on the other hand I was responsible for my own project. It was a challenge because no one would tell me when I had to work, and no one gave me a step-by-step list. I had to learn to organize the work myself somehow. Being a child of self-employed parents was a big advantage for me in GSoC, as I have a basic understanding of prioritization, scheduling my day and being self-dependent. Still, working together with the other students and mentors while staying self-driven was something I learned this summer.

Test Driven Development

Another nice thing I got to know was test driven development. During previous student jobs I had already written software tests, but only after other developers had implemented the features. In my GSoC project I got to think about the tests first, and only then did I start to implement the feature. Implementing something this way around, tests first, forces you to think about the design decisions. ‘Does it belong to the model or to the controller?’ or ‘How can I split this up to make it easier to test?’ are questions I


Today's business climate demands a change of mindset: we need to think of ourselves outside the office. Jason Fried, an entrepreneur in the software industry, has a radical theory of work: that the office is not a good place to do it. He claims that "if you ask people where they go when they need to get something done, you almost never hear anyone say the office."

Starting from that premise, and from the restructuring that today's jobs are undergoing, we need to recognize the trends and consider the opportunity that working in virtual environments offers.

Working virtually is nothing new, but it is growing rapidly, and the advantages these environments provide are countless. Among the most significant:

  • The initial investment for any organization is minimal, because large computing-infrastructure resources are available at low cost; when online collaborative tools are used, no investment in software or hardware is even required.
  • To cut costs, offices are being downsized or even eliminated altogether, which in turn allows organizations to establish different kinds of relationships with suppliers, customers, employees, partners and others worldwide.
  • Labor flexibility and, as a consequence, a better balance between professional and family life.
  • The globalization of markets.
  • Growing consumer expectations.
  • Organizations are forced to react more quickly to changes in their environment and to competitors, given the possibilities the Internet offers, above all in the supply of products.

But this kind of work also presents significant challenges. Today's managers must therefore acquire new skills and tools that make it feasible for them to navigate this transition, adapt, and take on their own digital literacy. In other words, talent with different kinds of experience is needed: people who can adapt to current circumstances and who are able to prepare and steer their company toward openness and change, as well as toward the new paradigms, values and patterns of social behavior that are emerging.

Clearly, with the creation of new virtual workspaces, whoever leads these groups, in which people and teams routinely collaborate from remote locations, is responsible for managing projects with people who work neither in the same place nor on the same schedule, consolidating a society based on knowledge and on the rapid transmission of messages and information.

Along the same lines, another challenge to consider is security: being able to work anywhere, at any time, can pose a significant risk, so companies must pay special attention to protecting sensitive information.


Heya geekos!

We love the fact that the openSUSE News section is being generally well-adopted and well read. But, we’d like to do more, and do better! And for that, we need your input. Don’t worry, we won’t demand any 10000 characters super-articles (for now :P), but what we would like from you is to fill out a little survey. It’s very very very short, as we don’t want it to be too time consuming, but we would like to know if, generally speaking, we’re heading in the right direction. Or in a wrong one. Anyway, it would be nice to know what the openSUSE News readers think about its content, so we can make it better. There’s nothing we’d like more than to bring you additional enjoyment while you’re drinking your morning coffee and clicking through your favorite news sites!

So, what we politely ask you to do is drop everything, and click here to fill out the survey. It’s short and shouldn’t take more than a minute or two of your time, but it would help us a great deal.

The survey will be open until the 31st of August. We’ll post the results in the first days of September right here at openSUSE News. And there will be a graph included:

statistics geeko inside

There shall be a fancy graph!

This is just the first step in the news team’s interaction with its geeko reader base. Needless to say the survey is anonymous. Also, I’d like to ask you if you could share this survey through your social networks or with other readers, so we can get the most representative possible input.

Thanks again for helping us out, and remember to…


…have a lot of fun!


As we have already learned, we can run programs directly from the command line simply by typing their name. For example, when we type dolphin, the Dolphin file manager opens. If you look at the terminal while this process is open, you cannot type a new command inside the same window at that moment. Only once you quit Dolphin can you type a new command into the shell. Instead, type:

dolphin &

Now you have the file manager running in the background and the terminal is free for the next command you need.

Now imagine you forgot to type the ‘&’ character after dolphin. Simply press ‘ctrl+z’, which stops the process. To resume the stopped process, type:

jobs, ps

Now we have processes running in the background. You can list them using jobs or ps. Try it: type jobs, or type ps. Here’s what I get:

nenad@linux-zr04:~> ps


8356 pts/1    00:00:00 bash

8401 pts/1    00:00:00 dolphin

8406 pts/1    00:00:00 kbuildsycoca4

8456 pts/1    00:00:00 ps

Killing a Process

How do you get rid of a process that does not respond? By using the kill command. Try it out on the dolphin process. First we have to identify the PID of the process using ps. In my case it is 8401 for dolphin; I terminate it simply by typing:

kill 8401

kill is not only for terminating processes; it was originally designed to send signals to processes. There are of course several kill signals you can use, and they may differ depending on the application you are using. See below:

1 SIGHUP Hangup signal. This signal is sent to the processes running in a terminal when you close that terminal.
2 SIGINT Interrupt signal. This signal is sent to interrupt a process; programs can handle it and act on it. You can also issue it directly by pressing Ctrl-c in the terminal window where the program is running.
15 SIGTERM Termination signal. This signal asks a process to terminate; programs can handle it and act on it. It is the default signal sent by the kill command when no signal is specified.
9 SIGKILL This signal causes immediate termination of the process and cannot be handled by the program.

Do try them out.


And so this chapter concludes our command line Tuesdays series.

I hope that other newbies like me have come to like the console and no longer believe the old, oft-repeated tales and misconceptions about it.


When installing the AWS CLI (the client for Amazon's APIs) on some minimal systems, you may run into the following error:

# aws s3 help
[Errno 2] No such file or directory

This error happens because the less command is missing. The solution is simply to install it, and you're done!

# zypper in less


13 August, 2014

Short answer: because you should.

When somebody asks about their missing pet feature in KDE or ownCloud software, I always throw in a request for help in the answer. Software development is hard work and these features don't appear out of nowhere. There are only so many hours in a day to work on the million things we all agree are important. There are many ways to help out and speed things up a little. In this blog I'd like to highlight testing, because I see developers spend a lot of time testing their own software - and that is not as good as it sounds.

Developers also do testing!

You see, developers really want their software to be good. So when an Alpha or Release Candidate does not receive much testing from users, the developers take it on themselves to test it.

Developers testing software has two downsides:
  • Developers tend to test the things they wrote the software to do. It might sound obvious, but usually the things that break are things the developer didn't think of: "you have 51,000 songs? Oh, I never tested the music app with more than 4,000" is what I heard just yesterday.
  • And of course, it should be obvious: early and lots of testing speeds up development so you get those features you want!
Take two lessons from this:
  • If you want things to work for you, YOU have to test it.
  • If you want those other features, too, helping out is the name of the game.

It isn't hard

In the past I wrote an extensive article on how to test for KDE, and ownCloud, too, has really nice testing documentation.

If you want to get on it now, Klaas Freitag just released ownCloud client 1.7 alpha 1 and openSUSE has moved factory to a rolling release process to make it easy to help test. KDE Applications 4.14 is at the third beta and the Release Candidate is around the corner.

Your testing doesn't just save time: it is inspiring and fun, for everybody involved. For added kicks, consider joining us at the ownCloud Contributor Conference in two weeks in Berlin, and KDE has Akademy coming just two weeks later!

Help make sure we can get our features done in time - help test and contribute your creativity and thoughts!

note: I'm not arguing here against testing by developers, rather that users should help out more! Of course, developers should make sure their code works, and unit tests and automated testing are great tools for that. But I believe nothing can replace proper end-user testing in real-life environments, and that can only really be done by end users.


openSUSE, although much of the web claims it’s primarily a KDE distro, prides itself on offering a one-stop shop for your operating system needs, regardless of your desktop environment preferences. And it’s true. For a couple of months, I’ve been running openSUSE GNOME exclusively on my laptop. And it worked like a charm. But there was one problem.

Whichever system I’m running, I absolutely must have a wallpaper slideshow complementing my desktop theme preferences. That way, I ensure my desktop is fresh and attractive to look at every time I close a window. But, as already stated, there was a problem. The wallpaper slideshow from extensions.gnome.org didn’t seem to work for me and kept on crashing. So I had to find an alternative. How? While search engines can be a good friend, I decided to ask on the forums, to get a first-hand experience. As always,  the kind geekos on the green side of the fence looked into the issue, found a solution and helped a brother out. I’ve been offered a solution, and it’s called Variety.

There was a problem in using Variety on openSUSE – the packages didn’t exist. So, naturally, malcolmlewis stepped in and packaged Variety for us to use!

How to install Variety?

Malcolm created a one-click install for this fantastic wallpaper changer. You can get it here. But, before installing, make sure to install python pillow, as it’s a dependency. You can get it here.

So what’s so special about Variety?

Along with the obvious function, which is changing wallpapers, it’s how it changes them, what really matters. You can add a local pictures folder, or, if you don’t feel like it, you can let Variety download wallpapers directly from the internet from different sources (flickr, wallbase etc.). To learn more about the app’s preferences, check out this video made by a Linux Mint user.


Do you have any experience with Variety or would maybe like to suggest a wallpaper slideshow app? Join us in the comments! And until then, remember to…

…have a lot of fun!

12 August, 2014


Yo yo, geekos! Here we are, for the final chapter of our CLT hangout. Today, we’ll be talking about job control through which we’ll learn how to control processes running on our computer!

An Example

As we have learned, we can run programs directly from the CLI simply by typing the name of the program, for example dolphin. If we type:

dolphin

…Dolphin, the file manager, opens. If you look at the terminal while this process is open, you cannot access the command prompt and you cannot type a new command inside the same window. If you terminate dolphin, the prompt reappears and you can type a new command into the shell. Now, how can we run a program from the CLI while also keeping our prompt available for further commands? By appending the ‘&’ character:

dolphin &

…and you have your dolphin file manager running in background, with the terminal free to type another command you need.

Now imagine you forgot to type the ‘&’ character after dolphin. Simply type ‘ctrl+z’, which will stop your process and put it in idle. To resume the stopped process, type:


…which will restart the process from the background.

jobs, ps

Now that we have processes running in the background, you can list them either using jobs, or using ps. Try it. Just type jobs, or type ps. Here’s what I get:

nenad@linux-zr04:~> ps
8356 pts/1    00:00:00 bash
8401 pts/1    00:00:00 dolphin
8406 pts/1    00:00:00 kbuildsycoca4
8456 pts/1    00:00:00 ps
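To try the listing commands without opening a GUI application, here is a small reproducible variant of the same experiment; sleep is used as a hypothetical stand-in for dolphin:

```shell
# Put a harmless long-running process in the background
sleep 300 &

# 'jobs' lists the background jobs started from this shell...
jobs

# ...while 'ps' lists the processes attached to this terminal
ps
```

You should see the sleep process in both listings, just like dolphin above.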


Kill a Process

How do you get rid of a process if it’s become unresponsive? By using the kill command. Let’s try it out on our previously mentioned dolphin process. First, we have to identify the PID of the process by using ps. In my aforementioned case it’s 8401 for dolphin. So to kill it, I simply type:

kill 8401

…and it kills off dolphin.

More About Kill

Kill doesn’t exist only for terminating processes; it was originally designed to send signals to processes. And of course, there are a number of kill signals you can use, which can behave differently depending on the application. See the table below:

1 SIGHUP Hangup signal. Sent to the processes running in a terminal when that terminal is closed.
2 SIGINT Interrupt signal. Sent to interrupt a process; programs can handle this signal and act on it. You can also issue it by pressing Ctrl-c in the terminal window where the program is running.
15 SIGTERM Termination signal. Asks a process to terminate; programs can handle this signal and act on it. This is the default signal sent by kill when none is specified.
9 SIGKILL Causes immediate termination of the process and cannot be caught by the program.

Do try them out.
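Here is a safe way to experiment with these signals, using sleep as a hypothetical stand-in for an unresponsive application:

```shell
# Start a long-running process in the background (sleep stands in for dolphin)
sleep 300 &
PID=$!

# Send the default signal, SIGTERM (15): a polite request to terminate
kill "$PID"

# Reap the finished child; its exit status is non-zero because it was signalled
wait "$PID" 2>/dev/null || true

# SIGKILL (9) cannot be caught and forces immediate termination:
#   kill -9 "$PID"
```

`kill -0 "$PID"` afterwards will report that no such process exists, confirming it is gone.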


With this lesson, we conclude our CLT series and our Tuesday hangouts. I hope that other n00bs like me managed to demystify the console in their minds and learn the basics. Now all that’s left is for you to play around (just don’t mess around the / directory too much so you don’t bork something :D).

We’ll be seeing a lot more of each other soon, as there’s more series of articles from where these came from. Stay tuned, and meanwhile…


…have a lot of fun!


So some more bits have arrived for #FrankenPi. First up, we have the Samsung Multi Charging Cable for Galaxy S5. It’s basically a USB cable but with three Micro USB ends. This will allow me to power three Pis from one source rather than three power bricks; as it’s just a cable, the ‘juice’ output will be whatever you chuck down it.

Also the Plugable USB 2.0 7 Port Hub and BC 1.2 Fast Charger with 60 Watt Power Adapter arrived this morning. It was not originally my intention to power the Pis off the hub, that is until I came across this piece of kit. Now a few people have questioned the power output, which tbh is all voodoo to me; however, the tech specs say the following: the supplied power adapter delivers 10A of available current across all USB ports, which leaves us 500mA short (7 × 1.5A = 10.5A). I read that to mean “the more you plug in, the less juice”, unless it’s managed in some way so that each port only gets 1.5A? At the end of the day the hub was going to be for the Pis to communicate with a 2.5" external drive anyway.



Dan Collins “That’s a nerd ménage à trois!”



We’re writing this reminder to all geekos with a sense for design, asking them to help us out with the summit. Your logo may be printed and used on promotional materials, and there’s also a super-secret Geeko prize! So we warmly encourage you to join in; the deadline is August 18. We already covered the rules in this article -


When modifying the sendmail configuration, we use m4 to convert the configuration file into a .cf file. If running m4 sendmail.mc > sendmail.cf produces the error m4: Cannot open /usr/share/sendmail-cf/m4/cf.m4: No such file or directory, check whether the sendmail-cf package is installed on the host; once it is installed, the problem is resolved.
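As a sketch, the fix on openSUSE looks like this (the location of sendmail.mc varies between setups; /etc/mail/sendmail.mc is an assumption here):

```shell
# Install the package that provides /usr/share/sendmail-cf/m4/cf.m4
zypper in sendmail-cf

# Re-run the conversion; m4 should now find cf.m4
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
```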
