Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Tuesday
22 April, 2014



We are happy to announce that the LibreOffice project has 10 Google Summer of Code projects for this 10th edition of the program. The selected projects and students are:

Project Title – Selected Student

• Connection to SharePoint and Microsoft OneDrive – Mihai Varga
• Calc / Impress tiled rendering support – Andrzej Hunt
• Improved Color selection – Krisztián Pintér
• Enhancing text frames in Draw – Matteo Campanelli
• Implement Adobe PageMaker import filter – Anurag Kanungo
• Improvements to the Template manager – Efe Gürkan YALAMAN
• Dialog Widget Conversion – freetank
• Dialog Widget Conversion – sk94
• Improve Usability of Personas – Rachit Gupta
• Refactor god objects – Valentin

We wish all of them a lot of success and let the coding start!


Monday
21 April, 2014



Continuing our efforts to make the SUSE Cloud 3 Admin Appliance a quick and easy way to deploy OpenStack, we have reached version 1.1.0. You can download the Standard or Embedded version.

Standard v1.1.0: https://susestudio.com/a/Mrr6vv/suse-cloud-3-admin
Embedded v1.1.0: https://susestudio.com/a/Mrr6vv/suse-cloud-3-admin-embedded

Standard has a process which will mirror all of the required repositories for the Admin Server, and contains the SLES 11 SP3 / SUSE Cloud ISOs.

Embedded has everything that the standard image has and all of the required patch and update repositories in the image ready for you to consume. It might take a little longer to download but might be worth the wait if you need something with everything included and you want a quick testing environment to play with.

Changes from the GitHub project:
1. Restructured files into proper kiwi build directories to make it easier to build from a checkout
2. Made shell code indentation consistent
3. Added a proper README.md
4. Eliminated disk wastage from rebuilding huge .txz archives
5. Eliminated copy-and-paste duplication between the setup-suse-crowbar* scripts
6. Provided a sensible default network config as outlined in the Deployment Guide
7. Mounted the SLES 11 SP3 / Cloud ISOs permanently instead of extracting files once the appliance is deployed


Sunday
20 April, 2014



I switched the filesystem on my old Thinkpad X60 from ext3 to ext4. It took a while, but it looks like it will be worth it: find over the kernel tree now takes 6.3 seconds instead of 17.7, a performance improvement of 2.8x. As I use unison to synchronize machines, I expect to see some real improvement.
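
For reference, the usual in-place conversion goes like this (a minimal sketch, assuming an unmounted partition; the device name is an example, and existing files keep their old block-mapped layout, so only newly written files use extents):

umount /dev/sda2
tune2fs -O extents,uninit_bg,dir_index /dev/sda2
e2fsck -f /dev/sda2    # mandatory after enabling the new features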

And here's some fun you can have with busybox. Spot 3 glitches.

root@socrates:/mnt/my# dd if=u-boot.img of=/dev/mmbclk0p1 bs=64K seek=4
3+1 records in
3+1 records out
224220 bytes (224 kB) copied, 0.00113385 s, 198 MB/s

root@socrates:~# dd if=/dev/mmcblk0p1 of=orig.spl.1 bs=64k seek=0 count=0
0+0 records in
0+0 records out
0 bytes (0 B) copied, 3.972e-05 s, 0.0 kB/s

socrates login: riit^H^H^H^[[2~^H^H^[[3~
login: loginprompt.c:164: login_prompt: Assertion `wlen == (int) len -1' failed

Friday
18 April, 2014



Register now!

We are happy to announce that at long last the schedule for oSC14 has landed, and you can find the details of the once again jam-packed conference here. We already published an extended sneak peek as well as information on the keynote by Michael Meeks.

Kicking off

In a week from today, on Thursday the 24th of April, the conference kicks off with a welcome get-together at the Sesame Tavern, which is just 200 meters away from the venue. Standing with your back to the venue, turn to your right and walk slightly downhill toward the old town of Dubrovnik. Approximately 200 meters down the street you will see the Sesame sign on your right. The tavern is located at Branitelja Dubrovnika 23, 20000 Dubrovnik. We will start at 6:00 P.M. and hope to see you there!

The sessions

We are very excited about the excellent talks and workshops that were submitted and that we were able to select to make up this year’s conference program. Registration opens on Friday the 25th of April at 8:30 in the morning, and will open each day thereafter at 9:00 A.M. The program kicks off with the opening keynote by the openSUSE board on Friday morning at 10:00 and should keep you busy until we end the conference around lunchtime on Monday the 28th. Between the start of the schedule on Friday and the close on Monday, the speakers will deliver close to 70 hours’ worth of content in 60 sessions of various formats. There is also room for on-the-spot hack sessions and BoF meetings for those that need a bit more than just standing in the hallway.

Relaxation

With all the material being presented we will not lose sight of the fact that a bit of relaxation is needed as well. The conference party on Sunday evening starts at 7:00 P.M. (19:00) at the EastWest Pub, which is located right at the beach at Frana Supila 4. The pub is just south of the old town and it takes about 15 to 20 minutes to walk there from the venue. On the way you will pass Sesame, where everything starts on Thursday evening.

Beginnings and endings

On Monday around noon we will start to pack up and say goodbye, until another year at oSC15 in ……; for this you will need to wait until the keynote on Friday.

Check out the schedule here, find more details on the conference website and plan your participation. We will see you in Dubrovnik next week!

The Program Team



OpenStack Icehouse has been released and packages are available for openSUSE 13.1 and SUSE Linux Enterprise 11 SP3 in the Open Build Service.
The packages are developed in the Cloud:OpenStack:Icehouse project and can be downloaded from the openSUSE download server:

  • For openSUSE 13.1, you can add the Icehouse repository with:
    zypper addrepo -f obs://Cloud:OpenStack:Icehouse/openSUSE_13.1 Icehouse
  • For SLES 11 SP3, add the icehouse repository with:
    zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse
If you’d like to install from packages, follow the OpenStack Installation Guide for openSUSE and SUSE Linux Enterprise, which has been updated for Icehouse.

With Icehouse, you can now use the dashboard in German and also use a new wizard for network creation.

Additional information about the release is available on the OpenStack web page.

My personal highlights of the new release are:

  • The translation of the dashboard to German and the new accordion navigation in the dashboard
  • The updated Installation Guides, which have been greatly improved
  • The new Database module (trove) that allows easy creation and manipulation of databases for usage by virtual machines, and thus gives you Database-as-a-Service
  • The Compute component now allows migrations and upgrades: you can first update the controller nodes and then the compute nodes, and thus run a mixture of old and new compute nodes
  • The great progress that Orchestration is making, with better integration of projects and giving users full control over manipulating stacks
  • Learning about the "most insane CI infrastructure". The more I learn about the CI infrastructure and interact with the infra team, the more my appreciation of their great work grows. Thanks a lot, Clark, Elizabeth, Fungi, Monty, James, Sergey, et al.!

Also, thanks to the Documentation team: it was again a lot of fun to work together with all of you to release the Icehouse documentation and improve OpenStack! Team, I look forward to drinking with you the 104 beers that Gauvain promised for Atlanta ;)

Now on to getting Icehouse integrated into SUSE Cloud for our next release...



    Thursday
    17 April, 2014



    After each commit, most OpenStack projects send updated files for translation to Transifex. Also, every morning, any translated files get merged back into the projects as a "proposal" - a normal review with all the changes in it.
    Quite a lot of projects had broken translation files - files with duplicate entries that Transifex rejected - and thus no update happened for some time. The problem is that these jobs run automatically and nobody gets notified when they fail.

    Clark Boylan, Devananda van der Veen and I have recently looked into this and produced several fixes to get this all working properly:

    • The broken translation files (see bug 1299349) have been fixed in time for the Icehouse release.
    • The gating has been enhanced so that no broken files can go in again.
    • The scripts that send the translations to and retrieve them from transifex have been greatly improved.
    The scripts have been analyzed and a couple of problems fixed, so that no obsolete entries should show up anymore. Additionally, the proposal scripts - those that download from Transifex - have been changed to not propose any files where only dates or line numbers have changed. This last change is a great optimization for the projects. For example, the sahara project used to get a proposal every morning that contained two line changes for each .po file - just updates of the dates. Now it does not get any update at all unless there is a real change in the translation files. A real change is either a new translation or a change of strings. For example, today’s update to the operations-guide contained only a single file (see this review), since the only change was a translation update by the Czech translation team - and sahara did not get an update at all.

    New User: OpenStack Proposal Bot

    Previously the translation jobs were run as user "Jenkins"; they have now been changed to run as "OpenStack Proposal Bot".

    Now, another magic script showed up and welcomed the Proposal Bot to OpenStack.
    Unfortunately, the workflow did not work for most projects - the bot forgot to sign the contributor license agreement.
    I’m sure the infra team will find a way to have the bot sign the CLA, and starting tomorrow the import of translations will work again.

    Wednesday
    16 April, 2014


    Yesterday was my last day as a KDE e.V. Board member. As you know, I have been the KDE Treasurer since April 2012. I will keep being part of the Financial Working Group, so I will be able to help my successor during the landing process and in the future. I still have some leftovers to finish (reports) and I plan to write a couple of posts about our numbers, so you all know what the situation of KDE e.V. is in general.... healthy, by the way :-) It is a soft transition.

    KDE e.V. is at the right time to be ambitious and heavily increase its resources to support the KDE community. Several decisions have been made in this regard and they will be executed during 2014. The financial situation is healthy enough to afford some level of expansion. So I think it is time for somebody else to come with energy and enthusiasm to drive these changes in the following months/years. And we have that person so.....

    KDE e.V. is a solid organization, well managed and with a Board that takes the financial area seriously. It has been a pleasure and an honor to be part of the Board.

    On the other hand, my relationship with SUSE will end this month. Working on openSUSE, and especially building and leading the openSUSE Team, has been a great experience. I wish them all the best, especially in their current main task: turning Factory into a "usable" rolling release by changing the development workflow/process. It is a goal with a high impact for openSUSE.
     
    openQA has a nice present and a tremendous potential and future, not just from the technical but also from the business point of view. For those of you looking for a great place to work, consider SUSE. It was one for me.

    For the last few weeks I have been temporarily living in Prague. I love this city. I am not attending the openSUSE Conference (I am sure it will be a great one) and I am not sure if I will be able to go to Akademy-es, which is a pity since it takes place in Malaga, where I lived for three years, and it is organized by one of my colleagues, Antonio Larrosa. I plan to go to Akademy in Brno though.

    As you can see, these are times of change, after around two years of putting my best into the KDE e.V. Board and SUSE/openSUSE. I have no idea what I am going to do next, but I am sure it will be exciting, so expect an article soon called "Open Doors". Otherwise.... I will not know what to do with so much time, or maybe I will... write more posts. :-)

    Tuesday
    15 April, 2014



    Are you bored or seeking something to do? Do you want to do something that your friends will call a waste of time, but which is highly nerdy and most cool? Do you want to know what makes openSUSE, or Linux in general, tick?

    Kernel? eh what?

    Imagine a car where you can change the motor as many times as you like. You can tune your motor as much as you want and run it in your car again; with Linux you can do that. Your ride with a new kernel can be good or really bad (or something in between).
    As you might know, "Linux" is really just the name of the Linux kernel, much as Google Android is just Google’s forked Linux kernel. What happens after the kernel boots is not that important anymore.

    User-space and kernel-space

    Nearly every modern operating system separates kernel-space and user-space. Your applications, like your browser, work in user-space, and your USB stick operates in kernel-space. In normal life you never have to cross into kernel space, but if you do, you should really find out what the /dev, /proc and /sys directories in Linux are. Those directories contain kernel stuff; you can check on the Internet what is in there, or read how /sys works in this Wikipedia article.

    openSUSE kernel

    The official Linux kernel can be found at https://www.kernel.org/ and kernel newbies can go here (really give that site some time if you don’t know what you are doing). You have to make clear to yourself that there are no official binaries for the Linux kernel. If you compile and start using your own, it’s as official as anyone else’s. Distributions have their own official kernel binaries, and that’s that.
    So can you just pull the kernel down from git version control and start compiling right away? It’s not that easy: you need a config file, and creating one is the hardest part. There are preset configs in Linux, but I assume you are using the openSUSE version of the Linux kernel.
    What does that mean, you ask? You have to understand that there is one and only one official Linus Torvalds kernel git, and then there are hundreds of kernel versions that contain some random stuff that is not allowed into the mainline Linus Torvalds version.
    Nearly every distribution has its own version of the kernel, and openSUSE is no exception. The openSUSE Linux kernel can be pulled from git: http://kernel.opensuse.org/cgit/kernel/.
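
    A minimal sketch of that workflow (assuming your running kernel was built with CONFIG_IKCONFIG_PROC, so /proc/config.gz exists; the clone URL and branch layout are illustrative):

    git clone git://kernel.opensuse.org/kernel.git
    cd kernel
    zcat /proc/config.gz > .config    # start from the running kernel's config
    make olddefconfig                 # accept defaults for any new options
    make -j"$(nproc)"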

    WTF? ROFL? I’ll install something that only has binaries

    Yes, please do: there is a kernel binary running in your openSUSE right now and you don’t have to change that. Compiling a kernel is not for people in a hurry, nor for people who don’t have an adventurous mind or some urge to do it. It takes time to learn and years to really master. In that time frame you just have to accept that sometimes your newly compiled kernel doesn’t even boot. So we start, and we need some tools (install them as root):

    zypper 

    Monday
    14 April, 2014



    Java 8’s default methods on interfaces mean we can implement the decorator pattern much less verbosely.

    The decorator pattern allows us to add behaviour to an object without using inheritance. I often find myself using it to “extend” third party interfaces with useful additional behaviour.

    Let’s say we wanted to add a map method to List that allows us to convert from a list of one type to a list of another. There is already such a method on the Stream interface, but it serves as an example.

    We used to have to either

    a) Subclass a concrete List implementation and add our method (which makes re-use hard), or
    b) Re-implement the considerably large List interface, delegating to a wrapped List.

    You can ask your IDE to generate these delegate methods for you, but with a large interface like List the boilerplate tends to obscure the added behaviour.



    class MappingList<T> implements List<T> {
        private List<T> impl;
     
        public int size() {
            return impl.size();
        }
     
        public boolean isEmpty() {
            return impl.isEmpty();
        }
     
        // Many more boilerplate methods omitted for brevity
     
        // The method we actually wanted to add.
        public <R> List<R> map(Function<T,R> f) {
            return impl.stream().map(f).collect(Collectors.toList());
        }
     
    }

    Guava gave us a third option

    c) Extend the Guava ForwardingList class. Unfortunately that meant you couldn’t extend any other class.

    Java 8 gives us a fourth option

    d) We can implement the forwarding behaviour in an interface, and then add our behaviour on top.

    The disadvantage is you need a public method which exposes the underlying implementation. The advantages are you can keep the added behaviour separate, and it’s easier to compose them.

    Our decorator can now be really short – something like

    class MappableList<T> implements List<T>, ForwardingList<T>, Mappable<T> {
        private List<T> impl;
     
        public MappableList(List<T> impl) {
            this.impl = impl;
        }
     
        @Override
        public List<T> impl() {
            return impl;
        }
    }

    We can use it like this

    // prints 3, twice.
    new MappableList<String>(asList("foo", "bar"))
        .map(s -> s.length())
        .forEach(System.out::println);

    The new method we added is declared in its own Mappable<T> interface which is uncluttered.

    interface Mappable<T> extends ForwardingList<T> {
    	default <R> List<R> map(Function<T,R> f) {
    		return impl().stream().map(f).collect(Collectors.toList());
    	}
    }

    The delegation boilerplate we can keep in its own interface, out of the way. Since it’s an interface we are free to extend other classes/interfaces in our decorator

    interface ForwardingList<T> extends List<T> {
        List<T> impl();
     
        default int size() {
            return impl().size();
        }
     
        default boolean isEmpty() {
            return impl().isEmpty();
        }	
     
        // Other methods omitted for brevity
     
    }

    If we wanted to mix in some more functionality to our MappableList decorator class we could just implement another interface. In the above example we added a new method, so this time let’s modify one of the existing methods on List. Let’s make a List that always thinks it’s empty.

    interface AlwaysEmpty<T> extends ForwardingList<T> {
        default boolean isEmpty() {
            return true;
        }
    }

    Sunday
    13 April, 2014



    I’ve been thinking about how to express joins in my pure Java query library.

    Hibernate and other ORMs allow you to map collection fields via join tables. They will take care of cascading inserts and deletes from multiple tables for you.

    While this is nice and easy, it can cause problems with bigger object graphs, because it hides the performance implications of what is going on from you. You may appear to be saving a single object, but that might translate into modifying millions of rows.

    There is still more complexity, as you can map the same relational structure onto different Java datatypes. Different Hibernate collections have different performance characteristics: Sets have a query performance penalty from ensuring uniqueness, Lists have an ordering penalty, and bags have a persistence cost from recreating the collection on save.

    This complexity makes me think that transparently doing joins and mapping collection properties is not a good idea, at least without a lot of thought.

    The nice thing about what I had so far was that there were no Strings needed (except for values). All properties were referred to as Java properties. There was also no code generation needed (unlike JOOQ).

    This is what I’ve come up with for creating and querying many-to-many relationships. Let’s say we have Person and Conspiracy entities, persisted as per my last post.

    class Person {
    	String getFirstName();
    	String getLastName();
    	// ...
    }
    class Conspiracy {
    	String getName();	
    	// ...
    }

    A person can be in multiple conspiracies, and conspiracies can have many people involved, so we need another relationship table in the database schema.

    I’ve got the following code creating a conspiracy_person table with appropriate foreign keys. It’s reasonably nice. Unfortunately, we have to have separate fieldLeft and fieldRight methods for fields referencing the left and right hand sides of the relationship. It’s type erasure’s fault, as usual. We can’t have methods that differ only by their generic type parameters.

    create(relationship(Conspiracy.class, Person.class))
        .fieldLeft(Conspiracy::getName)
        .fieldRight(Person::getFirstName)
        .fieldRight(Person::getLastName)
        .execute(this::openConnection);

    Equivalent to

    CREATE TABLE IF NOT EXISTS conspiracy_person ( 
        conspiracy_name text, 
        person_first_name text, 
        person_last_name text 
    );

    We can delete in the same manner as we did with simple tables. Again I can’t see a way to avoid having left/right in the method names.

    delete(relationship(Conspiracy.class, Person.class))
        .whereLeft(Conspiracy::getName)
        .equalTo("nsa")
        .andRight(Person::getLastName)
        .equalTo("smith")
        .execute(this::openConnection);

    Equivalent to

    DELETE FROM conspiracy_person WHERE conspiracy_name = ? AND person_last_name = ?;

    Now for saving a collection. Saving all the items in a collection at once is something we’re likely to want to do. We can do the following: for each person in our conspiracy, persist the relationship between the conspiracy and the person. This is going to be slow, but at least we’ve made it obvious what we’re doing.

    nsa.getMembers().forEach(agent -> {
        insert(nsa, agent)
            .valueLeft(Conspiracy::getName)
            .valueRight(Person::getLastName)
            .valueRight(Person::getFirstName)
            .execute(this::openConnection);
    });

    Equivalent to repeated

    INSERT INTO conspiracy_person (conspiracy_name, person_last_name, person_first_name) VALUES (?, ?, ?);


    I like to call this “drive-by-bugfixing” and this is how it usually happens:

    • I have a problem with e.g. xfce4-power-manager, which I’m unable to fix right now
    • I check out some random other package (let’s call it “yerba-power-manager”) to check if it can replace xfpm for me
    • I find it has a bug. Or two. Actually caused by broken openSUSE patches trying to implement new APIs
    • Because it is "an interesting problem", I fix them just for fun
    • Later I find that I have no use for this package as (for unrelated reasons) it does not fix any of my original problems

    So far so good. Now I have a fixed package lying around in my home:seife buildservice repository. Trying to be a good citizen, I submit it back to the original YERBA desktop project.
    Can you imagine what happens next?
    Correct! It gets rejected. Why? Because I did not mention all my patches in the changelog.

    Come on, guys. Policies etc. are all fine, but if you want people to help maintain your broken packages, then don’t bullshit them with policy crap, period.
    I had done the heavy lifting last Sunday and fixed the bugs; now all that the desktop maintainer would have needed to do was amend the changelog.

    Well, I am not that interested in that particular desktop and its problems, so I just revoked the submitrequest and am done with it. I fixed XFPM instead :-)

    And yes, I understand very well that such policies are a good thing to have, and necessary, and if I’m contributing to some subproject on a regular basis, then I of course make sure that I’m following these rules. On the other hand, it’s really easy to discourage the occasional one-time contributor from helping out.

    (Names changed to protect the guilty)


    Saturday
    12 April, 2014



    Heartbleed and openSUSE infrastructure

    As people started to ask, we checked all openSUSE servers and can confirm that none of them is affected by the heartbleed bug.

    For those users running openSUSE 12.2 and 13.1, we can just repeat what we always preach: please install the latest official updates provided by our glorious maintenance team.

    RSYNC and rsync.opensuse.org

    The server behind rsync.opensuse.org has been re-installed and is already providing packages via HTTP again.

    But we faced an issue with the automation that creates the content of the "hotstuff" rsync modules: normally a script analyzes the log files of download.opensuse.org and arranges the content of these special rsync modules to always provide the most requested files, so our users have a good chance of finding a very close mirror for their packages. But currently the script is not producing what we expect: it empties all those hotstuff modules. As the core developer behind this script comes back from vacation on Monday, we hope he can quickly fix the problem. For now we have disabled the "hotstuff" modules (meaning on rsync.opensuse.org we have disabled rsync completely for now) to avoid problems.

    If you want to sync packages to your local machine(s) via rsync: please pick a mirror from our page at mirrors.opensuse.org providing public rsync.
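
    For example, a repository can then be pulled from such a mirror like this (a sketch; the mirror host and module path are illustrative):

    rsync -av rsync://mirror.example.org/opensuse/distribution/13.1/repo/oss/ /srv/mirror/13.1/oss/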

    New hardware

    All the racks of the OBS reference server

    You may have noticed already that the openSUSE team installed a new version of openQA on the production server. In addition, this new version got new hardware, to run faster than ever.

    But not only openQA: the database cluster behind download.opensuse.org has also seen a hardware upgrade. The new servers allow us to run the database servers as virtual machines that can hold the whole database structure in RAM (you hopefully already benefit from the faster response times on download.opensuse.org). And the servers still have enough capacity left, so we can now also virtualize the web servers providing the download.opensuse.org interface. We are currently thinking about the detailed setup of the new download.opensuse.org system (maybe using ha-proxy here again? maybe running mirrorbrain in the "no local storage" mode? ...), so this migration might take some more time, but we want to provide the best possible solution to you.

    Admins at the openSUSE Conference

    This year, three of our main European openSUSE administrators are able to attend the openSUSE Conference in Dubrovnik:

    • Markus Rückert
    • Martin Caj
    • Robert Wawrig

    And they will not only participate: they are also giving talks and helping with the infrastructure and video recording at the venue. So whenever you see them: time to buy them a drink or two :-)



    I had one virtual host working on my Apache server, but when I tried to add another, I kept getting directed to the first one, even though I had set ServerName properly (or so I thought). The Apache Virtual Hosts documentation told me I could debug my vhost setup by running /usr/local/apache2/bin/httpd -S -- but this doesn't work on SUSE. So, I set off to look for the SUSE equivalent.
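
    A likely first thing to try (a sketch, assuming the standard openSUSE apache2 package, whose apache2ctl wrapper sets up the SUSE-specific environment before calling the binary):

    apache2ctl -S    # dump the parsed virtual host configuration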
    Read more »



    Some Ruby gems are packaged for openSUSE, but often the packages are out of date or simply missing. Apparently, most people use the "Ruby Way" of installing gems. But... what is that, and how does one apply it in openSUSE?
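
    At its core it is just RubyGems itself (a minimal sketch; the gem name is only an example, and --user-install keeps the gem in your home directory instead of system-managed paths):

    gem install --user-install nokogiri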

    Read more »


    Friday
    11 April, 2014



    In an effort to make OpenStack available to the non-technical user, and to make it appear much less of a heavy-lifting project for them, I have created the SUSE Cloud 3 Admin Appliance. I have worked with so many partners, vendors, and customers deploying OpenStack with SUSE Cloud that it occurred to me that SUSE had some great tools that would enable me to create something they could use to easily deploy, test, and discover OpenStack on their own without a whole lot of effort. SUSE has integrated Crowbar/Chef as part of the installation framework for our enterprise OpenStack distribution - SUSE Cloud - to improve the speed of deploying and managing OpenStack clouds. This has allowed us to be flexible in our deployments when working with partners and software vendors and to provide greater ease of use.

    The creation of the SUSE Cloud 3 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful to quickly test their applications in SUSE Cloud and validate their use. Beyond those cases it has become a great tool for deploying your production private cloud based on OpenStack.

    I have developed two different appliances and you can find them here:

    Standard v1.0.1: SUSE Cloud 3 Admin Standard
    Embedded v1.0.1: SUSE Cloud 3 Admin Embedded

    Standard has a process which will mirror all of the required repositories to the Admin Server.

    Embedded has all of the required repositories in the image ready for you to consume. It might take a little longer to download, but might be worth the wait if you need something portable that can quickly load a private cloud.

    This is version 1.0.x

    It’s important that you answer several questions before proceeding. You can find those questions in the SUSE Cloud 3 Deployment Guide.

    This questionnaire will help you as a companion to the Deployment Guide: SUSE Cloud Questionnaire

    This guide on using the appliance can walk you through step by step: SUSE Cloud Admin Appliance Guide

    - This version contains the GM version of SUSE Cloud 3
    - Disabled IPv6
    - Added motd (Message of the day) to reflect next steps
    - Updated logos and wallpaper to align with the product
    - Updated init and firstboot process and alignment with YaST firstboot

    Enjoy!



    Cameron Seader recently published two appliances to help you get started with SUSE Cloud. You can read more about them on his blog.

    Bits and Bytes: Quickly Setting-up an OpenStack Cloud with the SUS...: In an effort to make OpenStack available to the non-tech user and appear much less of a heavy lifting project for them, I have created the S...



    A blog entry detailing how I learned something about administering git repositories and, at the same time, migrated my passwords from kwalletmanager (which I found difficult to back up and keep synchronized between machines) to password-store, a command-line application that optionally works with a git repository.
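
    The basic flow looks something like this (a sketch, assuming an existing GPG key; the key ID, entry name and remote URL are placeholders):

    pass init "YOUR-GPG-KEY-ID"     # encrypt the store with this key
    pass git init                   # version the store with git
    pass insert example.com/login   # add a new password entry
    pass git remote add origin ssh://example.org/pass.git
    pass git push -u origin master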
    Read more »



    (This post was originally written in German, since the router is commonly used in German-speaking countries)

    I have been using FreeDNS instead of Dyn.com or similar services for quite a while, mainly because updating is so simple: just fetch a personalized URL via "wget" or "curl" and your IP is updated; no special software is needed. This URL contains a string that identifies your account, but it contains neither the username nor the password. So it is relatively safe to put this URL into, e.g., a cron job: should somebody read this string, the worst they can do is update the IP incorrectly, which is annoying but relatively harmless.
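
    The update itself is a one-liner, e.g. from a cron job (a sketch; the token in the URL is a placeholder for your personal "Direct URL"):

    # fetch the personalized URL to update the IP
    curl -s "http://freedns.afraid.org/dynamic/update.php?YOUR_TOKEN_HERE"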

    After Dyn.com recently cancelled its free accounts, FreeDNS became a current topic for some acquaintances as well; the biggest concern, however, was that the Dynamic DNS support built into the FRITZ!Box would not support FreeDNS. That is not quite right:

    In the web frontend, under "Dynamic DNS", simply select "Benutzerdefiniert" (custom), and as the "Update-URL" enter the URL that you get as the "Direct URL" in the account management at http://freedns.afraid.org/dynamic/.
    As the "Domainname", enter the registered domain name. The FRITZ!Box checks whether it can be resolved and reports an error if that does not work.
    The FRITZ!Box does not accept an empty "Username" or "Passwort" (password), so I simply entered "x" in those fields. Done.

    Tested here on a FRITZ!Box 7390 with FRITZ!OS 06.03.

    (Update 13 April 2014: added the "Domainname" step)


    Jakub Steiner: LGM Leipzig

    07:53 UTC


    Another great Libre Graphics Meeting is behind us and I’m grateful for being able to take part in it. Big thanks to everyone making it happen, particularly the GIMP folks for allowing an old affiliate to share the Wilberspace.

    There have been some great talks, quite a few relating to Blender this year, which I hope will become a trend :) Peter Sikking demonstrated how to present (yet again). Even though I’ve been fully aware of the direction of GEGL-based non-destructive editing GIMP is taking, the way Peter showed the difference between designing for a given context versus mimicking was fun to watch. Chris Lilley showed us the way forward for the gnome-emoji project with SVG support in OpenType. So much was going on besides the main talks that I managed to miss many, including Pitivi and Daniel’s on Entangle.

    Allan and I presented what we do within the GNOME project and how to get involved. Kind of ran out of time though, guess who’s to blame. The GIMP folks set up a camera, so hopefully there will be footage of the talks available. Really enjoyed my time, always like coming back with the need to create more things.

    Martin Owens deserves a shoutout for being an awesome Inkscape developer trying to address some rough spots we’ve bumped into over the years. Almost made me want to follow the Inkscape mailing list again :) Hopefully soon, we’ll be able to ditch the window opening verb madness we use for gnome-icon-theme-symbolic export.

    Watch on YouTube


    Thursday
    10 April, 2014


    Do you love writing elegant and understandable code in Ruby? Do you want to do something new that really matters to thousands of people? Do you want to create open source as your daily job and make openSUSE better? Join us and become a full-time YaST developer!

    The YaST team is looking for a new "Yastie". The job description can be found at the SUSE Careers page.


    This week I attended #pipelineconf, a new one-day continuous delivery conference in London.

    I did a talk with Alex on how we do continuous delivery at Unruly, which seemed well received. The slides are online here

    There were great discussions in the open space sessions, and ad hoc in the hallways. Here are a few points that came up in discussion that I thought were particularly interesting.

    De-segregate different categories of tests

    There are frameworks for writing automated acceptance tests, tools for automated security testing, frameworks and tools for performance testing, and tools for monitoring production systems.

    We don’t really want to test these things in isolation. If there is a feature requested by a customer that we’re implementing, it probably has some non-functional requirements that also apply to it. We really want to test those along with the feature acceptance test.

    I.e., for each acceptance test we could define speed and capacity requirements. We could also run HTTP traffic from each test through tools such as ZAP that try common attack vectors.

    Testing and monitoring shouldn’t be so distinct

    We often think separately about what things we need to monitor in our production environment and what things we want to test as part of our delivery pipeline before releasing to production.

    This often leads us to greatly under-monitor things in production. Therefore, we’re over-reliant on the checks in our pipeline to prevent broken things from reaching production. We also often fail to spot behaviour degradation in production completely.

    Monitoring tools for production systems often focus on servers/nodes first, and services second.

    We’d really like to just run our acceptance tests for both functional and non-functional requirements in production against our production systems in the same way that we do as part of our deployment.

    This isn’t even particularly hard. An automated test from your application test suite can probably succeed, fail, or generate an unknown failure. These are exactly the states that tools like Nagios expect. You can simply get Nagios to execute your tests.
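
    For instance, a test suite can be wrapped as a Nagios plugin with a few lines of shell (a sketch; the test-runner path is hypothetical, the exit codes are the ones Nagios defines):

    #!/bin/sh
    # Nagios reads the exit code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
    if /usr/local/bin/run-acceptance-tests --suite production; then
        echo "OK - acceptance tests passed"
        exit 0
    else
        echo "CRITICAL - acceptance tests failed"
        exit 2
    fi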

    Monitoring your application behaviour in production also gives you the opportunity to remove tests from your deployment pipeline if it’s acceptable to the business for a feature to be broken/degraded in production for a certain amount of time. This can be a useful trade-off for tests that are inherently slow and not critically important.

    Non-functional requirements aren’t

    People often call requirements about resilience/robustness/security/performance "non-functional requirements" because they’re requirements that are not for features per se. However, they are still things our customers will want, and they are still things that our stakeholders can prioritise against features - as long as we have done a good enough job of explaining the cost and risk of doing or not doing the work.

    Technical people typically help with coming up with these requirements, but they should be prioritised along with our features.

    There’s no point building the fastest, most secure system if no-one ever uses it.


    Wednesday
    09 April, 2014



    I finally got around to updating VDR to version 2.0.x.
    I’ve been using this version for some time and it is working fine for me. However, I’m quite sure that there are some kinks to be ironed out.
    I’m not sure if updating from an old openSUSE VDR installation is a good idea or if it would be better to start from scratch. Personally, I’d do the latter and only keep my channels.conf.

    The recommended way of starting VDR from systemd is now the runvdr-extreme-systemd package; the old runvdr init script is still available from the vdr-runvdr package, but is completely untested.

    Configuration now happens in /etc/runvdr.conf, the old /etc/sysconfig/vdr is no longer read at all.

    Normally, only the used plugins need to be added to runvdr.conf like

    AddPlugin streamdev-server
    AddPlugin epgsearch --logfile=/var/log/epgsearch/log --verbose=3
    AddPlugin xineliboutput --local=none --remote=37890

    This should be the equivalent of old sysconfig values

    VDR_PLUGINS="streamdev-server epgsearch xineliboutput"
    VDR_PLUGIN_ARGS_streamdev_server=""
    VDR_PLUGIN_ARGS_epgsearch="--logfile=/var/log/epgsearch/log --verbose=3"
    VDR_PLUGIN_ARGS_xineliboutput="--local=none --remote=37890"

    The settings in runvdr.conf are commented, so the config file should be easy to understand.

    If you are using vdradmin-am and are importing the old vdradmin.conf (I’d actually advise to start from scratch there, too) then you need to change the SVDR port setting to the new default of 6419 (or change the SVDRPORT variable for VDR to the old value).

    The “supported” plugins are maintained in the “vdr” repository of the openSUSE Buildservice. I’m collecting “unsupported” additional plugins in “vdr:plugins”. The definition of “supported” right now is “the stuff that I use”, simply because I cannot really test stuff that I don’t use on a daily basis. Of course, if someone wants to help maintain these things, I’m more than willing to move them into the main “vdr” repository.
    Stuff that is in the supported repository will most likely end up in Factory and thus in openSUSE 13.2.

    Bug reports via bugzilla or the opensuse-factory mailing list, please ;-)


    Tuesday
    08 April, 2014



    Do you happen to find yourself in the situation that you have a program crash every now and then, but whenever this happens to you, you did not start the app in gdb?

    Then I know that feeling; this guide will help you configure your test system to create a coredump whenever an app crashes, so that, if needed, you can use those at any later time to still create a backtrace. There are obviously some limitations, like stepping through the program and anything else funny you could do while debugging a running app. But it still gives you a good entry point.

    NOTE: I do not advise to set this up on shared machines or critical machines. A coredump can contain sensitive data (memory extracts), which you would not want to be shared around.

    With that cleared, it’s as simple as a few configuration steps:

    Create a directory where you want to keep your coredumps. I use /cores:
    mkdir -m 777 /cores
    This creates a world-readable and -writable directory.

    Configure your system to know WHERE to store the coredumps and how to name them. Create a file /etc/sysctl.d/99-coredump.conf with the following content:

    # I want to have core dumps in a world writable directory
    kernel.core_pattern = /cores/%e-%t-%u.core

    The pattern results in a filename /cores/NameOfBinary-TimeStamp-UserID.core; this helps you identify the more recent crashes.

    Modify the file /etc/security/limits.conf and add the line
    * soft core unlimited
    instructing the system to actually write coredumps for you.

    Reboot your system and go ahead: make your apps crash. You will see files appear in /cores if everything is set up right.
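
    A quick way to verify the setup after the reboot (the expected values follow from the steps above):

    ulimit -c                     # should print: unlimited
    sysctl kernel.core_pattern    # should print: kernel.core_pattern = /cores/%e-%t-%u.core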

    You can at any time later start gdb with two parameters, the first one pointing to the binary to debug, the second to the corefile. Then normal gdb usage applies (if you are missing debuginfo packages, you can even still install them at this point: the coredump remains valid). The usage of gdb goes beyond the scope of this article.
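
    For example (the file names are illustrative, following the pattern configured above):

    gdb /usr/bin/someapp /cores/someapp-1397000000-1000.core
    (gdb) bt    # print the backtrace of the crash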


    Monday
    07 April, 2014


    Chenthill P: C to C++ tour

    10:37 UTC


    I was asked at the beginning of this year to give a crash course on C++ for the developers with a C background in our company. A while back, sankar gave an introduction to the Go language using http://tour.golang.org/. Fascinated by the interface of gotour, I wanted to give the C to C++ tour using a similar interface, but needed it quickly.

    I discussed it with sankar and he came up with https://github.com/psankar/kuvalai/. He is a master at pulling people into learning something new ;-) So I quickly learnt a bit of Go and contributed to kuvalai. It was taking a while to get it done, so we discussed and decided to hack up the go-tour instead. Made it work with C++!!


    Readme.txt explains how to apply the go tour patches and get it running.

    All the programs and the articles are now available at https://github.com/chenthillrulz/cpp-tour :-) I wanted to put this up on a webserver so that it can benefit others, especially beginners to C++ and students. But since I don’t have any webspace at the moment, that’s going to take time ;-)

    It was really challenging to construct simple, connected, practical examples for demonstrating the features. I wanted this tour to flow simply, like a movie. I did not know that I would enjoy doing this stuff so much :-) I got some happy, encouraging feedback from my peers after the training sessions. Perhaps I should thank my manager, Pradeep, for persuading me to do this stuff. And my team, some of whom are still pushing me for the final session!!

    I have conducted four sessions, and the last one will cover advanced concepts such as traits, functors, template specialization, C++11 features etc. The last session is taking time as I don’t have practical experience using traits, but I still want to get some practical examples :-) Working on it!!

    The descriptions in the doc may require some polishing. It has about 42 sections at this point. And as always, patches are welcome!!!

    Writing a blog post after quite some time is refreshing :-)




    New release, time to run gource on master -> Evolution of the Open Build Service (gource) on Vimeo.


    Sunday
    06 April, 2014



    Corel released CorelDraw X7 on 27 March 2014. We had some time to look at the changes in the file format and we adapted libcdr to be able to open it. The changes landed this week in the LibreOffice code, in master and the libreoffice-4-2 branch. That means that support will be available in the next 4.2.x release.

    It is good to note that while introspecting the files we discovered a flaw in CorelDraw X7 that makes files using Pantone palette number 30 pretty unusable for CorelDraw users. We worked around it and the files open just fine in LibreOffice. Take this as a first contribution by the new Document Liberation Project.



    Hi there!

    As you are reading this, I guess you have an interest in openSUSE and GNOME, and you are already fully aware of GNOME 3.12 being released. Also, you do not want to wait for the next version of openSUSE, 13.2, to be made available (that will anyway only be in November, presumably with GNOME 3.14).

    So, how can YOU get GNOME 3.12 NOW on your openSUSE 13.1 system?
    The answer is simple: there IS already a repository available, but it is ABSOLUTELY not meant for anyone who could not recover from a bad repository state.

    The repository in question is called ‘home:dimstar:broken’:

    zypper ar obs://home:dimstar:broken/openSUSE_13.1 GNOME-3.12
    zypper dup --from GNOME-3.12

    BUT, as the name indicates, it CAN be broken. A group of people did test this repository already and we concluded it ‘working on our machines’; but then, we only have a handful of machines.

    So, if you ARE interested in helping and testing, and you are NOT afraid of having to revert from a potentially broken state, please feel free to add the repository, ‘DUP’ to it and enjoy.

    Any breakage noticed? Please tell us in #opensuse-gnome on irc.freenode.net; best to hang around there with us and engage; we might have a bunch of questions which you would be the only one able to answer.

    So? What are you waiting for? GO GET IT!

    If you’re not brave enough to add a repository that has ‘broken’ in the name (the name is intentional, for exactly that reason), then please stay tuned; depending on the feedback we receive, the repository will be fixed up and, once considered stable, moved into the GNOME:STABLE namespace, at which time you can safely get it as well.

    Cheers! Happy testing.


    Friday
    04 April, 2014


    If you’re running openSUSE 13.1 and you use Webex on a regular basis for home/work/other, you have probably noticed that it does not execute properly and you can’t get some of its features to work. Well, look no further. Thanks to my colleague dvosburg, you can run the command below on your openSUSE 13.1 and it will install the necessary packages and dependencies required for a good Webex experience.

    zypper in libpango-1_0-0-32bit \
    libpangomm-1_4-1-32bit \
    libpangox-1_0-0-32bit \
    libgtk-2_0-0-32bit \
    libgtk-3-0-32bit \
    libglib-2_0-0-32bit \
    libXau6-32bit \
    libXmu6-32bit \
    libxcb1-32bit \
    libXext6-32bit 


    Enjoy!


    If you’re running VMware Workstation 9 or above and you use both existing and new VMs, you can possibly get an "Unable to change virtual machine power state: Cannot find a valid peer process to connect to" error.

    This does not happen to everyone, but the problem seems to come from the Nvidia drivers, at least as far as I can tell thus far. I have not been able to debug it further because this problem is not happening to me. If you have this problem and you’re running the Nvidia 331.20 drivers, then you will want to do the following.


    1) Download the Nvidia 325.15 driver from here http://www.nvidia.com/object/linux-display-amd64-325.15-driver.html


    Create a custom patched Nvidia driver.


    2) Download the patch below for the latest kernel in openSUSE 13.1 which is 3.11+

    http://cvs.rpmfusion.org/viewvc/*checkout*/rpms/nvidia-kmod/devel/kernel_v3.11.patch?revision=1.1&root=nonfree

    save as kernel_v3.11.patch


    3) Execute the following to create the custom patched Nvidia installer.

    # sh NVIDIA-Linux-x86_64-325.15.run --apply-patch kernel_v3.11.patch
    4) You will get an output file, NVIDIA-Linux-x86_64-325.15-custom.run
    5) You can now install this custom Nvidia driver which should fix your VMware Workstation problem.

    Enjoy!


    Download this patch

    Then follow these steps from a terminal as root user

    # cd /usr/lib/vmware/modules/source/
    # cp vmnet.tar vmnet.tar.original
    # tar xvf vmnet.tar vmnet-only/filter.c
    # patch vmnet-only/filter.c < /location_of_filter.c.diff/kernel-3.14-vmware-filter.c.diff
    # tar -uvf vmnet.tar vmnet-only/filter.c
    # rm -rf vmnet-only/
    # vmware-modconfig --console --install-all 
     Enjoy!
