Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Monday
30 May, 2016


Calvin Gaisford: Pigs Fly

23:07 UTC

face

I was chatting today with Boyd and he discovered he hasn't blogged anything since January 1, 2009.  The shocking part came when we realized that was four years ago.  While it hasn't been that long for me, I've been pretty inconsistent and most of my recent entries have been about biking.  Since I can't bike right now (27˚F) I thought I'd write about some technology.


A little more than a month ago I began work on Todo Pro for Android.  I have a very limited working version that I can use and that syncs with the Todo Pro service.  BTW, the answer is no, you can't have it yet; only I know how to tip-toe through it so it doesn't have problems.

Just before Christmas I decided I needed to really experience life with an Android phone, so I went shopping.  I was limited to the Verizon models available and spent half a day researching and going to the Verizon store to play with the devices.  The first device I was shown was the Samsung Galaxy Note 2.  I stand by my first observation: this device is not a phone.  It is a very small tablet that can make phone calls and is only practical for women who can place it in their purse.  As if calling something this large a phone isn't enough entertainment, it comes with a stylus!

Styli
For the benefit of the younger and less experienced reader, a stylus is a stick that looks like a pencil that you use for input on a device.  It's sort of how you use your finger on your iPhone, but think Soviet Military from the 80's.  Steve Jobs was right.  My top desk drawer tends to collect items at the back of it that were useful in their day but have now been replaced.  Under a collection of foreign money not worth the time or effort to exchange and an old wallet I never used is my old worn out collection of styli from devices years ago.

Back of my desk drawer
Lost Styli Revealed


The thought of going back to a stylus is frightening, but the sales guy at the Verizon store assured me the Galaxy Note 2 was the most advanced phone made.

Samsung Galaxy S III
I spent a lot of time looking at the various Android phones available and even called up Android users I knew and asked their advice.  The Galaxy S III seemed like a logical choice.  It's Verizon's most popular phone from what I could tell and it had the best Calendar App (an exclusive app to Samsung) of all of the devices.  I use my calendar a lot!  It also had great specs and I was fairly confident it would get the next few Android updates, which would prolong its usefulness to Appigo.  I bought the phone and had them connect it to my plan replacing


face

I use my phone as my alarm clock and I have to wonder what the people were thinking when they designed the screens to turn off the alarms (shown above on an iPhone 5 and a Droid RAZR M).  For the benefit of the younger and less experienced readers (I'm starting to find some sick pleasure in saying that) I'd like to explain the problem here.  As you become older, or as Brian my youthful employee likes to say "become more brittle", you'll find that you need corrective lenses for almost all you do.  I'm now to the point that when somebody hands me something to read I find myself pulling a "Dr. Gaisford" move where I hold the item at arm's length from my face so I can actually read it.

There is a reason why most alarm clocks have a single physical button on the top of them to turn them off.  Simply slam your hand down on the top of it and you'll most likely hit that button and turn off the alarm.  At 5:30am when I am suddenly thrust out of my deep sleep into a very dark room lit only by the seemingly ultra-bright screen on my phone, it is very difficult to make sense of these screens.  The iPhone screen is not as bad as the Android screen because the slide at the bottom of the screen is fairly easy to do.  Of course when the room is pitch black and I'm groggy from sleep and I don't have glasses on, the screen looks more like this (except it hurts your eyes more and there is a loud sound that won't stop):



Most of us could probably figure out how to turn the alarm off even when the screen is that bad.  How about the Android screen:


Now imagine having my vision, being half asleep and squinting as hard as you can to make out the text on that screen (which I never could due to the extreme brightness), trying to turn off the alarm.  A few days ago I hit what I thought was the right button because it went silent.  When I turned off the water from the shower I could hear it going off again.  I obviously guessed wrong and hit the snooze button by mistake.  That or I guessed correctly, but since the two buttons are so small and right next to each other I may have actually hit the wrong button.

I would suggest that a wake up alarm screen needs to have two simple large buttons, one to snooze and the other to indicate I'm up.  Here is a very quick and dirty mockup adjusted to my early morning perception:


This is by no means perfect or beautiful, but for the purposes of turning off my alarms at 5:30am, I don't need perfection, I need function!  Note that the two huge buttons are on opposite ends
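
To make the idea concrete, here is a minimal sketch (an illustration only, not the actual mockup, done in Python/tkinter) of a dismissal screen with just two oversized buttons on opposite ends:

# Illustrative sketch only, not the mockup above: an alarm screen with
# two huge buttons on opposite ends, so a groggy, glasses-less person
# can hit the right one.
import tkinter as tk

root = tk.Tk()
root.title("Alarm")
root.geometry("480x800")

big = ("Helvetica", 48, "bold")

def snooze():
    print("snoozed")      # real code would reschedule the alarm
    root.destroy()

def dismiss():
    print("I'm up")       # real code would silence the alarm for good
    root.destroy()

tk.Button(root, text="SNOOZE", font=big, bg="gray20", fg="white",
          command=snooze).pack(side="top", expand=True, fill="both")
tk.Button(root, text="I'M UP", font=big, bg="dark green", fg="white",
          command=dismiss).pack(side="bottom", expand=True, fill="both")

root.mainloop()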

face
 
I have posted a bunch of times about switching between phones.  Because I develop apps I have various devices and switch between them depending on what I'm working on.  I like to test out the software I'm currently working on.  Last night I switched back to an iPhone 5s from a Samsung Galaxy S4.  If you properly set up your "cloud" data it's fairly easy to switch between devices (how I do that probably deserves its own post).  When I took Toma for a walk this morning, I was really surprised at the difference in picture quality between the two.  Look at the full resolution version of that picture (at the leaves).  I have never been able to get a photo like that out of the Samsung!  Toma's face in the photo isn't as clear, but he was moving.

face

Here’s a quick status update about where we currently stand with respect to multiscreen support in Plasma Desktop.

While for many people, multiscreen support in Plasma works nicely, for some of our users, it doesn’t. There are problems with restoring previously set up configurations, and around the primary display mechanism. We’re really unhappy about that, and we’re working on fixing it for all of our users. These kinds of bugs are the stuff nightmares are made of, so there’s no silver bullet to fix all of it once and for all right away. Multiscreen support requires many different components to play in tune with each other, and they’re usually divided into separate processes communicating via different channels with each other. There’s X11 involved, XCB, Qt, libkscreen and of course the Plasma shell. I can easily count at least three different protocols in this game, Wayland being a fourth (but likely not used at the same time as X11). The complexity involved here is not gratuitous; the components involved are actually doing their jobs quite well and have their specific purposes. Let me give an overview.

Multiscreen components

Plasma Shell renders the desktop, places panels, etc. When a new screen is connected, it checks whether it has an existing configuration (wallpaper, widgets, panels etc.) and extends the desktop. Plasma shell gets its information from QScreen now (more on that later on!).

KWin is the compositor and window manager. KWin/X11 interacts with X11 and is responsible for window management, movement, etc. Under Wayland, it will also take over the graphical and display server work that X11 currently does, though mostly through Wayland and *GL APIs.

KScreen kded is a little daemon (actually a plugin) that keeps track of connected monitors and applies existing configs when they change.

KScreen is a module in systemsettings that allows you to set up the display hardware, positioning, resolution, etc.

Libkscreen is the library that backs the KScreen configuration. It offers an API abstraction over XRandR and Wayland. libkscreen sits pretty much at the heart of proper multiscreen support when it comes to configuring manually and loading the configuration.

Primary Display

The primary display mechanism is a bit of API (rooted in X11) to mark a display as primary. This is used to place the Panel in Plasma, and for example to show the login manager window on the correct monitor.

Libkscreen and Qt’s native QScreen are two different mechanisms to reflect screen information. QScreen is mainly used for querying info (and is of course used throughout QtGui to place windows, get information about resolution and DPI, etc.). Libkscreen has all this information as well, but also some more, such as write support. Libkscreen’s backends get this information directly from Xorg, not going through Qt’s QScreen API. For plasmashell, we ended up needing both, since it was not possible to find the primary display using Qt’s API. This causes quite some problems since
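
To make the QScreen side of that more concrete, here is a minimal sketch (using PyQt5 for brevity; this is not plasmashell code) of the read-only querying Qt offers, while write support and the extra information mentioned above come from libkscreen:

# Minimal sketch (PyQt5, not plasmashell code): the read-only screen
# information exposed through Qt's QScreen API.
import sys
from PyQt5.QtGui import QGuiApplication

app = QGuiApplication(sys.argv)

for screen in app.screens():
    geom = screen.geometry()
    print(screen.name(),
          "%dx%d at (%d,%d)" % (geom.width(), geom.height(), geom.x(), geom.y()),
          "%.0f dpi" % screen.logicalDotsPerInch())

# Qt also reports a primary screen, but as noted above plasmashell
# could not rely on this alone to find the primary display.
print("primary:", app.primaryScreen().name())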


face

Embedded below is the blog of Google Summer of Code student Martin Garcia Monterde. Martin detailed his first week coding with openSUSE and the Google Summer of Code.


Sunday
29 May, 2016


Luca Beltrame: I have a problem...

07:48 UTC (member)

face

Every day, a sizable number of people post problems on the KDE Community Forums and the ever-helpful staff does their best to solve whatever issues they’re facing. But what exactly does one do when this happens? This post provides more insight into the process.

NOTE: The following applies to my workflow for the Kontact & PIM subforum.

Step 1: Someone posts a problem

The questions posted vary, ranging from simple tasks (“how do I do XXX”) to very specific workflows: they cover a large spectrum.
The first thing I do when reading a post is to go through a “mental checklist”:

  1. Is this known already?
  2. Is there enough information?
  3. What distro is this user on?

Answering point 1 means I have to keep up with the development of KDE software, or if I don’t know, check the mailing lists and blog posts to see if other people have raised the issue (checking Bugzilla is a last resort, due to the very large number of bugs posted there). Running the latest KDE software also helps.

If point 2 isn’t satisfied, I ask a few more questions following the General Troubleshooting guidelines. These include conditions for reproduction of the issue, if it still occurs with a new user account, and so on.

Point 3 is related to point 2: not all distros are equal, so knowing which distro the user is on may reveal distribution-specific issues that need to be addressed directly downstream.

Step 2: Going deeper

If the issue isn’t solved even like this, “we need to go deeper”. Usually, time permitting, I try to reproduce the issue myself if it is within my reach (for example, if it doesn’t involve company secrets on an internal server ;).

If I can reproduce it, I tell the user to file a bug, and mention workarounds if I have found any. If I can’t, I ask for a few more details. Usually this leads to the issue being solved, or to a bug report being filed.

Step 3: Communicating

Sometimes the issue is unclear, or it is behavior where the line between feature and bug is very blurred. In this case, I need to get information straight from the horse’s mouth. I hop on IRC, and I address the developers directly, usually pointing at the forum thread, and asking for details.

Sometimes they follow up directly, sometimes they give me useful information, and sometimes they tell me it’s a feature or a bug. In either case, I report the information back to the thread starter. In rare cases, the issue is simple enough that it gets fixed shortly afterwards.

Step 4: Following up

Unfortunately not all bugs can be addressed straight away, so sometimes issues linger for a long period of time. However, sometimes a commit or two may fix it, with or without a bug being filed. If I notice this (I do read kde-commits from time to time ;) I follow up on the thread writing about it.


Friday
27 May, 2016


Michal Čihař: wlc 0.3

20:30 UTC

face

wlc 0.3, a command-line utility for Weblate, has just been released. This is probably the first release worth using, so it's probably also worth a bigger announcement.

It is built on the API introduced in Weblate 2.6, which is still in development. Several commands from wlc will not work properly if executed against Weblate 2.6; the first fully supported version will be 2.7 (current git is okay as well, and it is now running on both the demo and hosting servers).

How to use it? First you will probably want to store the credentials, so that your requests are authenticated (you can do unauthenticated requests as well, but obviously only read-only ones and only on public objects), so let's create ~/.config/weblate:

[weblate]
url = https://hosted.weblate.org/api/

[keys]
https://hosted.weblate.org/api/ = APIKEY

Now you can do basic commands:

$ wlc show weblate/master/cs
...
last_author: Michal Čihař
last_change: 2016-05-13T15:59:25
revision: 62f038bb0bfe360494fb8dee30fd9d34133a8663
share_url: https://hosted.weblate.org/engage/weblate/cs/
total: 1361
total_words: 6144
translate_url: https://hosted.weblate.org/translate/weblate/master/cs/
translated: 1361
translated_percent: 100.0
translated_words: 6144
url: https://hosted.weblate.org/api/translations/weblate/master/cs/
web_url: https://hosted.weblate.org/projects/weblate/master/cs/

You can find more examples in wlc documentation.
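
If you would rather script against the REST API directly instead of calling the wlc binary, the same data can be fetched with a few lines of Python (a sketch assuming the third-party requests library and Weblate's token-based Authorization header; use the same APIKEY as in ~/.config/weblate):

# Sketch: fetch the translation statistics shown above straight from
# the Weblate REST API, using the requests library.
import requests

API = "https://hosted.weblate.org/api/"
headers = {"Authorization": "Token APIKEY"}

resp = requests.get(API + "translations/weblate/master/cs/", headers=headers)
resp.raise_for_status()
data = resp.json()

print(data["last_author"], data["last_change"])
print("%s/%s strings (%s%%) translated"
      % (data["translated"], data["total"], data["translated_percent"]))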

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments


face

Dear Tumbleweed users and hackers,

During the last couple days, the community was strong in producing and submitting fixes to make it possible for GCC 6 to become the default compiler for our distribution. But it’s not entirely there yet.

Things that DID happen though in the last week, things that made it into the snapshots (0520, 0524 & 0525), are:

  • Late additions of GNOME 3.20.2
  • FreeRDP was updated to a 2.0.0 pre-release snapshot (the 1.0.x tree is no longer maintained and a bunch of other packages need new features – it’s just that upstream seems reluctant to ever call it a ‘release’ again)
  • Mozilla Thunderbird 45.1

In snapshot 0524, the package libpci3 (updated from 3.4 to 3.5) accidentally broke ABI, which in turn stopped chromium from starting. This was observed by openQA – but chromium is not tagged as a ‘critical’ ship stopper for a new snapshot (there are always multiple browsers available after all). For snapshot 0525, chromium was rebuilt to ensure it links against the new ABI and libpci is also receiving an update to 3.5.1 in snapshot 0526, which re-introduces the accidentally broken symbol – thus fixing any other package that might have linked to it.

Things brewing in staging did not really change during this week:

  • GCC 6 – the up-to-date status is in the wiki
  • Perl 5.24.0 – Looking reasonable, one kiwi fix is missing (boo#981080)
  • KDE Applications 16.04.1 – the legal queue is just too long 🙁
  • Linux Kernel 4.6 – a fix in ‘installation images’ is pending; should be ready next week

Have a great weekend ahead


Martin Vidner: Ruby Call Graph

12:59 UTC (member)

face

Call-graph makes a call graph among methods of a single Ruby file.
I made it to help me orient myself in unfamiliar legacy code and to help identify cohesive parts that could be split out.
Yes, it is quick and dirty.

Example

One file in YaST has around 2700 lines and 73 methods. The call graph below was made with
$ ./call-graph ../yast/packager/src/modules/Packages.rb
$ dot -Tpng -oPackages.png ../yast/packager/src/modules/Packages.dot
If the resulting size is too big, use ImageMagick:
$ convert Packages.png -resize 1200 Packages-small.png
Packages.png, an example output

Requirements

License

MIT

Thursday
26 May, 2016


face

I have decided to change the publication system for my blog from Blogger to static pages generated by Jekyll.

You can find more details at http://blog.ladslezak.cz/2016/05/26/welcome-to-jekyll/.


I also moved the blog to a new address, from now on check this page:


http://blog.ladslezak.cz


face

Something a bit different for my blog, but I couldn't help but share this very cool article by Holly Root-Gutteridge I saw on Twitter about how wolves (and other animals) have different tunes/patterns when communicating depending on the region they are from. It is pretty fascinating reading about how this research showed that the animals have evolved to be not just physically adapted to their environment but also audibly different from one another. The full article is somewhat long but definitely worth a read.

The question of when and how language first emerged is the topic of tremendous controversy – it has even been called ‘the hardest question in science’. My work is on what information can be extracted from vocalisations. It is a first step in understanding where the physical body dictates the shape and form of the call, and where the caller has control. For example, a piano player is limited to combinations of a piano’s 88 keys, but a song played on a Steinway will have different sound qualities to the same song on a bar’s upright. In addition, different tunes can also be played. Separating the characteristics of the instrument from the choices of the player is essential before we can understand what meaning those choices might convey.

More questions follow. If howls from different subspecies are different, do the howls convey the same message? Is there a shared culture of howl-meanings, where an aggressive howl from a European wolf means the same thing as an aggressive howl of a Himalayan? And can a coyote differentiate between a red wolf howling with aggressive intent and one advertising the desire to mate? Even without grammar or syntax, howls can convey intent, and if the shape of the howl changes enough while the intent remains constant, the foundations of distinctive culture can begin to appear.

Full story here.

 



Tuesday
24 May, 2016


face

As some of you already know, the xdg-app project is dead. The Swedish conspiracy members tell me it’s a good thing and that you should turn your attention to project Flatpak.

Flatpak aims to solve the painful problem of the Linux distribution — the fact that the OS is intertwined with the applications. It is a pain to decouple the two to be able to

  • Keep a particular version of an app around, regardless of OS updates. Or vice versa, be able to run an up-to-date application on an older OS.
  • Allow application authors to distribute binaries they built themselves. Binaries they can support and accept useful bug reports for. Binaries they can keep updated.

But enough of the useful info, you can read all about the project on the new website. Instead, here come the irrelevant tidbits that I find interesting to share myself. The new website has been built with Middleman, because that’s what I’ve been familiar with and what has worked for me in other projects.

It’s nice to have a static site that is maintainable and easy to update over time. Using something like Middleman allows you to do things like embedding an SVG inside a simple markdown page and animating it with CSS.

=partial "graph.svg"
:css
  @keyframes spin {
    0% { transform: rotateZ(0deg); }
    100% { transform: rotateZ(359deg); }
  }
  #cog {
    animation: spin 6s infinite normal linear forwards;
  }

See it in action.

The resulting page has the SVG embedded to allow text copy & pasting and page linking, while keeping the SVG as a separate asset allows easy edits in Inkscape.

What I found really refreshing is seeing so much outside involvement on the website despite never publicising it. Even while developing the site as my personal project I would get kind pull requests and bug reports on github. Thanks to all the kind souls out there. While not forgetting about future proofing our infrastructure, we should probably also keep the barrier to entry in mind and make use of well established infrastructures like github.

Also, there is no Swedish conspiracy. Oh and Flatpak packages are almost ready to go for Fedora.


face

Embedded below is the blog of Google Summer of Code student Shalom Ray. Ray provides an overview of his project Improving the one-click installer.


Monday
23 May, 2016


face

I had my «openSUSE bug hunting» presentation scheduled at 09h30 this morning. I’m usually very lazy on Sundays but the enthusiasm of the Developers Conference is just an amazing feeling. Though we live on a small island, we get to meet some people maybe just once a year during this fun event. I picked up Shelly on the way and we reached Voilà Hotel at 09h05. Right at the hotel entrance Yash was waiting; he might have seen us coming. We went upstairs chatting and met JoKi. My presentation was scheduled at the Accelerator and I thought I’d just go and test the gear. Aargh! The TV had only an HDMI cable and my ThinkPad had VGA & a Mini DisplayPort. That said, I needed an adapter. Joffrey, who came around greeting everyone, had an HDMI to VGA cable, which he lent me. At the same time JoKi also came with a Mini DisplayPort to HDMI converter. Great! Then I had an adapter plus a backup.

I mirrored my laptop display and checked if everything’s fine. All good and it was 09h30.

Developers Conference 2016, openSUSE bug hunting

Thank you for the photo, Shelly :)

However, folks were still coming, so we thought let’s just wait till 09h45, giving others a chance to arrive. Indeed I started at 09h45 sharp with a 3/4 full room, and just a few minutes later it was «house full». That was great and a true encouragement, even though it was a Sunday morning.







Thank you for the (re-)tweets folks. :D

I chose the title of my prez, «openSUSE bug hunting», from a blog post I wrote in 2013 while running «release candidates» of openSUSE. Starting the presentation I spoke about how some folks might organize special events working to hunt and find bugs, while some bugs we just encounter when doing regular tasks. What do we do when we find one of those bugs? Do we just ignore it and think, «it’s just an error, nothing more», and continue working? Do we search on the internet whether others encountered similar errors and if there is a fix? Few people ever consider filing a bug report through the right channel, unless it’s just a «button» away, like some applications (e.g. web browsers) offer.

Bug reporting most of the time requires some information gathering from the system; that
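
As a rough illustration (a minimal sketch, not material from the talk) of the kind of system information such a report usually starts with:

# Minimal sketch (not from the talk): gather the basic system facts a
# bug report usually needs - distribution, version, kernel and architecture.
import platform

def system_summary():
    info = {"kernel": platform.release(), "arch": platform.machine()}
    # /etc/os-release identifies the distribution and its version on
    # openSUSE and most other modern distributions.
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith(("NAME=", "VERSION=")):
                key, _, value = line.strip().partition("=")
                info[key.lower()] = value.strip('"')
    return info

print(system_summary())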


face

Guest post by Petr Marek (source)

Everybody driving a car needs navigation to get to the destination fast and avoid traffic jams. One of the biggest problems is how to enter the destination quickly and how to find out where the congestion is and what the traffic situation looks like. YodaQA Traffic is a project attempting to answer traffic-related questions quickly and efficiently. Drivers may ask questions in natural language like: “What is the traffic situation in the Evropská street?” or “What is the fastest route from Opletalova street to Kafkova street?” You can try out the prototype (demo available only for a limited time) – try asking, for example, “traffic situation in the Wilsonova street”.

YodaQA Traffic still has some limitations. Currently we only have a browser version, not suitable for smart phones. It answers traffic questions for Prague’s streets only.

But as usual, this whole technology demo is open source – you can find it in the branch f/traffic-flow of our Hub project.

How does it work and where we get the data from?

All YodaQA questions are first analyzed to recognize and select traffic questions. We do it in two steps. The first step is to recognize the question topic. We use six topics, such as traffic situation, traffic incident or fastest route. The topic is determined by comparing the semantic similarity of the user’s question with a set of reference questions. We estimate the similarity with our Dataset-STS Scoring API. Each reference question is labeled with a “topic”. The Sentence Pair Similarity algorithm selects the reference question “topic” with the highest similarity to the question.

Next we need to recognize the location, i.e. to recognize the street name. This is handled by another tool called the Label-lookup, which we normally use for entity linking in YodaQA. It compares the question’s words with a list of all street names in Prague. We exported the list of street names in Prague from OpenStreetMap. We do not do an exact match; we try to select the closest street name from the list.

The last step is to decide whether the question really is a traffic question, because the Dataset-STS API and Label-lookup can find a topic and a street name even in a pure movie question like “When was the Nightmare on Elm Street released?”. Fortunately, the Dataset-STS and Label-lookup return not only the topic or street name but also a score. We created a dataset of over 70 traffic questions and over 300 movie questions and found the minimal score thresholds with which the recognition makes the lowest classification error on this dataset.
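
A rough sketch of that final gate (the function names and threshold values here are purely illustrative placeholders, standing in for the Dataset-STS Scoring API and Label-lookup calls):

# Hypothetical sketch of the traffic-question gate described above.
# score_topic() and score_street() stand in for the Dataset-STS Scoring
# API and Label-lookup services; the thresholds are placeholders, not
# the values tuned on the real 70/300 question dataset.
TOPIC_THRESHOLD = 0.6
STREET_THRESHOLD = 0.8

def classify(question, score_topic, score_street):
    topic, topic_score = score_topic(question)      # e.g. ("traffic situation", 0.87)
    street, street_score = score_street(question)   # e.g. ("Evropská", 0.93)

    # Both scores must clear their thresholds; otherwise the question is
    # handed back to the normal (non-traffic) YodaQA pipeline.
    if topic_score >= TOPIC_THRESHOLD and street_score >= STREET_THRESHOLD:
        return ("traffic", topic, street)
    return ("general", None, None)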

Once we know the type of question and the location, we start a small script that accesses the traffic situation data from HERE Maps. The only complication is that the API doesn’t return the traffic situation for a particular street, but only for a bounding box. To overcome this problem we have to find a bounding box for the desired location, using an algorithm we developed for this purpose. Then we call the


face

Last week, members of The GNOME Project announced a new conference in the northwestern United States to enhance the GNU/Linux application ecosystem.

The Libre Application Summit, which will take place in Portland, Oregon, from Sept. 19 – 23, aims to empower application developers both big and small as well as enhance app developers’ collaboration with major Linux distributions.

The summit, which is designed to improve the developer and user experience for the GNU/Linux desktop, has a lot of potential to expand and openSUSE is excited to be the summit’s first sponsor.

Since last year, openSUSE has been working together with GNOME members to offer an event in Portland designed for application developers who want to explore opportunities for expanding apps to distributions, to build personal relationships with users and to explore opportunities to monetize their apps.

Entrepreneurs and open-source enthusiasts are encouraged to attend if they are interested in building a product based on free and open source software.

The summit will focus on the following topics:

  • Ecosystem: business, legal, community, and social issues
  • Platforms: deep low-level topics around hardware, drivers, and tools
  • Distribution: collaborating with established distributions (like openSUSE), inter-distribution cooperation, QA and continuous integration.
  • Development: toolkits, X/Wayland, security, runtimes, SDK, development tools.

West Coast Geekos

openSUSE community members living on the West Coast are encouraged to submit a talk.

Learn more about this conference by visiting las.gnome.org or watch this video.

Any companies interested in sponsoring the event along with openSUSE, should visit the sponsorship page.


face
A few days ago, I published my last blogpost as ’ownCloud’ on our blog roll about the ownCloud community having grown by 80% in the last year. Talk about leaving on a high note!

Yes, I’ll be leaving ownCloud, Inc. - but not the community. As the numbers from my last post make clear, the ownCloud community is doing awesome. It is growing at an exponential rate and while that in itself poses challenges, the community is healthy and doing great.

I joined in 2014, when ownCloud, Inc. had about 36 employees. The community grew that year, according to our history page, from 1 million users to 2.2 million, while the average number of coders per month went from 62 to 76. For me, the coolest thing that year was the ownCloud Contributor Conference, which brought together 100 contributors for a week of hacking at the University of Berlin. A stressful, but awesome week. Though, my first meeting with most of my colleagues was some months earlier at the Stuttgart meetup, and my first release was ownCloud 7, not long before the event.

2015 was more of that - our history page has a great overview and I’m darn proud of having been a part of all those things. 2016 brought ownCloud 9, a major release, which was accompanied by an overhaul of owncloud.org, I hope you like our new website!

Not everything is finished, of course. We’re still smack in the middle of awesome work with Collabora and Spreed as well as the WDLabs PiDrive project – I just finished and published this page about it. All great stuff which has great momentum and will certainly move forward.

Myself, I’ll stay around in the community. I’ll talk about the awesome stuff that is coming next in early June, but until then, don’t hesitate to contact me if you’ve got any questions about ownCloud or anything else. You can still catch me on jos@opensuse.org ;-)

Saturday
21 May, 2016


face

It was Saturday morning and I found myself rushing to be at Flying Dodo just in time. Oh, to be precise «not in time» but like 15 mins later than I expected to be, that is 09h45. The night before I got busy preparing the box of openSUSE goodies, sorted the stickers, pamphlets, DVDs and cheat sheets. Little did I know that folks would like those so much. I would tweet as I got the pack ready.





Shelly and I were the first geeks to reach Flying Dodo. While I set up my laptop with the projector, she prepared the tables with the stickers and cheat sheets.

Developers Conference, Linux Installfest

The first few geeks came shortly afterwards. Ronny and Ajay from the Linux User Group of Mauritius came along with their gear. Oh, this little gang from the University of Mauritius hopped in and yes, we were under attack. We also received a visit from folks of the PHP Mauritius User Group.



The morning session was great. Ajay, Pritvi, Ronny and Avish helped people get their laptops Tux’ed either with Ubuntu or with openSUSE. Meanwhile I got to run an interactive session with the university folks with a command-line walk-through.


There was a question about email headers. I showed email headers from my Gmail account and also from Thunderbird. We talked a little bit about IETF RFC 2822 and together we looked at some of those colon-separated field values. Ajay gave us a simple yet clear explanation of SPF and DKIM. We did a ‘dig’ on a couple of domains to read the TXT records. Ajay explained hard fail and soft fail in the SPF records and how they affect the delivery of email.
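
For anyone who wants to repeat that exercise, here is a minimal sketch (assuming the third-party dnspython package; the domain is just a placeholder) that reads a domain's TXT records and picks out the SPF policy, the same information we inspected with ‘dig’:

# Minimal sketch: read a domain's TXT records and pick out the SPF policy,
# the same data we looked at with `dig ... TXT` during the session.
# Assumes the third-party dnspython package (dns.resolver).
import dns.resolver

def spf_record(domain):
    for rdata in dns.resolver.resolve(domain, "TXT"):
        text = rdata.to_text().strip('"')
        if text.startswith("v=spf1"):
            return text   # "-all" means hard fail, "~all" means soft fail
    return None

print(spf_record("example.com"))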


I tried answering other questions that popped up, covering various topics like SSH, file permissions, etc., and we had real fun during that interactive session.



face

Dear Tumbleweed users and hackers,

This week we could finally add the long-awaited Qt 5.6 to Tumbleweed. It was blocked for a long time as it exposed a bug in, as it turned out, icewm (the window manager used during installation). This bug, though, is so deeply nested in the architecture of icewm that, in the end, it was decided to work around it with a ‘fix’ in YaST directly.

During this week, there were 4 snapshots published (0514, 0516, 0517 & 0519).

The most notable changes were:

  • Qt 5.6 – as mentioned in the intro
  • KDE Framework 5.22.0
  • Plasma 5.6.4
  • GNOME 3.20.2 – packages were spread over a longer period to get you fixes as early as possible

The season of ‘larger’ changes has started – and we now have in Staging:

  • GCC6 as default compiler for our distribution
  • Perl 5.24.0 – It would not be perl if nothing would break
  • KDE Applications 16.04.1 – Shaping up, mostly legal reviews pending
  • Linux kernel 4.6

Have a great time and use every moment to have a lot of fun


Friday
20 May, 2016


face

I have some 2” wooden blinds in my house that I’ve been wanting to motorize. Why? I’m lazy and I thought it would be cool to have.

The best commercial solution for retrofitting existing blinds seems to be Somfy. They have wireless battery-powered systems and fancy-looking remotes. For new motorized blinds, Bali seems to be popular, and they use Somfy for the motorization. There are also some kickstarter things (MOVE, MySmartBlinds), but the last time I looked those didn’t really do what I want. Somfy likely has a good product, but it’s very expensive. It looks like it would cost about $150 per blind, which is just way too much for me. They want $30 just for the plastic wand that holds the batteries (8 x AA). We’re talking about a motor and a wireless controller to tell it what to do. It’s not rocket surgery, so why should it cost $150?

My requirements are:

  • Ability to tilt the blinds to one of three positions (up, middle, down) remotely via some wireless interface. I don’t care about raising or lowering the entire blind.
  • There must be some API for the wireless interface such that I can automate them myself (close at night, open in morning)
  • Tilt multiple blinds at the same time so they look coordinated.
  • Be power efficient – one set of batteries should last more than a year.

Somfy satisfies this if I also buy their “Universal RTS Interface” for $233, but that only makes their solution even more expensive. For the 6 blinds I wanted to motorize, it would cost about $1200. No way.

I’ve been meaning to get into microcontrollers for a while now, and I thought this would be the perfect project for me to start. About a year ago I bought a RedBear BLE Nano to play with some Bluetooth stuff, so I started with that. I got a hobby servo and a bunch of other junk (resistors, capacitors, etc) from Sparkfun and began flailing around while I had some time off around Christmas. The Arduino environment on the BLE Nano is a little weird, but I got things cobbled together relatively quickly. The servo was very noisy, and it’s difficult to control the speed, but it worked. Because I wanted to control multiple devices at once, BLE was not a really great option (since AFAIK there is no way to ‘broadcast’ stuff in a way that is power-efficient for the listeners), and I started looking at other options. Eventually I ran across the Moteino.

The Moteino is an Arduino clone paired with a RFM69W wireless radio, operating at either 915 MHz or 433 MHz. It also has a very efficient voltage regulator, making it suitable for battery powered applications. The creator of the board (Felix Rusu) has put in a lot of work to create libraries for the Moteino to make it useful in exactly my type of application, so I gave it a try. The RFM69


face

The blog has been a little bit silent – a typical sign of us working too hard to worry about that! But we’ll satisfy some of your curiosity in the coming weeks as we have about six posts in the pipeline.

The thing I would like to mention first is some fundamental research we are working on now. I stepped back from my daily Question Answering churn, took a little look around, and decided that the right thing to focus on for a while is the fundamentals of the NLP field, so that our machine learning works better and makes more sense. Warning: we’ll use some scientific jargon in this one post.

So, in the first months of 2016 I focused a huge chunk of my research on deep learning of natural language. That means neural networks used on unstructured text, in various forms, shapes and goals. I set some audacious goals for myself, fell short in some aspects, but hopefully still made some good progress. Here’s the deal – a lot of the current research is about processing a single sentence, maybe to classify its sentiment, translate it, or generate other sentences. But recently I have noticed many problems that are about scoring a pair of sentences. So I decided to look into that and try to build something that (A) works better and (B) actually has an API, so we can use it anywhere for anything.

My original goal was to build awesome new neural network architectures that would turn the field on its head. But I noticed that the field is a bit of a mess – there are a lot of tasks that are about the same thing, but very little cross-talk between them. So you get a paper that improves the task of Answer Sentence Selection, but would the models then do better on the Ubuntu Dialogue task, or on Paraphrasing datasets? Who knows! Meanwhile, each dataset has its own format and a lot of time is spent just writing the adapter code for it. Training protocols (from objectives to segmentation to embedding preinitializations) are inconsistent, and some datasets need a lot of improvement. Well, my goal turned into sorting out the field, cross-checking the same models on many tasks and providing a better entry point for others than I had.

Software: Getting a few students of the 3C group together, we have created the dataset-sts platform for all tasks and models that are about comparing two sentences using deep learning. We have pretty good coverage (of both tasks and models), and more brewing in some side branches. It’s in Python and uses the awesome Keras deep learning library.
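
As a flavour of what such a sentence-pair model can look like, here is an illustrative Keras 2-style sketch (not code from the dataset-sts repository):

# Illustrative sketch (not dataset-sts code): a minimal Keras model that
# scores a pair of sentences, in the spirit of the task family above.
from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

VOCAB, DIM, MAXLEN = 10000, 100, 30      # placeholder sizes

def build_pair_scorer():
    s1 = Input(shape=(MAXLEN,), dtype="int32")
    s2 = Input(shape=(MAXLEN,), dtype="int32")

    embed = Embedding(VOCAB, DIM)         # shared word embeddings
    encode = LSTM(128)                    # shared sentence encoder

    e1, e2 = encode(embed(s1)), encode(embed(s2))

    merged = concatenate([e1, e2])
    score = Dense(1, activation="sigmoid")(merged)   # similarity / relevance

    model = Model(inputs=[s1, s2], outputs=score)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model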

Paper: To kick things off research-wise, we have posted a paper Sentence Pair Scoring: Towards Unified Framework for Text Comprehension where we summed up what we have learned early in the process. A few highlights:

  • We have a lofty goal of building a universal text comprehension model, a sort of

Wednesday
18 May, 2016


face

Here we are after another Scrum sprint with our usual report about the activity in YaST development.

Trusted boot

YaST bootloader module got a new option, Trusted Boot (FATE#316553). It installs TrustedGRUB2 instead of the regular GRUB2. Trusted Boot means measuring the integrity of the boot process, with the help from the hardware (a TPM, Trusted Platform Module, chip).

It enables some interesting things which we unfortunately haven’t provided out of the box. We give you a bootloader which measures the boot integrity and places the results in Platform Configuration Registers (PCRs).

First you need to make sure Trusted Boot is enabled in the BIOS setup (the setting is named Security / Security Chip on Thinkpads, for example). Then you can enable the new YaST Bootloader option that will install TrustedGRUB2.

Trusted boot in YaST Bootloader

In the description of this pull request you can find a more detailed explanation including some commands and hexadecimal dumps to check the result. Geek pr0n!

SSH keys importing… and a glance at a YaST Developer’s life

When looking at any software project, it’s common to find some feature or piece of code that is there due to so-called “historical reasons”. The YaST2 code-base has been around since 1999, adapting to changes and new requirements on an (almost literally) daily basis since then. That leads to a new level of heritage – the “prehistoric reasons”. Working in the YaST Team implies coding, debugging, testing… and archaeological research.

We got a bug report about the installer “stealing” some SSH host keys (but not all of them) from previously installed systems. It was actually the effect of a little-known YaST feature that can look surprising (not to say weird) at first sight. Ten years ago, somebody decided that when installing SUSE in a networked environment, where people use SSH to log in, it was better to import SSH keys from a previously installed Linux than to have everybody who tries to connect get that “ssh host key changed” warning. The rationale was that forcing everybody to change the ~/.ssh/known_hosts file often could become a security breach, since people could get used to ignoring the security warnings. Welcome to the world of historical reasons. :-) Moreover, it was decided that the operation should be performed without showing any information to the users, in order not to confuse them.

More or less at the same time (we are still talking about 2006), it was decided to introduce importing of users from an existing system, this time with user interaction. The YaST developers decided that it would be fine to share some mechanisms in the implementation of both features. Another step into the historical reasons void.

Fast forward to the present. After several fate entries, bug reports and redesigns over the years, we decided to make the importing of SSH host keys more visible and usable, to make both functionalities (SSH import and users import) more independent and cleaner, and to take the first step towards cleaning up the insanity introduced through


face


Since the last openSUSE Tumbleweed update, there have been three snapshots, but the next snapshot is the one many users are waiting for because it will include Qt 5.6.

That snapshot has taken some time to be released because of a minor fix, but any snapshot dated 20160517 or higher will have Qt 5.6.

There are a few other exciting packages in staging that could soon arrive in follow-on Tumbleweed snapshots, like Plasma 5.6, Plasma Framework 5.22.0 and KDE Applications 16.04.1. The 4.6 Linux Kernel was also checked in to staging recently.

Already released in the repositories in recent Tumbleweed snapshots are GNOME 3.20.2 and Linux Kernel 4.5.4, from the 20160516 snapshot. Libzypp was updated to major version 16.0.0.

Snapshots 20160514 and 20160512 had a few package changes that updated translations for many of the new package updates.

The live installer was dropped from Tumbleweed. People can get live images, but there is no installer.

openSUSE Conference News

The schedule for the openSUSE Conference was released yesterday. The schedule is not complete and is still subject to change. Any presenter who has an issue with the scheduled date and time of their presentation should email ddemaio@suse.de.

Presenters who are not using a company or other project presentation template are asked to use the openSUSE presentation template for talks or workshops.

Release Engineer

openSUSE is looking for a Release Engineer. A job announcement was recently posted. The position description is listed as a release engineer. The position requires a proficiency in several major scripting languages like python, bash and perl. The applicant should understand open source communities and be passionate about Linux. The job location is listed in Nuremberg, but people who are interested in working remotely should also apply.


Monday
16 May, 2016


face

with a little help from SaltStack -

I’ve been running my personal blog on rootco.de for a few months now. The server is a minimal install of openSUSE Leap 42.1 running on a nice physical machine hosted at the awesome Hetzner, who offer openSUSE Leap as an OS on all of their Physical and Virtual server hosting. I use the standard Apache available in Leap, with Jekyll to generate this blog. You can actually see the source to this Jekyll blog on GitHub. And to manage it all I use the awesome SaltStack and keep all of my Salt configuration in GitHub also so you can see exactly how my system is setup.

Why am I sharing all of this? Well this weekend there was something I needed to fix.
http://rootco.de was running without HTTPS.

So What?

This site is a blog about Free Software & Open Source stuff, so why on earth does it need to be running HTTPS?

Because every single web service that can be HTTPS should be HTTPS. There are lots of good articles going back years as to why, but the simplest reason is that it helps ensure the content you see when you visit my blog is the content I intended for my blog. It’s very hard for someone to tamper with the content delivered from an HTTPS website. While I’m not (yet) hosting any interactive services on my server, if I do I want to ensure they’re secured by HTTPS so the data sent to my server is transferred as securely as possible.

And in this day and age, there is rarely an excuse to not use HTTPS for everything. Certificates used to be expensive and complicated to setup, but thanks to the wonderful project LetsEncrypt anyone can now get certificates for their domains for FREE.

Getting Started with LetsEncrypt

I started as anyone should, by reading the Getting Started Guide.
As there is not (yet) a certbot package for openSUSE Leap, I had to use the certbot-auto wrapper script.
The documentation recommends you install it using the following commands:

$ git clone https://github.com/certbot/certbot
$ cd certbot
$ ./certbot-auto --help

As I’m actually using SaltStack to manage my system, all I did instead was add the following to my Salt State for the rootco.de Web Server.

certbot:
  git.latest:
    - name: https://github.com/certbot/certbot
    - target: /opt/certbot
    - user: root

You can see the git commit HERE.

I then ran the following on my salt master to tell SaltStack to pull down the changes and apply them to the rootco.de Web Server.

$ git -C /srv/salt pull 
$ salt 'luke.rootco.de' state.highstate --state-output=changes

NOTE: I could have just waited, I actually have the above running as a cronjob every 30 minutes to make sure my server configuration stays as I have defined it in SaltStack

I then sanity checked the contents of /opt/certbot before proceeding. I


face

Today it's fifteen years since my first contribution to free software. I've changed several jobs since that time, all of them involved quite a lot of free software, and now I'm working fully on free software.

The first contribution happened to be to phpMyAdmin and consisted of a Czech translation:

Subject: Updated Czech translation of phpMyAdmin
From: Michal Cihar <cihar@email.cz>
To: swix@users.sourceforge.net
Date: Mon, 14 May 2001 11:23:36 +0200
X-Mailer: KMail [version 1.2]

Hi

I've updated (translated few added messages) Czech translation of phpMyAdmin. 
I send it to you in two encodings, because I thing that in distribution 
should be included version in ISO-8859-2 which is more standard than Windows 
1250.

Regards
    Michal Cihar

Many other contributions came afterwards, several projects died on the way, but it has been a great ride so far. To see some of these you can look at my software page, which contains both current and past projects and also includes tools I created earlier and open-sourced later (mostly for Windows).

These days you can find me being active on phpMyAdmin, Gammu, python-gammu and Wammu, Debian and Weblate.

Filed under: Debian English phpMyAdmin SUSE | 2 comments


Saturday
14 May, 2016


face

Dear Tumbleweed users and hackers,

Despite the ‘shorter’ work-weeks we currently see in Europe due to various celebrations, Tumbleweed keeps on rolling. This is of course thanks to our community that does not let itself be stopped by some days off from work.
This review will touch the snapshots 0508, 0511 and 0512.

Note-worthy updates shipped in those snapshots:

  • Plasma 5.6.3
  • Linux kernel 4.5.3

What is cooking:

  • Qt 5.6 – It is coming. Last night, a fix to address this was checked in to YaST. YaST is trying to work around an icewm issue, which would be much more complex to fix.
  • A switch away from sun rpc to tirpc. This is done in pam now and will move along to other areas, including glibc
  • KDE Application 16.04.1
  • Perl 5.24.0 – let’s be surprised on how much we might need to fix

Have a great weekend


Wednesday
11 May, 2016


face
No matter if you're a GSoC student in openSUSE, KDE, ownCloud or anywhere else, your community bonding period has started. This is not an easy time, because starting something new is always hard and this is, in a sense, a new job.

And many students are still busy with exams and other things. You are ambitious, of course, so you make promises to your mentor, and then you might not be able to follow through on them. You're too busy studying, or this family-and-friends thing gets in the way. Now what?

It is fine to make mistakes or miss a deadline...

Please understand that we get this! It is not a surprise and you're not alone. The key here is to communicate with your mentors. That way, they know why you're busy and when you will be back.

Not having time for something, even if you promised - really, that is OK. When you have a job in the future it will happen all the time that more urgent things come up and you can't meet a deadline. Key is that you TALK about it. Make sure people know.

Let me give you a short anecdote - something that didn't even happen that early in my career...

At some point early in my job at a new company, I was on a business trip and I missed my train. It was quite stupid: I got out at the wrong station. The result was that I had to buy a new ticket, spending over USD 180. I was quite upset about it and afraid to tell my manager about my blunder. I did the easiest thing: just avoid talking to my boss at all. As he was in the US and I was in Europe, that was not hard at all... But, after three weeks of finding all kinds of excuses to get out of our regular calls, he gave me a direct call and said: "what the heck is going on?". I admitted the whole thing and, of course, he was quite upset. But not at the USD 180. That is nothing on the budget of his or any team in any company. The cost of me not talking to him, though, that he was serious about, and I had to promise to never do that, ever, again.

... if you communicate about it

So what can you learn from my mistake? The rule, especially in the beginning of your career, is to over-communicate. Especially when it comes to new employees, many managers are anxious and worried about what is going on. Telling them often, even every day, how things are going and what you're doing is something they will never complain about.

You can practice during GSOC: sending a daily ping about the state to your mentor, even if it is "hey, I had no time yesterday, and won't have any today". And a weekly, bigger report on what you worked on is also

face

Since the last update, openSUSE Tumbleweed had two snapshots.

Snapshot 20160505 and 20160508 brought quite a few goodies for Tumbleweed users.

Firefox 46 and GNOME 3.20.2 were in the 20160505 snapshot along with some other packages like git 2.8.2, glib-networking 2.48.1 and a huge update to ostree in version 2016.5.

The most recent snapshot updated the Linux Kernel to 4.5.3 and Plasma to version 5.6.3. Bluedevil 5, breeze and hexchat were also updated in the 20160508 snapshot. Samba had an enormous list of bugs and fixes in the email that details the 20160508 snapshot.

The new GNU Compiler Collection is at least three to four weeks away from a Tumbleweed snapshot because of the huge update stack that will be rebuilt when GCC 6 makes it into Tumbleweed. Pre-testing in private staging has shown that GCC 6 should build smoothly in the various stages that lead to a Tumbleweed snapshot, but that remains to be seen until a snapshot with GCC 6 is at users’ fingertips.

As for Qt 5.6, the bug that broke YaST appears to be swarming between two chopsticks – it took forever to catch and fix. However, as Dominique shared on Saturday, a fix is found and merged so the 5.6.x release is coming soon! Perhaps we’ll wait for 5.6.1 as that should be out soon.


Tuesday
10 May, 2016


Michael Meeks: 2016-05-10 Tuesday.

21:00 UTC (member)

face
  • Practices with babes in the morning, mail chew, sales calls, built ESC bug stats, admin; customer call.

Monday
09 May, 2016


Michael Meeks: 2016-05-09 Monday.

21:00 UTC (member)

face
  • Mail chew; project planning, two team calls, and more calls in between more mail chew.
