Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Thursday
05 March, 2015


face

Distrowatch will be changing its color scheme for remarks on Tumbleweed from red to green. Tumbleweed was the first distribution to ship Kernel 3.19 in its snapshots, and the next snapshot will provide an update to systemd.

Systemd 219 has some fixes in the works and is currently blocking the release of the new Tumbleweed snapshot. Additional contributions would help speed up the release and would be highly appreciated.

While GCC 5 won’t be in the next few snapshots, progress is being made; the difficult GCC 5 fixes are the ones that remain.

As for the previous snapshot released earlier this week, Mozilla Firefox was updated to version 36. There is a significant number of additions in 36, and full support for HTTP/2 enables a faster, more scalable and responsive web. Mozilla also updated Thunderbird to 31.5.0, and MFSA 2015-24/CVE-2015-0822, which fixed the reading of local files through manipulation of form autocomplete, applies to both Firefox and Thunderbird.

For the most recent snapshot, visit the mailing list.


Tuesday
03 March, 2015


face
  • An inlaid GNOME logo, part 3

    This part in Spanish

    (Parts 1, 2)

    The next step is to make a little rice glue for the template. Thoroughly overcook a little rice, with too much water (I think I used something like 1:8 rice:water), and put it in the blender until it is a soft, even goop.

    Rice glue in the blender

    Spread the glue on the wood surfaces. I used a spatula; one can also use a brush.

    Spreading the glue

    I glued the shield onto the dark wood, and the GNOME foot onto the light wood. I put the toes closer to the sole of the foot so that all the pieces would fit. When they are cut, I'll spread the toes again.

    Shield, glued Foot, glued


Monday
02 March, 2015


face

I have become increasingly convinced that there is little difference between monitoring and testing. Often we can run our automated tests against a production system with only a little effort.

We are used to listening to our automated tests for feedback about our software in the form of test-smells. If our tests are complicated, it’s often a strong indicator that our code is poorly designed and thus hard to test. Perhaps code is too tightly coupled, making it hard to test parts in isolation. Perhaps a class has too many responsibilities so we find ourselves wanting to observe and alter private state.

Doesn’t the same apply to our production monitoring? If it’s hard to write checks to ensure our system is behaving as desired in production, it’s easy to blame the monitoring tool, but isn’t it a reflection on the applications we are monitoring? Why isn’t it easy to monitor them?

Just as we test-drive our code, I like to do monitoring check-driven development for our applications. Add checks that the desired business features are working in production even before we implement them, then implement them and deploy to production in order to make the monitoring checks pass. (This does require that the same team building the software is responsible for monitoring it in production.)
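As a hypothetical sketch (the endpoint path, the field name and the injected fetcher are illustrative assumptions, not any particular monitoring tool's API), a business-level check written before the feature ships might look like:

```python
def check_orders_feature(fetch_status):
    """A business-level production check, written before the feature exists.

    `fetch_status` is injected so the check itself stays testable: in
    production it would perform an HTTP GET against the app; in tests
    it can be a plain stub.
    """
    status = fetch_status("/health/orders")  # hypothetical endpoint
    if status.get("orders_processed_last_hour", 0) > 0:
        return True, "orders are flowing"
    return False, "no orders processed in the last hour"

# The check fails until the feature is implemented and deployed, then
# passes -- red/green, but against production rather than a test suite.
```

Because the fetcher is injected, the same check runs unchanged against a stub in CI and against the live system from the monitoring host.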

As I do this more I notice that, in the same way that TDD gives me feedback on code design (helping me write better code), check-driven development gives feedback on application design.

Why are our apps so hard to monitor? Perhaps they are too monolithic, so the monitoring tools can’t check behaviour or data buried in the middle. Perhaps they do not publish the information we need to ascertain whether they are working.

Here are four smells of production monitoring checks that I think give design feedback, just like their test counterparts. There are lots more.

  • Delving into Private State
  • Complexity, Verbosity, Repetition
  • Non-Determinism
  • Heisentests

Delving into Private State

Testing private methods in a class often indicates there’s another class screaming to be extracted. If something is complex enough to require its own test, but not part of the public interface of the object under test, it likely violates the single responsibility principle and should be pulled out.

Similarly, monitoring checks that peek into the internal state of an app may indicate there’s another app that could be extracted with a clearly separate responsibility that we could monitor independently.

For example, we had an application that ingested some logs, parsed them and generated some sequenced events, which then triggered various state changes.

We had thoroughly tested all of this, but we still occasionally had production issues due to unexpected input and problems with the sequence generation triggered by infrastructure-environment problems.

In response, we added monitoring that spied on the intermediate state of the events flowing through the system, and spied on the database state to keep track of the sequencing.

Monitoring that pokes into


face

Over the past two weeks I was able to successfully set up a multi-domain network with a couple of classmates, and I wanted to document our process here in the hope that someone will find it useful.

To start off, I have 3 machines on the same network. In my case these were virtual machines but this will work the same with physical machines. Each of them has a unique static IP, and they are all able to ping each other. They are also configured to allow traffic on the required ports for each service (DNS, NIS and NFS).

First, we need to create a primary DNS server for our domain. This server will answer all queries for our domain and will have a delegation within the root DNS server so that our caching servers will be able to query the domain. One important thing to note is that no device will ever use this server directly; instead, clients will be configured to use the caching DNS server, which will then do a recursive query and eventually query the primary server on their behalf.

The first thing you must do is install the bind package using the package manager and then back up the default config file. The following commands will accomplish this:

yum install bind
cp /etc/named.conf /etc/named.conf.BAK

Then you must edit the /etc/named.conf file using your favorite text editor and make its configuration similar to this (replace IPs and domain names as needed):

options {
	listen-on port 53 { 192.168.19.53; 127.0.0.1; };
	listen-on-v6 port 53 { any; };
	directory 	"/var/named";
	dump-file 	"/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
	allow-query     { any; };
	recursion no;
};

zone "ushamim.com" IN {
	type master;
	file "my-hosts.db";
	allow-update { none; };
};

zone "19.168.192.in-addr.arpa" IN {
	type master;
	file "my-rev.db";
	allow-update { none; };
};

Some things of note:

  1. DNSSEC is not enabled in this configuration. Normally you would want it on, but I have disabled it just to make this task easier.
  2. Note that recursion is disabled. We do not want to allow recursive queries from this server, as that would make us vulnerable to DoS attacks and add extra workload for this server.

Now that we have told the server where to look for its zone files, we must create them and fill their contents with the SOA, NS, A and PTR records. Let's start by creating /var/named/my-hosts.db:

$TTL 86400
@    IN   SOA  dns.ushamim.com.   root.host.ushamim.com. (
				42	; serial
				3H	; refresh
				15M	; retry
				1W	; expiry
				1D )	; minimum
@    IN   NS    dns.ushamim.com.
host.ushamim.com.   IN   A  192.168.19.1
dns.ushamim.com.   IN   A  192.168.19.53
nfs.ushamim.com.   IN   A  192.168.19.2
nis.ushamim.com.   IN   A  192.168.19.3
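Once the zone files are in place and named has been restarted, you can verify the records with dig (e.g. dig @192.168.19.53 dns.ushamim.com A). If dig is not available, the query can also be built by hand; the following stdlib-only Python sketch is an illustration of what such a query looks like on the wire (the server IP and hostname are the ones used in this example setup):

```python
import socket
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS A-record query packet (wire format per RFC 1035)."""
    # Header: ID, flags=0 (no recursion: our primary is authoritative),
    # QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels ending with a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.strip(".").split(".")
    )
    # QTYPE=1 (A), QCLASS=1 (IN)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def answer_count(server, name, port=53, timeout=2.0):
    """Send the query to `server` and return ANCOUNT from the reply header."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (server, port))
        reply, _ = s.recvfrom(512)
    return struct.unpack(">H", reply[6:8])[0]  # bytes 6-7 hold ANCOUNT

# Example, against the primary configured above:
#   answer_count("192.168.19.53", "dns.ushamim.com")
# should be 1 once the A record resolves.
```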

Then we can create the /var/named/my-rev.db file:

$TTL 86400
@    IN   SOA  host.ushamim.com.   root.host.ushamim 

face

I use the Open Build Service to work on openSUSE packages. There is a useful tutorial HERE. Here is a summary of 'osc' commands I use the most:

alias oosc='osc -A https://api.opensuse.org'


Assuming you will be using the openSUSE Build Service, you will need to include the -A option on all the commands shown below. If you set up this alias, you can save a lot of typing.

osc search PKG


Search for a package. You can also use http://software.opensuse.org/ and zypper search PKG is also helpful.

osc meta pkg PRJ PKG -e


If you are project maintainer of PRJ, you can create a package directly using this command, which will throw you into an editor and expect you to set up the package's META file.

osc bco PRJ PKG

osc branch -c PRJ PKG


If you are not a project maintainer of PRJ, you can still work on PKG by branching it to your home project. Since you typically will want to checkout immediately after branching, 'bco' is a handy abbreviation.

osc ar


Add new files, remove disappeared files -- forces the "repository" version into line with the working directory.

osc build REPOSITORY ARCH


Build the package locally -- typically I do this to make sure the package builds before committing it to the server, where it will build again. The REPOSITORY and ARCH can be chosen from the list produced by osc repos

osc chroot REPOSITORY ARCH


Builds take place in a chroot environment, and sometimes they fail mysteriously. This command gives you access to that chroot environment so you can debug. In more recent openSUSEs the directory to go to is ~/rpmbuild/BUILD/

osc vc


After making your changes, the changes file needs a new entry for each release. Do not edit the changes file by hand: instead, use this command to maintain it "automagically".

osc ci


Commit your changes to the server. Other SVN-like subcommands (like update, status, diff) also work as expected.

osc results


Check what the server is doing. Typically a build will be triggered by your commit. This command lets you see the status.

osc sr


'sr' is short for submitrequest -- this submits your changes to the PROJECT for review and, hopefully, acceptance by the project maintainers. If you're curious who those are, you can run osc maintainer (or osc bugowner)

osc rebuildpac


Sometimes it's desirable to trigger a rebuild on the OBS server.

NOTE ON LICENSING


JFYI: http://spdx.org/licenses/ lists all well known licenses and their original source. This becomes extremely handy if you start packaging.

face
The title is not merely clickbait, but my current opinion, after attending a programming competition for the first time. This post expresses my opinions on the hiring processes of [some of] the new-age companies through programming competitions and algorithms-focused interviews.

I believe that the assessment of a senior/architect-level programmer should be based on how cooperative [s]he is with others in creating interesting products, and on that track record, rather than on how competitive [s]he is in a contest.

Algorithms

In my lone programming competition experience (on HackerRank), the focus of the challenges was on algorithms (discrete math, combinatorics, etc.).

Using standard, simple algorithms instead of fancy, non-standard ones is a better idea in real life, where products have to last for a long time, oblivious to changing programmers. Fancy algorithms are usually untested and harder for a maintenance programmer to understand.

Often, it is efficient to use the APIs provided by the standard library or ubiquitously popular libraries (say, jQuery). Unless you are working in specific areas (say, compilers, memory management, etc.), in-depth knowledge of a wide range of algorithms may not be very beneficial (imo) in day-to-day work, as elaborated in the next section.

Runtime Costs

There are various factors that decide the runtime performance, such as: Disk accesses, Caches, Scalable designs, Pluggable architectures, Points of Failures, etc.

Algorithms mostly optimize one aspect: CPU cycles. There are other aspects (say, the choice of data structures, databases, frameworks, memory maps, indexes, how much to cache, etc.) which have a bigger impact on overall performance. CPU cycles are comparatively cheap, and we can afford to waste them rather than do bad I/O or a non-scalable design.

Most of the time, if you choose proper data structures and get your API design right, you can plug in the most efficient algorithm without affecting the other parts of the system, if and when your algorithm proves to be a real bottleneck. A good example is the evolution of filesystems and schedulers in the Linux kernel. Remember that the Intelligent Design school of software development is a myth.

In my decade of experience, I have seen more performance problems due to a poor choice of data structures or unnecessary I/O than due to a poor selection of algorithms. Remember, Ken Thompson said: "When in doubt, use brute force." It is not important to get the right algorithm on the first try. Getting the skeleton right is more important. The individual algorithms can be changed after profiling.
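The point about swapping algorithms after profiling is easy to sketch. In this hypothetical example, callers only ever see a membership-test function; the strategy behind it (brute force vs. binary search) can change without touching any call site:

```python
import bisect

def make_lookup(items):
    """Return a membership-test function for `items`.

    Callers depend only on the returned function, so the strategy
    behind it can be swapped after profiling without changing them.
    """
    items = list(items)
    if len(items) < 16:
        # Tiny input: brute force is perfectly fine
        return lambda x: x in items  # O(n) scan
    # Larger input: binary search over a sorted snapshot
    ordered = sorted(items)
    def contains(x):
        i = bisect.bisect_left(ordered, x)
        return i < len(ordered) and ordered[i] == x
    return contains
```

The skeleton (the lookup interface) is what matters; whether the first implementation was the clever one is irrelevant once the API boundary is right.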

At the same time, this should not be misconstrued as an argument to use bubblesort.

The 10,000 hour rule

Doing well in online programming competitions is mostly the 10,000-hour rule in action. Spend time in enough competitions and solve enough problems, and you will quickly know which algorithm or programming technique (say, dynamic programming or greedy) to employ when you see a problem.

Being an expert at online programming competitions does not guarantee that [s]he can be trusted with building or maintaining a large-scale system.

face

Last time I took a look at the Simple DirectMedia Layer (SDL) and how to get audio out of it. If SDL still feels a little too hard to cope with, I think I have a solution for you: libAO. Besides having a no-brainer API, libAO provides a huge list of supported operating systems.
So many audio systems are supported that you won't be disappointed. As much as I would like everyone to use RoarAudio, I don't see that happening any time soon (sorry, Roar, you had your time in the limelight), but supporting RoarAudio doesn't mean that libAO is obsolete. It's far from obsolete. libAO supports OSS, ALSA and PulseAudio out of the box; the only problem is that the license is GPL 2.0+, so it's a no-go for proprietary development.

History

libAO is developed under the Xiph umbrella. Xiph is the organization that brought you Ogg/Vorbis, FLAC and Theora, and currently they are hammering together the next-generation video codec Daala. The Opus audio codec standard is also a Xiph project. libAO arose from Xiph's need for a multi-platform audio output library for the Vorbis audio codec. At this point, if you have no clue what I just said in the last few sentences, I think you should take your spoon and start shovelling through the world of open source audio codecs.
Because of this history, libAO only has an output mode and doesn't use any callbacks. It doesn't have a fancy Float32 mode (as far as I understood), but that doesn't make it a bad thing. It works as expected: you just feed it bytes and after a while you hear them from your speakers.

What about API

Supported outputs: ALSA, OSS, JACK, Mac OS X, Windows
License: GNU General Public License 2.0+

As said, the libAO API is difficult to describe, since there is almost none of it. You initialize, ask for an output device, put in your mode settings and start feeding data. PulseAudio Simple is almost as easy as this, but it's still more difficult if you compare it to libAO. libAO doesn't support recording, so output only, and there must be a way to use a device other than the default, but it's not very easy to find, or I was too lazy to dig it out.

So who wants to use libAO? People in a hurry who don't mind the GPL license, someone with a very, very tiny need to just get audio out, and people who hate bloat.

Summary: so if you hate bloat and, again, the license doesn't make you unhappy, please use this library. Still, libAO has somewhat the same problem that ALSA has. It's mature, usable and ready for hardcore torturing, but is it sexy? No! Is it fresh? No, no! Is it something that will change its API every week or hour?

After this I have to choose what to bring up next. I have FFmpeg, GStreamer and VLC in the queue. If you have an opinion about the next framework, let me know.


Saturday
28 February, 2015


Richard Brown: Why openSUSE?

23:12 UTC


As this is hosted on my personal blog, this should probably go without saying, but given the topics covered in this post I wish to state for absolute clarity that the content of this post does not necessarily reflect the official opinion of the openSUSE Project or my employer, SUSE Linux GmbH. This is my personal opinion and should be treated as such

One thing I get asked, time and time again, is “Why openSUSE?”. The context isn’t always the same “Why should I use it?”, “Why should I contribute to openSUSE?”, “Why do you use it?”, “Why not use [some other distribution]?” but the question always boils down to “What’s special about the openSUSE Project?”

It’s a good question, and in the past, I think it’s one which both the openSUSE Project as a whole and me as an individual contributor have struggled to satisfactorily answer. But I don’t believe that difficulty in answering is due to a lack of good reasons, but an abundance of them, mixed in with a general tendency within our community to be very modest.

This post is my effort to address that, and highlight a few reasons why you, the reader, should contribute to the openSUSE Project and use the software we’re developing


Reason 1: We’re not (just) a Linux Distribution

Normally, the first thing people think of when they hear openSUSE is our ‘Regular Release Distribution’, such as openSUSE 13.2 which we released last year (Download it HERE)

The openSUSE Project however produces a whole bunch more.

For starters, we have not one, but technically TWO other Linux Distributions which we release and maintain

openSUSE Tumbleweed – our amazing rolling release, which manages to give its users a stable, usable Linux operating system with constantly updating software.
Perfect not only for developers, but for anyone who wants the latest and greatest software; leading Linux experts like Greg Kroah-Hartman have stated that rolling releases like Tumbleweed represent the ‘future’ of Linux distributions. I agree, and I think openSUSE’s is the best (for reasons that will be made clear later in this post).
In my case it’s replaced all but one of my Linux installations, which previously included non-openSUSE distributions. The last (my openSUSE 13.1 server) is likely to end up running Tumbleweed as soon as I have a weekend to move it over. I started small with Tumbleweed, having it on just one machine, but after a year, I just don’t see the need for the ‘old fashioned’ releases any more in any of my use cases.
If you want to see it for yourself, Download it HERE

openSUSE Evergreen – this is a great community-driven project to extend the lifecycle of select versions of our Regular Release beyond their usual “2 Releases + 2 Months” lifespan.
While technically not a ‘true’ separate Distribution in the purest sense of the word, it takes a herculean effort from our community to keep on patching the distribution


Friday
27 February, 2015


face

Qactus, a Qt-based OBS notifier, is out in the wild. Version 0.4.0 is the first release.
I started it a long time ago together with Sivan Greenberg as a personal project for learning Qt. And now it’s back to life :)

It features
– Build status viewer
– Row editor with autocompleter for project, package, repository and arch
– Submit requests viewer
– Bugs

This application is possible thanks to Marcus ‘darix’ Rueckert. He has helped me gain further knowledge of the OBS API.

I think this version is usable. So, why don’t you give it a try?
The source code is hosted on GitHub.


face
Lobby of the venue
Back home. Tired and jetlaggy, but satisfied: SCALE rocked!

SCALE loves ownCloud

The 13th Southern California Linux Expo was awesome! It is the biggest LinuxFest in the USA. While decidedly different in nature from Europe's biggest Linux event, which took place just three weeks prior (FOSDEM), we met similarly enthusiastic existing and future users. Conversations were also similar: about half the visitors already knew ownCloud, often using it or planning on deploying it; and the other half was more than a little delighted to hear about it, often exclaiming they had been looking for 'something like that' for a while. Negativity was extremely rare: I don't recall a single negative comment at SCALE (merely a few people who liked ownCloud but had no use for it personally). FOSDEM had one conversation that started unpleasantly but quickly turned around: even though one feature of ownCloud wasn't up to snuff, the user was happy with the experience as a whole.
Before the action started!

For most users, ownCloud was simply a wonderful product and they used it at home, deployed it for customers or managed it in their company. Some asked what features were coming or just arrived in ownCloud 8, or asked about the state of specific features and in more than one occasion they very enthusiastically told me how excited they were about ownCloud, how they loved it and how they were telling everybody to use it!

ownCloud to-go

Those who didn't know ownCloud were almost invariably surprised and excited. I can't count the times I heard "wow, why did I never hear about this before" and "dude, I've been looking for something like this for ever!". Often, people wondered how long ownCloud had been around (we just turned five), if it was open source (yes, with love), how many people contributed to it (719 and counting) and how many users it has (we guesstimate over 2 million, with 500,000 in this single deployment alone). Oh, and, does it scale? The deployment linked above and a mention of users like CERN can put most concerns to rest. Yes, ownCloud scales from Raspberry Pi to Atom Smashing size.

What came up a few times as a barrier to future use of ownCloud was pretty much what I discussed before. Running a server at home is not easy, so I walked by the EFF booth to ask about progress on Let's Encrypt, which will solve one aspect of that problem: more easily getting SSL certificates. I was told the project is on track for the second half of this year.
Frank and Bryan Lunduke

It is wonderful to have such energizing, positive, enthusiastic users - and to have such an enthusiastic booth crew to talk to them as well. At the booth we had Frank, Matt, Ron, Camila and myself. Awesome it was and we had great fun! Below a timelapse video of Saturday morning. It was

face

Apparently I haven’t blogged in nearly 5 years, wow! Most of my coding for fun in the last few years ended up being on systems internal to my previous company. Well, all that’s in the past now: I’m about to start an adventure of traveling for 6 months around Central/South America and South-East Asia.

But just before that, I had a couple of days' downtime, so I thought I would learn a little JavaScript and D3. You can see the result at:

http://francis.giannaros.org/fb-graph/
(works in Chrome/Firefox, you have to click ‘Accept’ on the Facebook popup)

Basically it will give you a graph visualisation of who you are tagged with on Facebook, and how your friend tags relate — at least all the ones available from the Facebook graph UI.

You can use the date slider at the top to restrict the timeline — it’s interesting to see how people come and go in your life, at least as far as Facebook photos are concerned.

Looking forward to future fun coding, maybe more directly in an established free software project again!

Facebook Photo Tag Graph



face

There’s a new build published today for the AMD FGLRX drivers.

It includes the new patch made by Sebastian Siebert supporting the kernel 3.19.x series, which you can have on the openSUSE 13.1, 13.2 and Tumbleweed distributions.

The server just got the new rpms, so you should be able to update with zypper ref -f && zypper up

Have fun.


Thursday
26 February, 2015


face

4:51 AM: I'm woken up by the sound of a loud chirp.  I roll over and pretend I didn't hear it.

4:52 AM: No luck, it's a smoke detector chirping from low battery.  Why can't this happen in the middle of the day?

I lay in bed for a while pretending that I can fall back asleep and ignore the chirp. I start thinking about the four Nest Protect systems I purchased and installed, and how much I wish I had replaced all of my smoke alarms. The only reason I hadn't replaced all of them is that they are $99 each and I wanted to give them a test run before I completely switched over.

4:55 AM: I'm out of bed standing in the hallway at a strange angle hoping that on the next chirp I can determine which room has the chirping smoke alarm.  It's one of three because the hallway and one bedroom both have Nest Protect units in them and "they don't chirp!"

4:56 AM: The chirp comes from in front of me so it has to be the other bedroom.  I open the bedroom door, hear my teenage son stir and wonder how he can sleep through the chirping.  I open the battery compartment, yank out the battery, test it on my tongue (it seems strong) and then hear another chirp coming from behind me.

4:57 AM: I have opened every smoke detector on the floor and removed the battery because I simply want to get back to bed.  I will deal with this in the morning.

Then I heard another chirp.

How is that possible?  I don't have any more of these "old school" smoke detectors that chirp.  Is there something else in my house that makes a sound like that?

I stand very still and wait for it.

chirp!

It's coming from the other bedroom.  The bedroom that has a Nest Protect in it.  I walk up to the door and crack it open trying not to wake my other son.  I look up at the Nest Protect unit on the ceiling and I'm stunned.  It chirps again.

Why am I stunned? Let me quote from the Nest Protect website (italics added):

What is Nightly Promise?
Have you ever woken up in the middle of the night because your smoke alarm was chirping? Nest Protect has a better way: Nightly Promise. Each night when you turn out the lights you’ll get a quick green glow which means the batteries and sensors of Nest Protect are working. It also means no dreaded chirps at midnight so you can sleep safe and sound.
 I pull the unit down and disconnect the power and tell my son to go back to sleep.  I take it in my office and it chirps again.

Problem Reported in my Phone

I decide to look at my phone and see that the Nest app


face

This time of the year again. The monitoring was prodding us with "Your certificate will expire soon". When we fiddled with the tools to create the new CSR, we wondered: "Can we go 4K?" 4K is hip right now. 4K video. 4K TVs. So why not a 4K certificate? A quick check with the security team about the extra CPU load, and a look at our monitoring:

"Yes we can"

So with this refresh, the certificates for all the SSL-enabled services in Nuremberg use 4096-bit keys. The setup has been running with the new certificate for a few days already, and so far we have not noticed any problems. The next stop will be upgrading the SSL endpoints to a newer distribution so we get TLS 1.2 and our A grade back. Stay tuned for more news on this front.


face

Several scientific packages were added to main openSUSE repositories (Factory) in January 2015.

  • harminv (web page)
    Program (and accompanying library) to solve the problem of harmonic inversion — given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.
  • libctl (web page)
    Guile-based library implementing flexible control files for scientific simulations.
  • meep (web page)
    Finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems, along with our MPB eigenmode package.
  • elpa (web page)
    A new efficient distributed parallel direct eigenvalue solver for symmetric matrices. It contains both an improved one-step ScaLAPACK type solver (ELPA1) and the two-step solver ELPA2.
    ELPA uses the same matrix layout as ScaLAPACK. The actual parallel linear algebra routines are completely rewritten. ELPA1 implements the same linear algebra as traditional solutions (reduction to tridiagonal form by Householder transforms, divide & conquer solution, eigenvector backtransform). In ELPA2, the reduction to tridiagonal form and the corresponding backtransform are replaced by a two-step version, giving an additional significant performance improvement.


Wednesday
25 February, 2015


Michael Meeks: 2015-02-25 Wednesday

21:00 UTC

  • Up early, quick mail chew; set off for Cambridge; into the office to see Tracie; read a great report. Train on to Edinburgh, worked on budgets. Extraordinarily frustrating experience with intermittent connectivity and Evolution on the train for some hours.
  • Enjoyed some of the talks at the Open Source Awards, and a great meal mid-stream.
  • Extraordinarily honoured to receive from Karen Sandler, on behalf of Collabora Productivity, the UK Open Source Awards 2015 - Best Organisation; gave a small acceptance spiel:
    • It is an honour: in a Cloud obsessed world to have a Commodity Client Software company represented. In a world obsessed by Tablets: to encourage Free Software that makes your PC/Mac keyboard really useful. Naturally, we do have a Tablet & phone version making good progress now (for the paper-haters).
    • LibreOffice 80+ million users: more than the UK's population. A brief correction - Collabora is only the 2nd largest contributor to the code - the 1st is volunteers in whose debt we all are. Everything we produce is Free Software.
    • Collabora - has a mission we believe in: To make Open Source rock (ono). We're betting our Productivity subsidiary on ODF and LibreOffice.
    • We're here to kill the old reasons not to deploy Free Software: with long term maintenance for three to five years; rich support options - backing our partner/resellers with a fast code-fix capability; and finally killing complaints - we can implement any feature, fix any bug, and integrate with any Line Of Business app for you.
    • In the productivity space - innovation is long overdue; Free Software can provide that. Thanks for coming & your support

face

It has been a while since we reported about YaST on this site. This post in Spanish from fellow openSUSE blogger Victorhck has inspired us to write about some exciting news that deserves to be shared with the whole openSUSE community. YaST has always been a completely free and open source project, but free and open source means way more than just having the code available on some server on the Internet. You may know that lowering the entry barrier to contributing to YaST has been one of the goals of the project.

The first big step was moving from YCP to a more popular, documented and widespread programming language like Ruby. The new Ruby-based codebase debuted in openSUSE 13.1, full of automatically converted code that looked “not so Ruby”. Now, with the revamped installation workflow introduced in openSUSE 13.2 and after a whole release cycle of refining and polishing the YaST code and the development tools, the world of YaST development is a nicer place for newcomers.

So we have the code publicly available and written in a nice popular language, we have easy to install development tools, we have a public IRC channel and an open mailing list and we have a group of experienced developers willing to help anybody wanting to jump aboard. What is missing?

Tons of documentation!

The YaST team has put some effort in the last months into gathering all the development documentation, which was dispersed, and creating new material. The result is the new YaST development landing page. The page is packed with information useful to anyone willing to get into the world of YaST development, and it also acts as a central documentation hub, containing links to information hosted on Rubydoc.info, doc.opensuse.org or the openSUSE wiki. Among other things, the page includes a guide with the first steps for newcomers, a section with documentation targeted at developers and another one with descriptions of the processes and guidelines observed while developing YaST.

One of the sources of information linked from the YaST landing page is the brand new tutorial titled “Creating the YaST journalctl module”. As the title suggests, the tutorial presents a very simple example of a YaST module developed from scratch in pure Ruby. The document focuses on the tools and the overall architecture, trying to strike a nice balance between theory and practice. All the example code and files used in the tutorial are available in a git repository that follows the learning timeline, with every tag corresponding to a step in the tutorial.

But this tutorial is not the only evidence of a flourishing Ruby future for YaST.

New modules

The last months have seen the birth of several new YaST modules written in Ruby from scratch. The source code of all of them is available on GitHub, and the modules themselves are all included and directly installable on openSUSE Tumbleweed, with the exception of the I/O Channels module, available only for SLE since it’s targeted at


Kohei Yoshida: Orcus 0.7.1 is out

00:56 UTC member

face

More than a year after the release of 0.7.0, I’m once again very happy to announce that version 0.7.1 of the Orcus library is now available for download. You can download the package from the project’s download page.

This is a maintenance release: it primarily includes bug fixes and build fixes since the 0.7.0 release, with no new features. That said, the most notable aspect of this release is that it is buildable with version 0.9.0 of the Ixion library, which was just released a week ago. So, if you are trying to package and distribute the newly-released Ixion library but are unable to do so because Orcus is not buildable with it, you might be interested in this release.

The next major upgrade will be 0.9.0 whose development is on-going on the current master branch. Hopefully that release won’t be too far away.


Tuesday
24 February, 2015


Michael Meeks: 2015-02-24 Tuesday

21:00 UTC member

face
  • Mail chew, built ESC stats; mail; lunch. Customer call. Reviewed the LibreOffice 4.4 feature set to write a LXF column, rather encouraged.
  • Booked train tickets to the great Open Source Awards tomorrow in Edinburgh.

face
The "Ceph Developer Summit" for the Infernalis release is on its way. The summit is planned for March 3rd and 4th. The blueprint submission period started on February 16th and will end on February 27th, 2015.

Do you miss something in Ceph or plan to develop some feature for the next release? It's your chance to submit a blueprint here.

face

Since the release of the 3.19 kernel in openSUSE Tumbleweed, the vmnet module fails to build for VMware Workstation 11.0.x.

VMware community message

Credit for the patch

patch available at 1

Execute the following steps to patch your VMware Workstation 11.0.x


Download the patch to /tmp:
# curl -L "https://docs.google.com/a/seader.us/uc?authuser=0&id=0BxMaO3Y-qL_1Z2NMSkxRdndzNlk&export=download" -o /tmp/vmnet-3.19.patch
Extract the vmnet module from sources:
# cd /usr/lib/vmware/modules/source
# tar -xf vmnet.tar
Apply the patch to the source:
# patch -p0 -i /tmp/vmnet-3.19.patch
Recreate the source archive:
# tar -cf vmnet.tar vmnet-only 
Remove leftover folder:
# rm -r *-only
Rebuild VMware modules:
# vmware-modconfig --console --install-all
Enjoy!


face

openSUSE Conference 2015

Link to oSC15 webpage

As you may already know, the Travel Support Program (TSP) provides travel sponsorships to members of the openSUSE community who want to attend the openSUSE Conference and need financial assistance. The openSUSE Conference 2015 will be held in the city of The Hague, Netherlands, from May 1st to May 5th.

The goal of the TSP is to help everybody in and around openSUSE to be able to go to the openSUSE Conference!

When and how

The application period will be open from February 24th to March 5th. Approval results will be announced via the TSP application on March 9th, and sponsorships must be accepted by March 12th. If a requester does not accept the sponsorship, the amount will be offered to the next person on the waiting list.

Remember: All requests will be managed through the TSP application at http://connect.opensuse.org/travel-support.

You will need an openSUSE Connect account in order to log in to the application and apply for sponsorship. Please be sure to fill in all of your personal details in your openSUSE Connect account to avoid delays or a rejected request. A good application with good information will be processed faster.

A few reminders and tips

  • Please read the TSP page carefully before you apply.
  • Any information you send to the Travel Committee will be private.
  • We want everybody there! Even if you think you would not qualify for travel support, just apply and give it a try! If you don’t try, you won’t get anything!
  • If you submitted an abstract to be presented you should mention it in your application.
  • The Travel Committee can reimburse up to 80% of travel and/or lodging costs. That includes hotel, hostel, plane, train, bus, even gas for those willing to drive. Remember, no taxis!
    • Important: Food and all local expenses are on you!
  • We want to sponsor as many people as possible, so please look for the best deal.
  • The Travel Committee won’t be able to book or pay for anything in advance. Reimbursement will be done after the event finishes, based on your expense receipts.
  • No receipts = no money. That’s the rule!

If you have any questions regarding your trip to the conference, do not hesitate to ask the TSP or oSC15 organizers.

We hope to see you there!


Monday
23 February, 2015


Michael Meeks: 2015-02-23 Monday

21:00 UTC member

face
  • Mail chew, 1:1 calls with people, lunch. Team call, sync with Lubos, mail, hacked a little bit on gtktiledviewer wrt. zoomed in selection overlay rendering. 2nd team call.

Sunday
22 February, 2015


Michael Meeks: 2015-02-22 Sunday

21:00 UTC member

face
  • Lie-in, out to NCC - Claire preached well; back for lunch with Dean and Moulouia(?) and their fun-sized girls. Quarter practice with the babes, watched a notably terrible 'Cindy' film; tea, read stories bed.

Saturday
21 February, 2015


Michael Meeks: 2015-02-21 Saturday

21:00 UTC member

face
  • Up earlyish; counted the area for tiles, into Bury to order just the right kind; to Noughton Park to play with the babes. Back for lunch.
  • Spent much of the afternoon moving more cupboards from wall A (the wrong wall) to wall B (the right wall). Considered the plans to create fitted family lockers for the assorted junk that four babes produce - to match JP's version we admired in Toronto.

face

openSUSE miniSummit T-shirtBy Bruno Friedmann

Hi Geekos, here is a small summary of our Thursday February 19th openSUSE miniSummit event here at SCALE 13x.

Located in the Century AB room, an 80-seat room, attendance varied between 50% and 85%.

Notably, 50% or more of the attendees were not affiliated with SUSE / openSUSE, which brought a good wealth of questions and feedback.

The day started with a talk about openSUSE / SUSE Xen and OpenStack by Peter Linnel and Russel Pavlicek.

One hour later, Manu Gupta presented the nuts and bolts of Google Summer of Code at openSUSE.

We then went to lunch, with conversations continuing in the corridors.

I opened the afternoon with my talk “them + me = we” about breaking mythic frontier.

Then, just after a small break, Mark Fasheh, a member of the SUSE Labs filesystem group, gave a talk about the Duperemove project: dedupe on Btrfs (have a look at the source on GitHub; the package is available on OBS).

The day continued with a Town Hall session in which Peter and I ran an open discussion with attendees, with interesting remarks and feedback from openSUSE users as well as complete newcomers. For example, the way systemd was introduced in the openSUSE distribution was appreciated (having the choice for two releases). It was an unstressful, open and positive exchange.

To follow, Bryan Lunduke and Peter animated a talk about “the 10 things you would love about SUSE and openSUSE if only you knew…”

I really enjoyed the way they numbered the slides…
Fresh, punchy, funky, the kind of talk I would like to see again at oSC15.

To finish the day, Markus Feilner of Linux Magazine (Germany) talked about openQA.

I found the day interesting and a perfect mix between openSUSE and SUSE, confirming the excellent partnership we have.

Thank you to the sponsors of this day and to all those who helped make it happen.

Links :
SCALE picture album day 1 : by Françoise on G+

openSUSE miniSummit day album :
Bruno’s Album on G+

Follow the news on G+ channel

Stay tuned for more news during this week-end.


Friday
20 February, 2015


Michael Meeks: 2015-02-20 Friday

21:00 UTC member

face
  • Mail chew, patch review, fixed an Android crasher, and tried to avoid a SIGILL running Android/Atom code on my AMD x86_64 machine in the android emulator: nasty, failed to overcome that with my Core II Duo. Perhaps new NDK's really generate lots of hyper-new instructions, odd.
  • Took apart my Motorola Xoom which has failed to charge reliably recently. Very impressed with the non-soldered-in, surface-mount - get really well secured (by moulding in the chassis) power connector; removed and tweaked this with a needle to mend it (for now).

face

A common frustration with Java is the inability to overload methods when the method signatures differ only by type parameters.

Here’s an example: we’d like to overload a method to take either a List of Strings or a List of Integers. This will not compile, because both methods have the same erasure.

class ErasureExample {
 
    public void doSomething(List<String> strings) {
        System.out.println("Doing something with a List of Strings " );
    }
 
    public void doSomething(List<Integer> ints) { 
        System.out.println("Doing something with a List of Integers " );
    }
 
}

After type erasure, everything in the angle brackets is gone and the compiler sees both methods with the same raw signature, which is prohibited by the spec:

public void doSomething(List strings)
public void doSomething(List ints)
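
A quick way to observe erasure at work is to compare the runtime classes of two differently-parameterized lists. (This demo class and its names are mine, not from the original post; it only illustrates that the type parameters vanish at runtime.)

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String... args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // After erasure both variables hold a plain ArrayList;
        // the element types exist only at compile time.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```

Since the JVM cannot tell the two parameter types apart, it has no way to dispatch between the two overloads, hence the compile-time prohibition.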

As with most Java things – if it’s not working, you’re probably not using enough lambdas. We can make it work with just one extra line of code per method.

class ErasureExample {
 
    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        System.out.println("Doing something with a List of Strings " );
    }
 
    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        System.out.println("Doing something with a List of Integers " );
    }
 
}

Now we can call the above as simply as the following, which will print “Doing something with a List of Strings” followed by “Doing something with a List of Integers”

public class Example {
 
    public static void main(String... args) {
        ErasureExample ee = new ErasureExample();
        ee.doSomething(() -> asList("aa","b"));
        ee.doSomething(() -> asList(1,2));
    }
}

Using the wrapped lists inside the method is straightforward. Here we print out the length of each string and print out each integer, doubled. It will cause the above main method to print “2124”.

class ErasureExample {
 
    public interface ListStringRef extends Supplier<List<String>> {}
    public void doSomething(ListStringRef strings) {
        strings.get().forEach(str -> System.out.print(str.length()));
    }
 
    public interface ListIntegerRef extends Supplier<List<Integer>> {}
    public void doSomething(ListIntegerRef ints) {
        ints.get().forEach(i -> System.out.print(i * 2));
    }
 
}

This works because the methods now have different erasure; in fact, the method signatures contain no generics at all. The only additional requirement is prefixing each argument with “() ->” at the call site, creating a lambda that is equivalent to a Supplier of whatever type your argument is.
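One practical consequence of this trick is that an already-constructed list cannot be passed to the method directly; it must be wrapped in a lambda that returns it. A minimal sketch (the `sizeOf` helper and class names here are my own illustrations, not part of the original post):

```java
import static java.util.Arrays.asList;

import java.util.List;
import java.util.function.Supplier;

public class ExistingListExample {

    // Same pattern as in the post: a named functional interface
    // that fixes the type parameter before erasure can remove it.
    interface ListStringRef extends Supplier<List<String>> {}

    static int sizeOf(ListStringRef strings) {
        return strings.get().size();
    }

    public static void main(String... args) {
        // The parameter type is ListStringRef, not List<String>,
        // so an existing list is wrapped in a lambda returning it.
        List<String> existing = asList("a", "b", "c");
        System.out.println(sizeOf(() -> existing)); // prints 3
    }
}
```

The wrapping lambda is cheap, but note that `get()` re-evaluates the lambda body on every call, so side effects in the supplier would run each time.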



face

1. captcha is not case-sensitive

2. you can get around the concurrent downloads limit using an incognito window in Chromium. If you need more downloads, chromium --temp-profile does the trick, too.

What you might want to know about Debian stable

Somehow, Debian stable rules do not apply to the Chromium web browser: you won't get security updates for it. I'd say that Chromium is the most security-critical package on the system, so this seems a strange decision to me. In any case, you may want to uninstall Chromium, or perhaps update to Debian testing.

Older blog entries ->