Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Monday
27 February, 2017



When I ran parallel read performance testing on an md raid1 device with two NVMe SSDs, I observed surprisingly bad throughput: with fio at a 64KB block size, 40 sequential read I/O jobs and an iodepth of 128, overall throughput was only 2.7GB/s, around 50% of the ideal performance number.

perf reported that the lock contention happens in the allow_barrier() and wait_barrier() code,

|        - 41.41%  fio [kernel.kallsyms]   [k] _raw_spin_lock_irqsave
|             - _raw_spin_lock_irqsave
|                         + 89.92% allow_barrier
|                         + 9.34% __wake_up
|        - 37.30%  fio [kernel.kallsyms]  [k] _raw_spin_lock_irq
|              - _raw_spin_lock_irq
|                         - 100.00% wait_barrier

The reason is that these I/O barrier related functions,

- raise_barrier()
- lower_barrier()
- wait_barrier()
- allow_barrier()

always take conf->resync_lock first, even when there is only regular read I/O and no resync I/O at all. This is a huge performance penalty.
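
For illustration, the old fast path looked roughly like the following sketch (simplified from the pre-patch raid1.c, with details elided): even a plain read has to take the shared lock just to account itself in conf->nr_pending.

/*
 * Simplified sketch of the old wait_barrier() (pre-patch raid1.c,
 * details elided): every regular I/O takes conf->resync_lock just
 * to bump the conf->nr_pending counter.
 */
static void wait_barrier(struct r1conf *conf)
{
	spin_lock_irq(&conf->resync_lock);
	if (conf->barrier) {
		/* A resync is in flight: sleep until raise_barrier()
		 * finishes, dropping and re-taking resync_lock while
		 * waiting. */
		wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
				    conf->resync_lock);
	}
	conf->nr_pending++;
	spin_unlock_irq(&conf->resync_lock);
}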

The solution is a lockless-like algorithm in the I/O barrier code, holding conf->resync_lock only when it has to.

The original idea is from Hannes Reinecke, and Neil Brown provided comments to improve it. I continued working on it and brought the patch into its current form.

In the new, simpler raid1 I/O barrier implementation, there are two wait-barrier functions:

  • wait_barrier()

This function, which calls _wait_barrier(), is used for regular write I/O. If there is resync I/O happening on the same I/O barrier bucket, or the whole array is frozen, the task will wait until there is no barrier on the same barrier bucket, or until the whole array is unfrozen.

  • wait_read_barrier()

Since regular read I/O won’t interfere with resync I/O (read_balance() will make sure only up-to-date data is read), it is unnecessary to wait for the barrier for regular read I/O; waiting is only necessary when the whole array is frozen.

The operations on conf->nr_pending[idx], conf->nr_waiting[idx] and conf->barrier[idx] are very carefully designed in raise_barrier(), lower_barrier(), _wait_barrier() and wait_read_barrier(), in order to avoid unnecessary spin locks in these functions. Once conf->nr_pending[idx] is increased, a resync I/O with the same barrier bucket index has to wait in raise_barrier(). Then, in _wait_barrier(), if no barrier is raised on the same bucket index and the array is not frozen, the regular I/O doesn’t need to hold conf->resync_lock; it can simply increase conf->nr_pending[idx] and return to its caller. wait_read_barrier() is very similar to _wait_barrier(); the only difference is that it waits only when the array is frozen. For heavy parallel read I/O, the lockless I/O barrier code gets rid of almost all of the spin lock cost.
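
To make the fast path concrete, here is a rough sketch of _wait_barrier(); it is simplified from the merged code (which ended up using atomic_t counters and a bit more bookkeeping), and the slow-path helper at the end is hypothetical shorthand:

/*
 * Simplified sketch of the lockless fast path in _wait_barrier().
 * The merged patch uses atomic_t counters; slow_path_wait() is a
 * hypothetical shorthand for the real slow path.
 */
static void _wait_barrier(struct r1conf *conf, int idx)
{
	/* Publish this I/O first, so a concurrent raise_barrier() on
	 * the same bucket is forced to wait for us. */
	atomic_inc(&conf->nr_pending[idx]);
	/* Pairs with the memory barrier in raise_barrier(): our
	 * increment must be visible before we read barrier[idx]. */
	smp_mb__after_atomic();

	/* Fast path: no barrier raised on this bucket and the array
	 * is not frozen; return without touching conf->resync_lock. */
	if (!READ_ONCE(conf->array_frozen) &&
	    !atomic_read(&conf->barrier[idx]))
		return;

	/* Slow path: undo the increment, then take conf->resync_lock
	 * and sleep until the barrier on this bucket is lowered. */
	atomic_dec(&conf->nr_pending[idx]);
	slow_path_wait(conf, idx);
}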

This patch significantly improves raid1 read performance. In my testing, on a raid1 device built with two NVMe SSDs, running fio with a 64KB block size, 40 sequential read I/O jobs and an iodepth of 128, overall throughput increases from 2.7GB/s to 4.6GB/s (+70%).

Thanks to Shaohua and Neil, who very patiently explained memory barriers and atomic operations to me and helped me compose this patch correctly. This patch was merged into Linux v4.11 with commit ID 824e47daddbf.



Commit 79ef3a8aa1cb (“raid1: Rewrite the implementation of iobarrier.”) introduced a sliding resync window for the raid1 I/O barrier. The idea limits I/O barriers to happening only inside a sliding resync window; regular I/Os outside this resync window don’t need to wait for a barrier any more. On a large raid1 device, it helps a lot to improve parallel write I/O throughput when there are background resync I/Os performing at the same time.

The idea of the sliding resync window is awesome, but the code complexity is a challenge. The sliding resync window requires several variables to work collectively; this is complex and very hard to make work correctly. Just grep for “Fixes: 79ef3a8aa1” in the kernel git log: there are 8 more patches fixing the original resync window patch. And this is not the end; any further related modification may easily introduce more regressions.

Therefore I decided to implement a much simpler raid1 I/O barrier by removing the resync window code; I believe life will be much easier that way.

The brief idea of the simpler barrier is:

  • Do not maintain a global unique resync window
  • Use multiple hash buckets to reduce I/O barrier conflicts; regular I/O only has to wait for a resync I/O when both have the same barrier bucket index, and vice versa.
  • I/O barrier waits can be reduced to an acceptable number if there are enough barrier buckets

Here is how the barrier buckets are designed:

  • BARRIER_UNIT_SECTOR_SIZE

The whole LBA address space of a raid1 device is divided into multiple barrier units, each of size BARRIER_UNIT_SECTOR_SIZE.

Bio requests won’t cross the border of a barrier unit, which means the maximum bio size is BARRIER_UNIT_SECTOR_SIZE<<9 bytes (64MB). For random I/O, 64MB is large enough for both read and write requests; for sequential I/O, considering that the underlying block layer may merge them into larger requests, 64MB is still good enough.

Neil Brown also points out that for the resync operation, “we want the resync to move from region to region fairly quickly so that the slowness caused by having to synchronize with the resync is averaged out over a fairly small time frame”. At full speed, resyncing 64MB should take less than 1 second. When resync is competing with other I/O, it could take up to a few minutes. Therefore 64MB is a fairly good range for resync.

  • BARRIER_BUCKETS_NR

There are BARRIER_BUCKETS_NR buckets in total, defined by:

#define BARRIER_BUCKETS_NR_BITS (PAGE_SHIFT - 2)
#define BARRIER_BUCKETS_NR (1<<BARRIER_BUCKETS_NR_BITS)

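Given these definitions, mapping an I/O to its barrier bucket is a cheap hash of its start sector (bios never cross a barrier unit border, so the start sector is enough). Here is a sketch of the idea; the merged patch contains an equivalent sector_to_idx() helper:

/* A barrier unit covers 1<<17 sectors: 2^17 * 512 bytes = 64MB. */
#define BARRIER_UNIT_SECTOR_BITS 17

/* Drop the offset inside the 64MB unit, then hash the unit number
 * into one of the BARRIER_BUCKETS_NR buckets. */
static int sector_to_idx(sector_t sector)
{
	return hash_long(sector >> BARRIER_UNIT_SECTOR_BITS,
			 BARRIER_BUCKETS_NR_BITS);
}
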
This patch changes the following members of struct r1conf from integers to arrays of integers:

- int     nr_pending;
- int     nr_waiting;
- int     nr_queued;
- int     barrier;
+ int   *nr_pending;
+ int   *nr_waiting;
+ int   *nr_queued;
+ int   *barrier;

The number of array elements is defined as BARRIER_BUCKETS_NR. For a 4KB kernel page size, (PAGE_SHIFT - 2) indicates there are 1024 I/O barrier buckets, and each array of integers occupies a single memory page. With 1024 buckets, a request smaller than the I/O barrier unit size has a ~0.1% chance to wait for



Last week was SUSE Hackweek and one of my projects was to get Let's Encrypt configured and working on my NAS.

Let's Encrypt is a project aimed at providing SSL certificates for free, in an automated way.

I wanted to get an SSL certificate for my Synology NAS. Synology now supports Let's Encrypt natively, but only if the NAS accepts incoming HTTP / HTTPS connections (which is not always what you want).

Fortunately, the protocol used by Let's Encrypt to validate a hostname (and generate a certificate), the Automatic Certificate Management Environment (ACME), has an alternative validation method, DNS-01, based on DNS.

DNS-01 requires access to your DNS server, so that you can add a validation token used by the Let's Encrypt server to ensure you own the domain name you are requesting a certificate for.
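
To make that concrete, here is a minimal C sketch, using libcurl, of pushing a validation token into a zone through a REST-style DNS API. The URL and JSON shape are assumptions modeled loosely on Gandi's LiveDNS beta interface rather than its exact API (acme.sh does the equivalent from shell with curl):

/*
 * Minimal sketch: publish an ACME DNS-01 validation token as a TXT
 * record via a REST-style DNS API, using libcurl. The endpoint and
 * JSON shape are illustrative assumptions, not a real provider API.
 */
#include <stdio.h>
#include <curl/curl.h>

int publish_acme_token(const char *domain, const char *token,
                       const char *api_key)
{
    char url[512], body[512], auth[256];
    struct curl_slist *hdrs = NULL;
    CURL *curl;
    CURLcode res;

    /* DNS-01 looks up the token in TXT _acme-challenge.<domain> */
    snprintf(url, sizeof(url),
             "https://dns.example.net/api/v5/domains/%s/records/"
             "_acme-challenge/TXT", domain);
    snprintf(body, sizeof(body),
             "{\"rrset_ttl\": 300, \"rrset_values\": [\"%s\"]}", token);
    snprintf(auth, sizeof(auth), "X-Api-Key: %s", api_key);

    curl = curl_easy_init();
    if (!curl)
        return -1;

    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, auth);

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); /* upsert the record */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    res = curl_easy_perform(curl);

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : -1;
}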

There are a lot of ACME implementations, but very few support DNS-01 validation with my DNS provider (gandi.net).

I ended up using acme.sh, fully written in shell script, and tried to plug Gandi DNS support into it.

After some tests, I discovered that Gandi's current DNS service does not allow fast-changing DNS zone information (which is somewhat a requirement for DNS-01 validation). Fortunately, Gandi now provides a new LiveDNS service, available in beta, with a RESTful HTTP API.

I was able to get it working quite rapidly with curl, and once the prototype was working, I cleaned everything up and created a pull request to integrate the support into acme.sh.

Now, my NAS has its own Let's Encrypt certificate and will renew it automatically every 90 days. Getting and installing a certificate for another server (running openSUSE Leap) only took me 5 minutes.

This was a pretty productive hackweek!



LinuxFest Northwest 2017, coming up the first weekend in May, promises to continue its tradition of providing a unique, active, fun experience for open-source enthusiasts at all experience levels. openSUSE continues its long-term sponsorship of the event, and we are looking forward to having a lot of fun! Submit your session proposals by March 1, 2017!

LinuxFest Northwest, if you’re not familiar, is one of the largest community-centric conferences in the USA, and a free+libre event (no attendance fees and registration is optional) promoting open source, open hardware, and community involvement. Now in its 16th year, with an audience rapidly approaching 2,000 people, the event continues to grow, attract a broader audience, and redefine the experience of a weekend conference. With a Linux Game Den, a Robotics Lab, a Job Fair (new this year), community mini-summits, as well as the expo hall and 8 – 10 parallel tracks of sessions, LFNW is a week of conference stuffed into a weekend.


left to right: Peter Linnell, Bryan Lunduke, Jon Hall (with the SUSE Chameleon), James Mason, Michael Miller at LinuxFest Northwest 2014

openSUSE has been a fixture at LFNW for years; I fondly recall my first LFNW, when ‘Reverend’ Ted Haeger demonstrated “compiz” on a tricked-out SUSE laptop… I’ve been hooked on the event ever since, volunteering with (and for the last few years, organizing) the openSUSE presence there. Over the years, we’ve had an all-star group come out and represent SUSE & openSUSE at LFNW:

  • Richard Brown, chairman of the openSUSE board and SUSE QA Technical Lead, who opined “Linuxfest Northwest is such a civilized, enjoyable conference you would be forgiven for mistaking it as Canadian”;
  • Bryan Lunduke, openSUSE board member and SUSE cheerleader-at-large responded “LinuxFest NW embodies the best of what it means to be a red-blooded American in a tech conference: donuts, burgers, robots, and Free Software”;
  • Ross Brunson, Certification Architect at SUSE, ran a SUSE certification course for free last year;
  • Michael Miller, President – Strategy, Alliances & Marketing at SUSE, and Bellingham resident;
  • Alan Clark, Linux Foundation board member;

They are assisted by a dedicated group of local volunteers: openSUSE users looking to give back to the community that supports their technical pursuits.


openSUSE community volunteers like Caleb (left), an avid Steam gamer, and Adrian (right), a Python and Postgres consultant, represent the technical diversity in our community.

LFNW’s Call for Papers is still open until March 1, so it’s not too late to submit a talk and join the openSUSE community at LFNW. For the first time, the event has a theme: “The Mechanics of Freedom”, inviting talks on AI and IoT, but sessions are welcome on a variety of topics. There’s room for everyone here, and the event is always looking for good ‘beginner’ content as well as technical deep dives, so whether you’d like to explain the mechanics of live kernel patching, or walk people through their first editing session in Gimp, your proposal is welcome. When you



Getting Started in Android Development: Part 1: Building the First App

Do you know programming and want to start with the Android platform? Just like me! Read on.

Thanks to SUSE, my employer, for sponsoring a company-wide Hack Week which this project was a part of!

In case you wonder why Android: it is a good balance of work and play. Android is not the coolest toy to play with at the moment, but it is the most versatile device that people are likely to have at hand, especially when traveling. And Android already outnumbers openSUSE and all other OSs in my household.

This is a three part series: 1) building an app, 2) publishing it on Google Play, 3) trimming it down. In this part, we'll set up the development environment, follow the official tutorial to build a trivial app, then build a trivial yet useful app of our own.

a screenshot of my first app

Installing the SDK

I am using openSUSE Leap 42.1 (x86_64). You will notice that I keep tallying the disk space taken. This is because I am a bit short of space on one of my machines, and need to have an idea how much cleanup is needed.

Went to https://developer.android.com/.

Downloaded Android Studio (2.2.3 for Linux, 438 MiB, unpacks to 785 MiB), followed the instructions, unpacking to /opt (getting /opt/android-studio).

Ran /opt/android-studio/bin/studio.sh. Was greeted by an "Android Studio Setup Wizard": chose a Standard setup. Additional download of 890MB (1412MB unpacked) to ~/Android/Sdk.

Got a slightly confusing notice about KVM emulator acceleration. It seems that if you have used KVM before on your machine, the SDK will use it out of the box. But even with acceleration, don't expect the emulator to be fast. If you have a real device, use that.

"Building Your First App"

For the most part I simply followed the tutorial for building, installing, and running a trivial app that asks for a message and then displays it. The documentation feels excellent!

The one non-obvious part was choosing which Android version, in other words which API level, to target. In the Target Android Devices dialog, the preselected option is API 15: Android 4.0.3 (IceCreamSandwich). That is presumably based on the current active device statistics, which result in the app being compatible with 97% of devices. The oldest option is API 9: Android 2.3 (Gingerbread), which was a bit disappointing since my older phone from 2010 runs API 8, Android 2.2 (Froyo). (Don't worry, I eventually solved that in part 3.) Fortunately my newer phone has API 22: Android 5.1.1. I installed the API 22 platform too, to match the phone, about 100MB.

Connected my phone with a USB cable, pressed Run, and there it was! Don't worry, a buggy app will just crash and not affect the rest of your phone.

Just Roll One Die

Now it looked like I



Stardicter 0.11, a set of scripts to convert some freely available dictionaries to the StarDict format, has been released today. There are mostly minor changes, and it was time to push them out in an official release. The most important one is fixed sorting of ASCII dictionaries, which used to break searching in some programs.

Full list of changes:

  • Improved deaccent filter.
  • Fixed sorting of ASCII dictionaries.

As usual, you can install it from pip, download the source, or download generated dictionaries from my website.

Filed under: Debian English StarDict SUSE | 0 comments


Sunday
26 February, 2017



Is someone using a notebook as an alarm clock? Yes, it would be easy if I did not suspend the machine overnight, but that would waste power and produce fan noise. I'd like a version that suspends the machine...



Weblate 2.12 should be released by the end of February, so it's now pretty much clear what will be in it. Let's look at some of the upcoming features.

There were many improvements in search related features. They got performance improvements (especially noticeable on site-wide search). Additionally, you can now search for strings within a translation project. On a related topic, search and replace is now available for component- or project-wide operations, which can help you in case of massive renaming in your translations.

We have worked on improving machine translation support as well; this time we've added support for Yandex. In case you know of some machine translation service which we do not yet support, please submit it to our issue tracker.

The biggest improvement so far comes in the visual context feature - it allows you to upload screenshots, which are later shown to translators to give them a better idea of where and in which context the translation is used. So far you had to manually upload a screenshot for every source string, which was far from easy to use. With Weblate 2.12 (and this is already available on Hosted Weblate right now) screenshot management has become way better.

There is now a separate interface to manage screenshots (see the screenshots for Weblate as an example). You can assign every screenshot to multiple source strings, and you can also let Weblate automatically recognize texts on the screenshots using OCR and suggest strings to assign. This can save you quite a lot of effort, especially with screenshots containing lots of strings. This feature is still in an early phase, so the suggestions are not always 100% matching, but we're working to improve it further.

There will be some more features as well; you can look at our 2.12 milestone on GitHub to follow the progress.

Filed under: Debian English SUSE Weblate | 0 comments


Saturday
25 February, 2017


Old build workers, rack mounted

One year after introducing a new kind of Open Build Service worker machines, the “lambkins”, the openSUSE Build Service got a big hardware refresh. The new machines, sponsored by SUSE, are equipped with:

  • 2.8GHz AMD Opteron Processors (6348)
  • 256 GB RAM
  • one 120 GB SSD

Four of them are located in a chassis with a height of 2 units and run 12-16 workers each (virtual machines that build packages).

That new build power allowed us to remove some of the old machines from the pool. The unified hardware makes the management of the machines a lot easier now, even if the most powerful of the old machines are still left.

For those who like some more pictures, feel free to check the rest of the entry…


A view of the backside of two racks, containing 8×4 servers and 2 switches.


Some OBS worker racks from the back. The white one on the left contains old x86 machines, the four in the middle contain the lamb workers, and the rack on the right contains the cloud workers.



While we had some fun and good food and drinks, we also managed to discuss a lot during the three days in the Nuremberg headquarters. This was needed, because this was the first time that the Heroes came together in their current form. In the end, we managed to do no coding and even (nearly) no administration – but instead we started to discuss our (internal and external) policies and workflows – and made some decisions regarding the next steps and the future of the openSUSE infrastructure.

openSUSE Heroes meeting

So what are our results – and what does the prioritized action item list look like?

First of all: the complete meeting minutes can be found on our mailing list. The list items below are just a condensed subset containing the most important parts of the three days.

We identified the following list of tasks and assigned priorities and people to them:

Priority | Task                                                   | Assignee
---------|--------------------------------------------------------|---------------------------------------------------
1        | Setup FreeIPA                                          | darix
1        | openVPN setup for NUE and PRV machine access by admins | darix, tbro
2        | Saltifying services                                    | cboltz, skriesch, tampakrap, CyReVolt, darix, tbro
2        | openSUSE Cloud in Provo                                | gschlotter, cmueller
3        | Updating our documentation                             | tampakrap, CyReVolt, lrupp
3        | Progress clean up (Projects and Tickets)               | tampakrap, lrupp
4        | Provide a hardware wishlist                            | cmueller, tbro
4        | Setup external monitoring                              | skriesch, lrupp
4        | Mediawiki separation and upgrade                       | cboltz, skriesch
5        | CDN77 testing                                          | tampakrap, darix
5        | handle connect.opensuse.org                            | tampakrap, lrupp
6        | Hermes shut down                                       | tbro
6        | migrate scanner-opensuse                               | tampakrap, darix
6        | handle paste.o.o and planet.o.o                        | tampakrap

There are – for sure – a lot more “TODOs” and “action items” on our list (just read the minutes to get an idea), but we think that we might see big progress if we can finish the above list in a reasonable time. While we have just started and cannot provide much documentation or “junior jobs” yet, we are always looking for people who want to join our team and help. In such a case: just get in contact with us!



During last year's Summer of Code I had the honor of mentoring Nanduni Indeewaree Nimalsiri. She worked on Inqlude, the comprehensive archive of third party Qt libraries, improving the tooling to create a better structured web site with additional features such as categorization by topic. She did an excellent job with it and all of her code ended up on the master branch. But we hadn't yet made the switch to change the default layout of the web site to fully take advantage of all her work. As part of SUSE's 15th Hack Week, which is taking place this week, I took some time to change that, put up some finishing touches, and switch the Inqlude web site to the new layout. So here we are. I proudly present the new improved home page of Inqlude.


All libraries have additional meta data now to group them by a number of curated topics. You can see the topics in the navigation bar on the left and use them to navigate Inqlude by categories. The listing shows more information on first view, such as the supported platforms, to make it easier to find libraries according to your criteria without having to navigate between different pages. The presentation in general is cleaner now, and some usability testing has shown that the page works better now than before. In addition to the visible changes, Nanduni has also done quite a bit of improvements under the hood, including better automated testing. I'm proud of what we have achieved there.

It always has been a privilege for me to act as mentor as part of Google's Summer of Code or other programs. This is one of the most rewarding parts of working in free software communities, to see how new people learn and grow, especially if they decide to stay involved after the program has ended and become valuable members of the community for the long term. Being able to help with that I feel is one of the most satisfying investments of your time in the community.


Dear Tumbleweed users and hackers,

This week we had to cancel a couple of snapshots, as a regression was detected in grub that caused issues when chain-loading bootloaders. But thanks to our genius maintainers, the issue was found, fixed and integrated into Tumbleweed (and this despite their being busy with Hack Week. A great THANK YOU!). Despite those canceled snapshots, this review will still span 4 revisions: 0216, 0218, 0219 and 0224. And believe me, quite a few things have been coming your way.

The most impacting were:

  • Linux kernel 4.9.10 & 4.9.11
  • Python 3.6
  • Mesa 17.0
  • KDE Plasma 5.9.2
  • Libreoffice 5.3.0.3

Python 3.6 did cause some trouble for users of applications beyond the rings (remember: only ring packages are tested on integration of any updated package). You can find a list of packages still failing at goo.gl/3AX7sK – help to get those sorted out is appreciated from all sides.

What else is coming our way in the near future:

  • RPM 4.13.0.1 – A regression was already spotted during 4.13.0 staging and upstream corrected it. A handful of packages still need adjustments
  • GLibc 2.25 – Nobody seems to be providing fixes for any of the failures… Call for help! Please let’s move this forward
  • Linux Kernel 4.10 (snapshot 0225+)
  • python single-spec work: building code for multiple python targets out of a single spec file; first submissions started

As a reminder, here once again is the link to the Factory dashboard, where you can see which staging areas are in need of help. All build failures need to be addressed in one way or another for a staging project to move forward.

With all this said, go forth and have a lot of fun!


Friday
24 February, 2017


  • Griping about parsers and shitty specifications

    The last time, I wrote about converting librsvg's tree of SVG element nodes from C to Rust — basically, the N-ary tree that makes up librsvg's view of an SVG file's structure. Over the past week I've been porting the code that actually implements specific shapes. I ran into the problem of needing to port the little parser that librsvg uses for SVG's list-of-points data type; this is what SVG uses for the points in a polygon or a polyline. In this post, I want to tell you my story of writing Rust parsers, and I also want to vent a bit.

    My history of parsing in Rust

    I've been using hand-written parsers in librsvg, basically learning how to write parsers in Rust as I go. In a rough timeline, this is what I've done:

    1. First was the big-ass parser for Bézier path data, an awkward recursive-descent monster that looks very much like what I would have written in C, and which is in dire need of refactoring (the tests are awesome, though, if you (hint, hint) want to lend a hand).

    2. Then was the simple parser for RsvgLength values, in which the most expedient thing, if not the prettiest, was to reimplement strtod() in Rust. You know how C and/or glib have a wealth of functions to write half-assed parsers quickly? Yeah, that's what I went for here. However, with this I learned that strtod()'s behavior of returning a pointer to the thing-after-the-number can be readily implemented with Rust's string slices.

    3. Then was the little parser for preserveAspectRatio values, which actually uses Rust idioms like using a split_whitespace() iterator and returning a custom error inside a Result.

    4. Lastly, I've implemented a parser for SVG's list-of-points data type. While shapes like rect and circle simply use RsvgLength values which can have units, like percentages or points:

      <rect x="5pt" y="6pt" width="7.5 pt" height="80%/">
      
      <circle cx="25.0" cy="30.0" r="10.0"/>
      		

      In contrast, polygon and polyline elements let you specify lists of points using a simple but quirky grammar that does not use units; the numbers are relative to the object's current coordinate system:

      <polygon points="100,60 120,140 140,60 160,140 180,60 180,100 100,100"/>
      		

    Quirky grammars and shitty specs

The first inconsistency in the above is that some object types let you specify their positions and sizes with actual units (points, centimeters, ems), while others don't — for these last ones, the coordinates you specify are relative to the object's current transformation. This is not a big deal; I suspect it is either an oversight in the spec, or they didn't want to think of a grammar that would accommodate units like that.

    The second inconsistency is in SVG's quirky grammars. These are equivalent point lists:

    points="100,-60 120,-140 140,60 160,140 

Wednesday
22 February, 2017


Michael Meeks: 2017-02-22 Wednesday.

17:01 UTC

  • Excited to be demoing a new Ubuntu / Snap based solution alongside Ubuntu and Nextcloud at MWC Barcelona.
  • Also pleased to see AMD release Ryzen, making 16 threads accessible to everyone - helping your LibreOffice compiles (and load times of course too); good stuff.

Flash fraud is real, and it affects hundreds of millions... or thousands... well, I have no idea how many people it really affects, but I have been a victim of flash fraud once and the experience was very irritating.

What is flash fraud? In short, you are sold a device of lower capacity than what was advertised. Solid state media comes in many forms; the most common consumer forms are SD cards and USB flash drives (yes, there are other forms, SSDs, mSATAs, etc., but those are not as common for off-the-shelf or cheap data storage). It is possible for a seller to take a lower-capacity flash device like a USB drive or SD card and modify the controller so that it reports a total capacity larger than the reality.

The casual consumer could see a great deal on the Internet, thinking he/she is getting a steal on a 32GB MicroSD card when it is unknowingly only a 16GB card. Someone may actually go for weeks, months, even a year without any problems at all. The problem will show when the victim takes the 10,401st selfie, which overwrites the first selfie, and the whole file system starts to unravel and corrupt. The average user may not know they are using a faulty drive for months after purchase, well past the 30-day return policy.

Fight Flash Fraud

You can fight flash fraud by checking new devices as soon as you buy them. The tool to do that on Linux is F3, which can be found in the openSUSE Software Service. It is a terminal-only program; although there is a user interface, it is not compiled for openSUSE at this time. This is not a problem, as it is really quite easy to use... and FUN!

The two primary functions you will use are f3write and f3read; that is how you will conduct the test on the drive itself.

Installation

As with everything using the openSUSE Build Service, it is really easy to install: just take a quick trip over to the openSUSE Software Service, select your distribution version number, and do the 1 Click Install.

f3write

Since this is only available as a terminal program in openSUSE from the repositories, I will go through using it in the terminal, with KDE as my desktop environment.

Insert your USB drive or SD card, whatever flash medium you want to verify.

Mount the drive in the system. I am using KDE, but it should work similarly on other desktops.

Open the file location in your file manager and copy the location to your clipboard (Ctrl+C).

Open a terminal, like Konsole or xterm.

At the prompt, type the command f3write followed by the location of the mounted drive. It should look something like this:

f3write [location to the media that needs to be checked]

Here is an example of what it looks

Tuesday
21 February, 2017


Cameron Seader: Link

23:44 UTC


OpenStack Summit Boston 2017 Presentation Votes (ends Feb. 21st, 2017 at 11:59pm PST)

Open voting is available for all session submissions until Tuesday, Feb 21, 2017 at 11:59PM PST. This is a great way for the community to decide what they want to hear.

I have submitted a handful of sessions which I hope will be voted for. Below are some short summaries and links to their voting pages.

Avoid the storm! Tips on deploying the Enterprise Cloud
The primary driver for enterprise organizations choosing to deploy a private cloud is to enable on-demand access to the resources that the business needs to respond to market opportunities. But business agility requires availability... 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/#/18317
Keys to Successful Data Center Modernization to Infrastructure Agility
Data center modernization and consolidation is the continuous optimization and enhancement of existing data center infrastructure, enabling better support for mission-critical and Mode 1 applications. The companion Key Initiative, "Infrastructure Agility" focuses on Mode 2...
https://www.openstack.org/summit/austin-2016/vote-for-speakers/#/18403
Best Practices with Cloud Native Microservices on OpenStack
It doesn't matter where you're at with your implementation of Microservices, but you do need to understand some key fundamentals when it comes to designing and properly deploying on OpenStack. If you're just starting out, then you will need to learn some key things such as the common characteristics, monolithic vs microservice, componentization, and decentralized governance, to name a few. In this session you'll learn some of these basics and where to start...
https://www.openstack.org/summit/austin-2016/vote-for-speakers/#/18336
Thanks for your support.
-CS


Nice machine. Slightly bigger than the X60, with a bezel around the display that is way too big, but quite powerful. The biggest problem seems to be that it does not accept 9.5mm-high drives...

I tried 4.10 there, and got two nasty messages during bootup. Am I the last one running 32-bit kernels?

I was hoping to get a three-monitor configuration on my desk, but apparently the X220 cannot do that. xrandr reports 8 outputs (!), but it physically has only 3: LVDS, DisplayPort and VGA. Unfortunately, it seems to have only 2 CRTCs, so only 2 outputs can be active at a time. Is there a way around that?


Michael Meeks: 2017-02-21 Tuesday.

21:00 UTC

  • Mail, financials with Tracie, call with Eloy; then Ash & Kendy. Built ESC stats. Out to cubs in the evening with M. - ran a code-breaking table with several fun encoding schemes.


This is the first security bugfix release for Weblate. This had to come at some point; fortunately the issue is not really severe. But Weblate got its first CVE ID today, so it's time to address it in a bugfix release.

Full list of changes:

  • Do not leak account existence on password reset form (CVE-2017-5537).

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the password demo, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

Filed under: Debian English SUSE Weblate | 0 comments


Monday
20 February, 2017


The next OpenStack Summit takes place in Boston, MA (USA) on May 8-11, 2017. The "Vote for Presentations" period has already started, and all proposals are again up for community votes. The period will end February 21st at 11:59pm PST (February 22nd at 8:59am CEST).

This time I have submitted a proposal together with WDLabs:

  • Next Generation Hardware for Software Defined Storage - Software Defined Storage like Ceph has changed the storage market dramatically in the last few years. While the software has changed, storage hardware has stayed basically the same: commodity servers connected to JBODs utilizing SAS/SATA devices. The next step must be a revolution in the hardware too. At the Austin summit the Ceph community presented a 4 PB Ceph cluster comprised of WDLabs Converged Microservers. Each Microserver is built by starting with an HGST HE8 HDD platform and adding an ARM CPU and DDR memory, running Linux on the drive itself. WDLabs provided access to early production devices for key customers such as Deutsche Telekom for adoption and feedback. This talk will provide insight into our findings running a Ceph cluster on this platform as a storage provider to OpenStack.
The voting process has changed again this time; unique URLs to proposals seem to work again. So if you would like to vote for my talk, use this link or search for the proposal (e.g. use the title from above or search for "Al-Gaaf"). As always: every vote is highly welcome!

As in previous rounds, I highly recommend searching also for "Ceph" or whatever topic you are interested in. You can find the voting page here, with all proposals and abstracts. I'm looking forward to seeing if, and which, of these talks will be selected.

Michael Meeks: 2017-02-20 Monday.

21:00 UTC

  • Poked mail; sync. with Kendy, consultancy call. Out for a walk, lunch. Back to financials / 2017 projections and calls.


As we announced in the previous report, our 31st Scrum sprint was slightly shorter than the usual ones. But you would never say so looking at this blog post. We have a lot of things to tell you about!

Teaching the installer self-update to be a better citizen

As you may know, YaST includes a nice feature which allows the installer to update itself in order to solve bugs in the installation media. This mechanism has been included in SUSE Linux Enterprise 12 SP2, although it’s not enabled by default (you need to pass an extra option, selfupdate=1, to make it work).

So after getting some feedback, we’re working on fixing some usability problems. The first of them is that, in some situations, the self-update mechanism is too intrusive.

Consider the following scenario: you’re installing a system behind a firewall which prevents the machine from connecting to the outside network. As the SUSE Customer Center will be unreachable, YaST complains about not being able to get the list of repositories for the self-update. And after that, you get another complaint because the fallback repository is not accessible. Two error messages and two timeouts.

And the situation could be even worse if you don’t have access to a DNS server (add another error message).

So after some discussion, we’ve decided to show such errors only if the user has specified an SMT server or a custom self-update repository (which is also possible). In any other case, the error is logged and the self-update is skipped completely.

You can find further information in our Self-Update Use Cases and Error Handling document.

During upcoming sprints, we’ll keep working on making the self-update feature great!

Configuring workers during CaaSP installation

While the CaaSP release approaches, our team is still working hard to satisfy the new requirements. Thankfully, YaST is a pretty flexible tool and it allows us to change a lot of things.

For the CaaSP installation, YaST features a one-dialog installation screen. During this sprint, configuration of the Worker role was implemented, including validation of the entered URL and writing the value to the installed system. You can check the animated screenshot for details.

The CaaSP worker configuration

New desktop selection screen in openSUSE installer

The world of Linux desktop environments changes relatively quickly, with new options popping up and some projects being abandoned. Thanks to openSUSE’s community of packagers, we have a lot of these new desktop environments available on the openSUSE distributions. But the status of those packages for openSUSE is also subject to change: some desktop environments are poorly maintained, while others have a strong and active group of packagers and maintainers behind them.

The YaST Team does not have enough overview and time to watch all these desktop environments and evaluate which ones are or are not ready to be in the installer’s desktop selection screen. So the openSUSE Release Team decided to replace this dialog with something a bit more generic but still useful for newcomers.

They asked


Sunday
19 February, 2017


Michael Meeks: 2017-02-19 Sunday.

21:00 UTC

  • NCC in The Stable in the morning; Tony & Anne back for lunch, played a game; quartet practice, tea & read babies.

Saturday
18 February, 2017


Michael Meeks: 2017-02-18 Saturday.

21:00 UTC

  • Up rather late; breakfast; slugged variously. Driven home with babes and more Green Ring-ness. Relaxed variously & watched Robin Hood, Prince of Thieves.

Friday
17 February, 2017


  • How librsvg exports reference-counted objects from Rust to C

    Librsvg maintains a tree of RsvgNode objects; each of these corresponds to an SVG element. An RsvgNode is a node in an n-ary tree; for example, a node for an SVG "group" can have any number of children that represent various shapes. The toplevel element is the root of the tree, and it is the "svg" XML element at the beginning of an SVG file.

    Last December I started to sketch out the Rust code to replace the C implementation of RsvgNode. Today I was able to push a working version to the librsvg repository. This is a major milestone for myself, and this post is a description of that journey.

    Nodes in librsvg in C

    Librsvg used to have a very simple scheme for memory management and its representation of SVG objects. There was a basic RsvgNode structure:

    typedef enum {
        RSVG_NODE_TYPE_INVALID,
        RSVG_NODE_TYPE_CHARS,
        RSVG_NODE_TYPE_CIRCLE,
        RSVG_NODE_TYPE_CLIP_PATH,
        /* ... a bunch of other node types */
    } RsvgNodeType;

    typedef struct {
        RsvgState    *state;
        RsvgNode     *parent;
        GPtrArray    *children;
        RsvgNodeType  type;
    
        void (*free)     (RsvgNode *self);
        void (*draw)     (RsvgNode *self, RsvgDrawingCtx *ctx, int dominate);
        void (*set_atts) (RsvgNode *self, RsvgHandle *handle, RsvgPropertyBag *pbag);
    } RsvgNode;

    This is a no-frills base struct for SVG objects; it just has the node's parent, its children, its type, the CSS state for the node, and a virtual function table with just three methods. In typical C fashion for derived objects, each concrete object type is declared similar to the following one:

    typedef struct {
        RsvgNode super;
        RsvgLength cx, cy, r;
    } RsvgNodeCircle;

    The user-facing object in librsvg is an RsvgHandle: that is what you get out of the API when you load an SVG file. Internally, the RsvgHandle has a tree of RsvgNode objects — actually, a tree of concrete implementations like the RsvgNodeCircle above or others like RsvgNodeGroup (for groups of objects) or RsvgNodePath (for Bézier paths).

    Also, the RsvgHandle has an all_nodes array, which is a big list of all the RsvgNode objects that it is handling, regardless of their position in the tree. It also has a hash table that maps string IDs to nodes, for when the XML elements in the SVG have an "id" attribute to name them. At various times, the RsvgHandle or the drawing-time machinery may have extra references to nodes within the tree.

    Memory management is simple. Nodes get allocated at loading time, and they never get freed or moved around until the RsvgHandle is destroyed. To free the nodes, the RsvgHandle code just goes through its all_nodes array and calls the node->free() method on each of them. Any references to the nodes that remain in other places will dangle, but since everything is being freed anyway, things are fine. Before the RsvgHandle is freed, the code can copy pointers around with impunity, as it knows that the all_nodes array basically stores the "master" pointers that will need to be freed in the end.

    But Rust doesn't work that way

    Not so, indeed! C lets you copy pointers


Michael Meeks: 2017-02-17 Friday.

21:00 UTC

  • Up; worked through the morning on the mail and task backlog. Out for a walk to see some ponies with the babes at Thorpness.
  • Back for lunch, and then to tackle the plumbing problems. Attacked an amazing amount of congealed fat in the sink plumbing - simply extraordinary; eventually had to cut the pipe-work out in sections and replace it with H's help.
  • Relaxed in the evening with some fine food. Couldn't sleep, hacked on socket code instead.


Dear Tumbleweed users and hackers,

This week we ‘only’ delivered 5 snapshots. But at least they were big ones, so that makes up for it. The review covers the snapshots {0211..0215}.

What did you receive

  • apparmor 2.11.0
  • KDE Plasma 5.9.1
  • KDE Applications 16.12.2
  • Linux Kernel 4.9.9
  • grep 2.28, with performance improvements
  • PackageKit-Qt: no more support for Qt4

In the staging areas, some work has been happening, but the usual suspects are still awaiting some love:

  • rpm 4.13.0 – the easy ones seem fixed; some obscure errors are left; mainly rpmlint seems to have trouble now
  • KDE Plasma 5.9.2
  • Mesa 17.0.0 (jumping up from 13.0)
  • Linux Kernel 4.9.10
  • glibc 2.25: still some failures left to tackle
  • util-linux will no longer pull in insserv for you. If your package makes use of it, you are now responsible for it
  • Libreoffice 5.3 – still fails the test suite on ppc64le
  • Python 3.6 – almost ready. The final piece is apparmor, where a fix is in the works / almost ready

As many will have noticed, the legal-auto bot is currently ‘much more reluctant’ to accept submissions. This is due to a total rewrite/restructuring of the legal process. See Stephan Kulow’s mail for more information.


Thursday
16 February, 2017


Michael Meeks: 2017-02-16 Thursday.

21:00 UTC

  • Poked at socket code a lot in the morning; drove to Aldeburgh with the family - listened to the Green Ring Conspiracy in the car. Chat with Philippe, ESC call. More hackery.


Today: update the update process!

Yesterday a colleague asked me if it would be possible to apply a driver update (DUD) to the rescue system. He wanted to use a new btrfsprogs package.

My immediate reaction was: no, you can’t do it. But then, there’s no technical reason why it shouldn’t be possible – it actually nearly works. The updates are downloaded as usual – just not applied to the rescue system.

So I thought: “Why not make a driver update so driver updates work also for the rescue system?”

Here’s how I did it.

First, let’s find out how driver updates are usually applied. The code is here:

https://github.com/openSUSE/installation-images/blob/master/data/root/etc/inst_setup#L84-L87

We need just these three lines:

for i in /update/[0-9]*/inst-sys ; do
  [ -d "$i" ] && adddir "$i" /
done

linuxrc downloads the driver updates and stores them in an /update directory. One (numbered) subdirectory for each update.

It obviously uses some adddir script. So we’ll need it as well. Luckily, it’s not far away:

https://github.com/openSUSE/installation-images/blob/master/data/root/etc/adddir

Next, we’ll have to find the spot where the rescue system is set up. It’s done in this script:

https://github.com/openSUSE/installation-images/blob/master/data/initrd/scripts/prepare_rescue

Let’s do some copy-and-paste programming and insert the above code near the end of the script. It then might look like this:

# driver update: add files to rescue system
if [ -d /mounts/initrd/update ] ; then
  cp -r /mounts/initrd/update /
  for i in /update/[0-9]*/inst-sys ; do
    [ -d "$i" ] && /mounts/initrd/scripts/adddir "$i" /
  done
fi

Some notes:

  • You have to know that prepare_rescue is run as the last thing before we exec to init. So everything is already in place, the left-over files from initrd are mounted at /mounts/initrd and will be removed at the end of the script.
  • This means we have to copy our updates into the new root directory, else they will be lost.
  • Also, we plan to make the adddir script available at /scripts/adddir by our driver update (see below).

Now let’s create the driver update:

mkdud --create dud_for_rescue.dud \
  --dist tw --dist leap42.1 --dist leap42.2 --dist sle12 \
  --name 'Apply DUD also to rescue system' \
  --exec 'cp adddir prepare_rescue /scripts' \
  adddir prepare_rescue

Here’s what this call does, line-by-line:

  • the fix works for all current SUSE distributions, so let’s support them
  • give the driver update some nice name
  • this command is run right after the driver update got loaded; we copy the scripts out of the driver update to their final location
  • add adddir and our modified prepare_rescue script

Here is the result: dud_for_rescue.dud.

Now, back to the original problem: how to use this to update a package in the rescue system? That’s easy:

mkdud --create new_btrfs.dud \
  --dist sle12 \
  dud_for_rescue.dud btrfsprogs.rpm

creates a driver update (for SLE12) that updates btrfsprogs also in the rescue system.



Two Linux kernels per week in openSUSE Tumbleweed is becoming the norm, as the rolling release provides daily snapshots of new software closely aligned with upstream development.

Kernels 4.9.8 and 4.9.9 were released in the 20170208 and 20170212 snapshots respectively, and the latter brought a fix for a Btrfs system call.

Besides the 4.9.8 kernel in the first week’s snapshot, 20170208, Mesa users will be happy to see version 13.0.4, which had a specfile fix for the build configuration on ARM, PowerPC and s390 architectures. Gimp 2.8.20 made the color selection of the paint tool more robust and updated translations for a number of European languages. Several other packages were updated in the repositories with this snapshot; python3-kiwi 9.0.2 and vim 8.0.311 provided the most fixes.

Snapshot 20170209 brought the first major release of libosinfo (operating system information database) to Tumbleweed with version 1.0.0, which focuses on metadata about operating systems and provides a single place to manage it in a virtualized environment. F Virtual Window Manager (FVWM) 2.6.7 added a handful of new features and removed several others, like GTK 1.x support.

Plasma 5.9.1 came in the 20170211 snapshot, and the AppArmor 2.11.0 update provided multiple improvements and fixes, one of which addressed an issue where kernel 4.8 and above affected AppArmor policy enforcement. libssh hackers made use of their time at FOSDEM and squashed bugs, which came in libssh 0.7.4.

Both the 20170213 and 20170214 snapshots provided updates for KDE Applications 16.12.2. GNU Compiler Collection 6.3.1 provides architectural fixes, and grub2 now has a Release Candidate 1, which came in the most recent snapshot.
