
Kohei Yoshida

LibreOffice Development Talk at Triangle C++ Developer’s Group

It was a pleasure to have been given an opportunity to talk about LibreOffice development the other day at the Triangle C++ Developer’s Group. Looking back, what we went through was a mixture of hardship, accomplishments, and learning experience intertwined in such a unique fashion. It was great to be able to talk about it and hopefully it was entertaining enough to those of you who decided to show up to my talk.

Here is a link to the slides I used during my talk.

Thanks again, everyone!

Edit: Here is a PDF version of my slides for those of you who don’t have a program that can open odp files.

Richard Brown

Creating openSUSE-style btrfs root partition & subvolumes

openSUSE’s YaST installer creates a detailed btrfs root filesystem configuration that has been designed to be flexible and secure while still efficient when used with tools like Snapper.

One of the overriding requirements is to provide a clearly defined ‘root filesystem’ containing everything we care about for ‘full system rollback’ (facilitated by snapper), while using subvolumes to exclude everything we do not want in the ‘root filesystem’, so snapper does not accidentally destroy user data when rolling back the system and its applications. Details of our default subvolume layout can be found on the openSUSE wiki.

However, this does lead to complications for some advanced users who wish to recreate this setup manually, such as when doing complex system recovery, custom automated provisioning, or other tinkering. (NOTE: for full system recovery it is often better to use a tool like ReaR.)

The following steps manually create an openSUSE-style btrfs root partition, and are believed to be correct at the time of writing (19 Jan 2018).

This guide should be valid for openSUSE Tumbleweed 20180117, openSUSE Leap 15, and SUSE Linux Enterprise 15 or later. However, care should be taken to double-check for new or removed subvolumes in *SUSE distributions as this document ages.

Older versions of SUSE distributions will need to adjust these instructions to handle the old /var/* subvolume layout previously used.

It should go without saying that this guide should only be followed by people who feel that they know what they are doing. It’s normally a lot easier to use openSUSE’s default tools like YaST and ReaR.

Step-by-Step

For this example we will use /dev/sda as our example disk and /dev/sda1 as our example partition for a btrfs root filesystem.

1: Create a partition table and the partition to be used as our root filesystem using your favourite tool (e.g. yast, parted, fdisk)

2: Format /dev/sda1 with a btrfs filesystem

mkfs.btrfs /dev/sda1

3: Mount the new partition somewhere so we can work on it. We’ll use /mnt in this example.

mount /dev/sda1 /mnt

4: Create the default subvolume layout (this assumes an Intel architecture; the /boot/grub2/* paths differ on other architectures)

btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@/.snapshots
mkdir /mnt/@/.snapshots/1
btrfs subvolume create /mnt/@/.snapshots/1/snapshot
mkdir -p /mnt/@/boot/grub2/
btrfs subvolume create /mnt/@/boot/grub2/i386-pc
btrfs subvolume create /mnt/@/boot/grub2/x86_64-efi
btrfs subvolume create /mnt/@/home
btrfs subvolume create /mnt/@/opt
btrfs subvolume create /mnt/@/root
btrfs subvolume create /mnt/@/srv
btrfs subvolume create /mnt/@/tmp
mkdir /mnt/@/usr/
btrfs subvolume create /mnt/@/usr/local
btrfs subvolume create /mnt/@/var

5: Disable copy-on-write for /var to improve the performance of any databases and VM images within it

chattr +C /mnt/@/var

6: Create the /mnt/@/.snapshots/1/info.xml file for snapper’s configuration, with the following content, replacing $DATE with the current system date/time.

<?xml version="1.0"?>
<snapshot>
  <type>single</type>
  <num>1</num>
  <date>$DATE</date>
  <description>first root filesystem</description>
</snapshot>
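The $DATE substitution can be scripted; here is a minimal sketch that prints the file content with the current UTC timestamp filled in, so it can be redirected into /mnt/@/.snapshots/1/info.xml. The date format shown matches what snapper itself writes, but it is worth comparing against an existing installation.

```shell
# Sketch: print the info.xml content with $DATE replaced by the current
# UTC timestamp; redirect the output into /mnt/@/.snapshots/1/info.xml.
DATE="$(date -u '+%Y-%m-%d %H:%M:%S')"
cat <<EOF
<?xml version="1.0"?>
<snapshot>
  <type>single</type>
  <num>1</num>
  <date>$DATE</date>
  <description>first root filesystem</description>
</snapshot>
EOF
```

For example: sh gen-info.sh > /mnt/@/.snapshots/1/info.xml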

7: Set snapshot 1 as the default snapshot for your root file system, unmount it, and remount it.

btrfs subvolume set-default $(btrfs subvolume list /mnt | grep "@/.snapshots/1/snapshot" | grep -oP '(?<=ID )[0-9]+') /mnt
umount /mnt
mount /dev/sda1 /mnt

8: You should be able to confirm the above worked by running ls /mnt, which should return an empty result.
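As an aside, the ID-extraction pipeline used in step 7 can be sanity-checked in isolation. The sample btrfs subvolume list line below is illustrative (your IDs and generation numbers will differ); the GNU grep lookbehind pulls out only the number that follows "ID ":

```shell
# Feed one illustrative `btrfs subvolume list` line through the step-7
# pipeline; only the number after "ID " is extracted (requires GNU grep -P).
line='ID 257 gen 9 top level 256 path @/.snapshots/1/snapshot'
echo "$line" | grep "@/.snapshots/1/snapshot" | grep -oP '(?<=ID )[0-9]+'
# → 257
```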

Congratulations, at this point the filesystem is ‘created’ with the correct structure. But you need to know how to mount it properly to make use of it.

9: You now need to create a skeleton of the filesystem on which to mount all of our subvolumes

mkdir /mnt/.snapshots
mkdir -p /mnt/boot/grub2/i386-pc
mkdir -p /mnt/boot/grub2/x86_64-efi
mkdir /mnt/home
mkdir /mnt/opt
mkdir /mnt/root
mkdir /mnt/srv
mkdir /mnt/tmp
mkdir -p /mnt/usr/local
mkdir /mnt/var

10: Mount all of the subvolumes

mount /dev/sda1 /mnt/.snapshots -o subvol=@/.snapshots
mount /dev/sda1 /mnt/boot/grub2/i386-pc -o subvol=@/boot/grub2/i386-pc
mount /dev/sda1 /mnt/boot/grub2/x86_64-efi -o subvol=@/boot/grub2/x86_64-efi
mount /dev/sda1 /mnt/home -o subvol=@/home
mount /dev/sda1 /mnt/opt -o subvol=@/opt
mount /dev/sda1 /mnt/root -o subvol=@/root
mount /dev/sda1 /mnt/srv -o subvol=@/srv
mount /dev/sda1 /mnt/tmp -o subvol=@/tmp
mount /dev/sda1 /mnt/usr/local -o subvol=@/usr/local
mount /dev/sda1 /mnt/var -o subvol=@/var

11: You’re done, you’ve now successfully created an openSUSE-style btrfs root filesystem structure and mounted it for use. You can now use it for whatever you’d like, such as the manual injection of files from an existing openSUSE installation.

Once populated, care should be taken to ensure that /mnt/etc/fstab also includes the appropriate entries for each of the subvolumes, except @/.snapshots/1/snapshot, which should not be mounted via fstab as it provides your initial installed system.
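As a rough sketch, those fstab entries could be generated with a loop like the one below. EXAMPLE-UUID is a placeholder, not a real value; substitute the output of blkid -s UUID -o value /dev/sda1.

```shell
# Sketch: print an fstab entry for each subvolume. EXAMPLE-UUID is a
# placeholder -- substitute the real filesystem UUID from `blkid /dev/sda1`.
UUID="EXAMPLE-UUID"
for sv in .snapshots boot/grub2/i386-pc boot/grub2/x86_64-efi \
          home opt root srv tmp usr/local var; do
  printf 'UUID=%s /%s btrfs subvol=@/%s 0 0\n' "$UUID" "$sv" "$sv"
done
```

Each output line looks like: UUID=EXAMPLE-UUID /home btrfs subvol=@/home 0 0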

Have a lot of fun

Greg Kroah-Hartman

Meltdown and Spectre Linux Kernel Status - Update

I keep getting a lot of private emails about my previous post about the latest status of the Linux kernel patches to resolve both the Meltdown and Spectre issues.

These questions all seem to break down into two different categories: “What is the state of the Spectre kernel patches?” and “Is my machine vulnerable?”

State of the kernel patches

As always, lwn.net covers the technical details about the latest state of the kernel patches to resolve the Spectre issues, so please go read that to find out that type of information.


Fun with Rust (not spinning this time)

Rust... took me a while to install. I decided I did not like curl | sh, so I created a fresh VM for it. That took a while, and in the end I ran curl | sh anyway. I coded the weather forecast core in Rust... and I feel like every second line needs an explicit typecast. Not nice, but ok; the result will be fast, right? Rust: 6m45s, Python: less than 1m7s. Ouch. Ok, Rust really needs optimizations to get anywhere near reasonable run-time speed. 7 seconds optimized. Compile time is... 4 seconds for 450 lines of code. Hmm. Not great... but I guess better than the alternatives.

Hey Intel, what about an apology?

Hey, Intel. You were selling faulty CPUs for 15+ years, you are still selling faulty CPUs, and there are no signs you even intend to fix them. You sold faulty CPUs for half a year knowing they were faulty, without telling your customers. You helped develop band-aids for a subset of problems, and a subset of configurations. Yeah, so there's a workaround for Meltdown on 64-bit Linux. Where's the workaround for Meltdown on 32-bit? What about the BSDs? MINIX? L4? Where are the workarounds for Spectre? And more importantly -- where are the real fixes? You know, your CPUs fail to do security checks in time. Somehow I think that maybe you should fix your CPUs? I hear you want to achieve “quantum supremacy”. But maybe I'd like to hear how you intend to fix the mess you created, first? I actually started creating a workaround for x86-32, but I somehow feel I should not be the one fixing this. I'm willing to test the patches...

(And yes, Spectre is an industry-wide problem. Meltdown, though -- that one you screwed up yourselves.)

Federico Mena-Quintero

Help needed for librsvg 2.42.1

Would you like to help fix a couple of bugs in librsvg, in preparation for the 2.42.1 release?

I have prepared a list of bugs which I'd like to be fixed in the 2.42.1 milestone. Two of them are assigned to myself, as I'm already working on them.

There are two other bugs which I'd love someone to look at. Neither of these requires deep knowledge of librsvg, just some debugging and code-writing:

  • Bug 141 - GNOME's thumbnailing machinery creates an icon which has the wrong fill: it's an image of a builder's trowel, and the inside is filled black instead of with a nice gradient. This is the only place in librsvg where a cairo_surface_t is converted to a GdkPixbuf; this involves unpremultiplying the alpha channel. Maybe the relevant function is buggy?

  • Bug 136: The stroke-dasharray attribute in SVG elements is parsed incorrectly. It is a list of CSS length values, separated by commas or spaces. Currently librsvg uses a shitty parser based on g_strsplit() only for commas; it doesn't allow just a space-separated list. Then, it uses g_ascii_strtod() to parse plain numbers; it doesn't support CSS lengths generically. This parser needs to be rewritten in Rust; we already have machinery there to parse CSS length values properly.

Feel free to contact me by mail, or write something in the bugs themselves, if you would like to work on them. I'll happily guide you through the code :)

Jos Poortvliet

Nasty fall-out from Spectre and Meltdown

I guess it's hard to miss Spectre and Meltdown, so you have probably read about them. And there's more bad news than what has been widely reported, it seems.

You trust the cloud? HAHAHAHA

What surprised me a little was how few journalists paid attention to the fact that Meltdown in particular breaks the isolation between containers and virtual machines - making it quite dangerous to run your code in places like Amazon's cloud. Meltdown means: anything you have run on Amazon's cloud or the competing clouds from Google and Microsoft has been exposed to other code running on the same systems.

And storage isn't per se safe either, as the systems handling the storage might also be used for running apps from other customers - who could thus have gotten at that data. I wrote a bit more about this in an opinion post for Nextcloud.

We don't know if any breaches happened, of course. We also don't know that they didn't.

That's one of my main issues with the big public cloud providers: we KNOW they hide breaches from us. All the time. For YEARS. Yahoo was a particularly nasty case, but was it really such an outlier? Uber hid data stolen from 57 million users for a year, which only came out last November.

That is particularly annoying if you're legally obliged to report security breaches to the users they affect, or to your government. Which is, by the way, the case in more and more countries. You effectively can't do that if you put any data in a public cloud...

Considering that the Intel CEO sold the maximum allowed amount of stock just last November, forgive me if I have little trust in the ethical standards at that company, or any other for that matter. (Oh, and if you thought that stock sale was just routine, no: it was flagged as interesting BEFORE Meltdown & Spectre became public.)

So no, there's no reason to just take these guys (and girls) at their word, whatever the color of their eyes. None whatsoever.

Vendors screwed up a fair bit. More to come?

But there's more. GregKH, the unofficial number two in Linux kernel development, blogged about what to do wrt Meltdown/Spectre, and he shared an interesting nugget of information:
We had no real information on exactly what the Spectre problem was at all
Wait. What? So the guys who had to fix the infrastructure for EVERY public and private cloud and home computer and everything else out there had... no... idea?

Yep. Golem.de notes (in German) that the coordination around Meltdown didn't take place over the usual closed kernel security mailing list; instead, distributions created their own patches. The cleanup of the resulting mess is ongoing and might take a few more weeks. Oh, and some issues around Meltdown & Spectre might not be fixable at all.

But I'm mostly curious to find out what went wrong in the communication, such that the folks who were supposed to write the code to protect us didn't know what the problem was. Because that just seems a little crazy to me. Just a little.
Federico Mena-Quintero

Librsvg gets Continuous Integration

One nice thing about gitlab.gnome.org is that we can now have Continuous Integration (CI) enabled for projects there. After every commit, the CI machinery can build the project, run the tests, and tell you if something goes wrong.

Carlos Soriano posted a "tips of the week" mail to desktop-devel-list, and a link to how Nautilus implements CI in Gitlab. It turns out that it's reasonably easy to set up: you just create a .gitlab-ci.yml file in the toplevel of your project, and that has the configuration for what to run on every commit.

Of course, instead of reading the manual, I copied and pasted the file from Nautilus and just changed some things in it. There is a .yml linter, so you can at least check the syntax before pushing a full job.

Then I read Robert Ancell's reply about how simple-scan builds its CI jobs on both Fedora and Ubuntu... and then the realization hit me:

This lets me CI librsvg on multiple distros at once. I've had trouble with slight differences in fontconfig/freetype in the past, and this would let me catch them early.

However, people on IRC advised against this, as we need more hardware to run CI on a large scale.

Linux distros have a vested interest in getting code out of gnome.org that works well. Surely they can give us some hardware?


Ceph Day Germany 2018


I'm glad to announce that there will be a Ceph Day on the 7th of February 2018 in Darmstadt. Deutsche Telekom will host the event. The day will start at 08:30 with registration and end around 17:45 with a one-hour networking reception.
We already have several very interesting presentations from SUSE, SAP, CERN, 42.com, Deutsche Telekom AG and Red Hat on the agenda, and more to come. If you have an interesting 15-45 minute presentation about Ceph, please contact me to discuss whether we can add it to the agenda. The presentation language should be German or English.

I would like to thank our current sponsors, SUSE and Deutsche Telekom, and the Ceph community for their support. We are still in negotiation with potential sponsors and will hopefully announce them soon.

The agenda will be available here soon. You can register through this link. Stay tuned for updates! See you in Darmstadt!

2018w01-02: pkglistgen deployed, comment tests, baselibs.conf in adi, config generalization, and much more

package list wrapper scripts port and rewrite

After the rewrite was merged (before full testing), a series of small follow-ups (#1328, #1332, #1333) was required to bring everything into working order. After that, the code was happily deployed for Leap 15.0.

A few items remain in order to bring the SLE wrappers into the fold.

flesh out comment tests

As part of the long-term goal of increasing test coverage, a detailed set of tests was added to cover the code responsible for interacting with OBS comments. Additionally, the ReviewBot-specific comment code was also covered. The tests are a combination of unit and functional tests that utilize the local OBS instance brought up on travis-ci, for a net gain of around 1.5% coverage.

adi packages with baselibs.conf staged for all archs

Packages not contained within the ring projects are placed in adi stagings, which are only built for x86_64. For packages utilizing a baselibs.conf, such as wine, this is less than ideal since the imported packages are not built. For wine specifically, this causes the package to never pass the repo-checker, since the wine (x86_64) package requires the wine-32bit package, which is not built.

To solve the wine case and allow baselibs.conf to be tested in stagings, the adi process was enhanced to detect baselibs.conf and enable all archs from the target project (for Factory this includes i586). This handles both static baselibs.conf files and those generated by the spec file.

config generalization

As part of an ongoing process to standardize the execution and configuration of the various tools and reduce the management overhead, various flags have been migrated to be read from the project configuration. This allows tools to be run without flags via standardized service files, while the config defaults can be managed centrally and overridden in the OBS remote config.

The config system was expanded to support non-distribution projects, to allow tools to be run against devel projects like GNOME:Factory, which has long utilized factory-auto. This exposes the flags that were previously only available via the project config. Interestingly, the change had to be reverted and re-introduced with pattern ordering; it had been pure luck that the issue had not caused a problem prior to this point.

NonFree workflow

A discussion was started about including NonFree requests in the normal staging workflow. As a consequence of not being staged, the new repo_checker cannot process these requests.

obs_clone improvements

The obs_clone tool clones data between OBS instances, and is used to load a facsimile of Factory into a local OBS instance for testing. Without a large amount of specific project setup, the various tools cannot function and thus cannot be tested. After a hiccup in Factory, the project had to be rebuilt by bootstrapping against a staging project. This caused a circular dependency which the tool could not handle, so a workaround was added to resolve the issue.

As part of the debugging process a variety of improvements were made furthering the goal of being able to run all the tools against the local instance.

staging-bot improvements

The quick staging strategy was improved to handle the new Leap 15.0 workflow and future-proof the implementation. In order to process requests as quickly as possible, the requests that do not require human review (e.g. those coming from SLE, where they were already reviewed) are grouped together to avoid being blocked by other requests. The strategy has already been seen functioning correctly in production.

Towards the same goal, the special strategy, which is used to isolate important packages and/or those that routinely cause problems, was modified to be disablable. This is utilized for Leap, which generally sees requests from other sources that have already passed various processes and are less likely to cause issues. Rather than isolating such packages and taking up an entire staging, they are simply grouped normally.

last year

Some major work was completed last year at this time.

API request cache

In an effort to drastically improve the runtime of the various staging tools and bots an HTTP cache was implemented for caching specific OBS API queries and expiring them when the OBS projects were changed. The result was a significant improvement and in many cases removed over 90% of execution time once the caches were warmed up. Given the number of staging queries run in a day this was a major quality of life improvement.

ignored request pseudo-state

Another quality of life improvement was the introduction of an ignored request pseudo-state for use in the staging process. This allowed staging masters to place requests in a backlog with a message indicating the reason. Not only did this improve the workflow for communicating such things between staging masters, but it also paved the way for automated staging improvements since this could be used to exclude requests. The pseudo-state has since become an integral part of the workflow.

Ignored requests can be seen, along with their reason, in the list output.

RequestSplitter

A major step forward was the creation of the RequestSplitter, which not only replaced the different request grouping implementations, but provided a vastly more powerful solution. Not only were the previous static grouping options available, but xpath filters and group-bys could be specified to create complex request groupings. Prior to this, such grouping had to be done manually via copy-paste from the list sub-command or other primitive means. Given that all ring packages must be staged manually using this process, any improvement helps.

Additionally, this made the select sub-command much more powerful, with a variety of new options and modes, including an --interactive mode which allows a proposal to be modified in a text editor. The combination provides a vastly superior experience and laid the groundwork for further automation via strategies (mentioned above and to be covered in a future post).

An example of interactive mode, which also includes the strategies introduced later, can be seen below.

miscellaneous

A utility was created to print the list of devel projects for a given project. The tool was intended to eventually replace the expensive query with the cached version, which has since been done.

Yet another quality of life improvement was to indicate when the origin differs from the expected origin in the leaper review. This simple change made it quicker to scan through large numbers of such reviews.