Federico Mena-Quintero

Librsvg gets Continuous Integration

One nice thing about gitlab.gnome.org is that we can now have Continuous Integration (CI) enabled for projects there. After every commit, the CI machinery can build the project, run the tests, and tell you if something goes wrong.

Carlos Soriano posted a "tips of the week" mail to desktop-devel-list, with a link to how Nautilus implements CI in Gitlab. It turns out that it's reasonably easy to set up: you just create a .gitlab-ci.yml file at the top level of your project, and that holds the configuration for what to run on every commit.

Of course, instead of reading the manual, I copied and pasted the file from Nautilus and just changed some things in it. There is a .yml linter so you can at least check the syntax before pushing a full job.
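A minimal configuration looks something like this. This is only a sketch: the image name, packages, and build commands here are illustrative, not the file I actually copied:

image: fedora:latest

build:
  script:
    # install the project's real build dependencies here
    - dnf install -y gcc make
    - ./autogen.sh
    - make check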

Then I read Robert Ancell's reply about how simple-scan builds its CI jobs on both Fedora and Ubuntu... and then the realization hit me:

This lets me CI librsvg on multiple distros at once. I've had trouble with slight differences in fontconfig/freetype in the past, and this would let me catch them early.
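For instance, the single job above could be duplicated per distribution, with only the image and the package-installation line differing (again a hedged sketch, not a real configuration):

build-fedora:
  image: fedora:latest
  script:
    - dnf install -y gcc make
    - ./autogen.sh && make check

build-ubuntu:
  image: ubuntu:rolling
  script:
    - apt-get update && apt-get install -y gcc make
    - ./autogen.sh && make check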

However, people on IRC advised against this, as we need more hardware to run CI on a large scale.

Linux distros have a vested interest in getting code out of gnome.org that works well. Surely they can give us some hardware?


Ceph Day Germany 2018


I'm glad to announce that there will be a Ceph Day on the 7th of February 2018 in Darmstadt. Deutsche Telekom will host the event. The day will start at 08:30 with registration and end around 17:45 with a one-hour networking reception.
We already have several very interesting presentations from SUSE, SAP, CERN, 42.com, Deutsche Telekom AG and Red Hat on the agenda, and more to come. If you have an interesting 15-45 minute presentation about Ceph, please contact me to discuss whether we can add it to the agenda. Presentation language should be German or English.

I would like to thank our current sponsors SUSE and Deutsche Telekom, and the Ceph Community, for their support. We are still in negotiation with potential sponsors and will hopefully announce them soon.

The agenda will be available here soon. You can register through this link. Stay tuned for updates! See you in Darmstadt!

2018w01-02: pkglistgen deployed, comment tests, baselibs.conf in adi, config generalization, and much more

package list wrapper scripts port and rewrite

Since the rewrite was merged before full testing, a series of small follow-ups (#1328, #1332, #1333) was required to bring everything into working order. Since then, the code has been happily deployed for Leap 15.0.

A few items remain in order to bring the SLE wrappers into the fold.

flesh out comment tests

As part of the long-term goal of increasing test coverage, a detailed set of tests was added to cover the code responsible for interacting with OBS comments. Additionally, the ReviewBot-specific comment code was also covered. The tests are a combination of unit and functional tests that utilize the local OBS instance brought up on travis-ci, for a net gain of around 1.5% coverage.

adi packages with baselibs.conf staged for all archs

Packages not contained within the ring projects are placed in adi stagings, which are only built for x86_64. For packages utilizing a baselibs.conf, such as wine, this is less than ideal since the imported packages are not built. For wine specifically, this causes the package to never pass the repo-checker, since the wine (x86_64) package requires the wine-32bit package, which is not built.

To solve the wine case and allow baselibs.conf to be tested in stagings, the adi process was enhanced to detect baselibs.conf and enable all archs from the target project (for Factory this includes i586). This handles both static baselibs.conf files and those generated by the spec file during the build.
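The detection step boils down to something like the following sketch. The function and parameter names are hypothetical, purely to illustrate the logic, not the actual staging plugin code:

def archs_for_adi_staging(request_package_files, target_project_archs):
    # Hypothetical helper: if any staged package ships a baselibs.conf,
    # enable every arch from the target project instead of x86_64 only.
    for files in request_package_files:
        if 'baselibs.conf' in files:
            return target_project_archs  # e.g. ['x86_64', 'i586'] for Factory
    return ['x86_64']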

config generalization

As part of an ongoing process to standardize the execution and configuration of the various tools, and thereby reduce management overhead, various flags have been migrated to be read from project configuration. This allows tools to be run without flags via standardized service files, while the config defaults can be managed centrally and overridden in the OBS remote config.
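The lookup order amounts to something like this sketch (the key name and structures here are hypothetical, purely to illustrate the precedence, not the actual osclib code):

# centrally managed defaults shipped with the tools
DEFAULTS = {'example-flag': 'False'}

def tool_flag(remote_config, key):
    # a value set in a project's OBS remote config wins over the default
    return remote_config.get(key, DEFAULTS.get(key))

tool_flag({}, 'example-flag')                        # 'False' (default)
tool_flag({'example-flag': 'True'}, 'example-flag')  # 'True' (override)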

The config system was expanded to support non-distribution projects, to allow tools to be run against devel projects like GNOME:Factory, which has long utilized factory-auto. This exposes the flags that were previously only available via the project config. Interestingly, the change had to be reverted and re-introduced with pattern ordering; it was only luck that the ordering issue had not been a problem prior to this point.

NonFree workflow

A discussion of the NonFree workflow was started regarding including such requests in the normal staging workflow. Since they are not staged, the new repo_checker cannot process the requests.

obs_clone improvements

The obs_clone tool clones data between OBS instances and is used to load a facsimile of Factory into a local OBS instance for testing. Without a large amount of specific project setup the various tools cannot function and thus cannot be tested. After a hiccup in Factory, the project had to be rebuilt by bootstrapping against a staging project. This created a circular dependency which the tool cannot currently handle, so a workaround was added to resolve the issue.

As part of the debugging process, a variety of improvements were made, furthering the goal of being able to run all the tools against the local instance.

staging-bot improvements

The quick staging strategy was improved to handle the new Leap 15.0 workflow and future-proof the implementation. In order to process requests as quickly as possible, requests that do not require human review (e.g. those coming from SLE, where they were already reviewed) are grouped together to avoid being blocked by other requests. The strategy has already been seen functioning correctly in production.

Towards the same goal, the special strategy, which is used to isolate important packages and/or those that routinely cause problems, was modified to be disablable. This is utilized for Leap, which generally sees requests from other sources that have already passed various processes and are less likely to cause issues. Rather than isolating such packages and taking up an entire staging, they are simply grouped normally.

last year

Some major work was completed last year at this time.

API request cache

In an effort to drastically improve the runtime of the various staging tools and bots, an HTTP cache was implemented that caches specific OBS API queries and expires them when the relevant OBS projects change. The result was a significant improvement, and in many cases removed over 90% of execution time once the caches were warmed up. Given the number of staging queries run in a day, this was a major quality of life improvement.
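The core idea, reduced to a heavily simplified sketch (the cache location, key scheme, and revision token are illustrative, not the actual osclib implementation):

import hashlib
import os
import urllib.request

CACHE_DIR = '/tmp/obs-api-cache'  # hypothetical location

def cached_get(url, project_revision):
    # Reuse the cached body for an API URL until the project it belongs
    # to changes, tracked here by a revision token baked into the key.
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha1('{}:{}'.format(project_revision, url).encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return f.read()
    body = urllib.request.urlopen(url).read()
    with open(path, 'wb') as f:
        f.write(body)
    return body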

ignored request pseudo-state

Another quality of life improvement was the introduction of an ignored request pseudo-state for use in the staging process. This allowed staging masters to place requests in a backlog with a message indicating the reason. Not only did this improve the workflow for communicating such things between staging masters, but it also paved the way for automated staging improvements since this could be used to exclude requests. The pseudo-state has since become an integral part of the workflow.

Ignored requests can be seen, along with their reason, in the list output (an example listing was included in the original post).

RequestSplitter

A major step forward was the creation of the RequestSplitter, which not only replaced the different request grouping implementations, but provided a vastly more powerful solution. Not only were the previous static grouping options available, but xpath filters and group-bys could be specified to create complex request groupings. Prior to this, such grouping had to be done manually via copy-paste from the list sub-command or other primitive means. Given that all ring packages must be staged manually using this process, any improvement helps.

Additionally, this made the select sub-command much more powerful, with a variety of new options and modes, including an --interactive mode which allows a proposal to be modified in a text editor. The combination provides a vastly superior experience and laid the groundwork for further automation via strategies (mentioned above and to be covered in a future post).
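Conceptually, the splitter filters the open requests and then groups them by some attribute, along the lines of this sketch (the data layout and function names are hypothetical, not the actual RequestSplitter code):

from itertools import groupby

def split_requests(requests, keep, key):
    # Hypothetical helper: filter requests, then group them by a key.
    selected = sorted((r for r in requests if keep(r)), key=key)
    return {k: list(g) for k, g in groupby(selected, key=key)}

requests = [
    {'id': 1, 'state': 'new', 'devel_project': 'devel:languages:python'},
    {'id': 2, 'state': 'new', 'devel_project': 'GNOME:Factory'},
    {'id': 3, 'state': 'review', 'devel_project': 'GNOME:Factory'},
]
groups = split_requests(requests,
                        keep=lambda r: r['state'] == 'new',
                        key=lambda r: r['devel_project'])
# groups: {'GNOME:Factory': [id 2], 'devel:languages:python': [id 1]}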

An example of interactive mode, which also includes the strategies introduced later, can be seen in the original post.

miscellaneous

A utility was created to print the list of devel projects for a given project. The tool was intended to eventually replace an expensive query with the cached version, which has since been done.

Yet another quality of life improvement was to indicate in the leaper review if the origin differs from the expected origin. This simple change made it possible to scan through large numbers of such reviews more quickly.

Frank Karlitschek

Nextcloud Talk is here

Today is a big day. The Nextcloud community is launching a new product and solution called Nextcloud Talk. It’s a full audio/video/chat communication solution which is self-hosted, open source and super easy to use and run. This is the result of over 1.5 years of planning and development.

For a long time it was clear to me that the next step for a file sync and share solution like Nextcloud is to have communication and collaboration features built into the same platform. You want to have a group chat with the people you have a group file share with. You want to have a video call with people while you are collaboratively editing a document. You want to call a person directly from within Nextcloud to collaborate and discuss a shared file, a calendar invite, an email or anything else. And you want to do this using the same login, the same contacts and the same server infrastructure and web interface.

So this is why we announced, at the very beginning of Nextcloud, that we would integrate the Spreed.ME WebRTC solution into Nextcloud. And this is what we did. But it became clear that what’s really needed is something that is fully integrated into Nextcloud, easy to run and has more features. So we did a full rewrite over the last 1.5 years. This is the result.

Nextcloud Talk can, with one click, be installed on every Nextcloud server. It contains a group chat feature so that people and teams can communicate and collaborate easily. It also has WebRTC video/voice call features, including screen sharing. This can be used for one-on-one calls, web meetings or even full webinars. This works in the Web UI, but the Nextcloud community also developed completely new Android and iOS apps, so it works great on mobile too. Thanks to push notifications, you can actually call someone directly on their phone, via Nextcloud or from another phone. So this is essentially a fully open source, self-hosted phone system integrated into Nextcloud. Meeting rooms can be public or private, and invites can be sent via the Nextcloud Calendar. All calls are done peer-to-peer and end-to-end encrypted.

So what are the differences with WhatsApp Calls, Threema, Signal Calls or the Facebook Messenger?
All parts of Nextcloud Talk are fully open source and it is self-hosted. So the signalling of the calls is done by your own Nextcloud server. This is unique. All the other mentioned solutions might be encrypted, which is hard to check if the source code is not open, but they all use one central signalling server. So the people who run the service know all the metadata: who is calling whom, when, how long and from where. This is not the case with Nextcloud Talk. No metadata is leaked. Another benefit is the full integration into all the other file sharing, communication, groupware and collaboration features of Nextcloud.

So when is it available? Version 1.0 is available today. The Nextcloud app can be installed with one click from within Nextcloud, but you need the latest Nextcloud 13 beta server for now. The Android and iOS apps are available in the Google and Apple app stores for free. This is only the first step, of course. So if you want to give feedback and contribute, then collaborate with the rest of the Nextcloud community.

More information can be found here https://apps.nextcloud.com/apps/spreed and here https://nextcloud.com/talk


What are the plans for the future?
There are still parts missing that are planned for future versions. We want to expose the chat feature via an XMPP-compatible API so that third-party chat apps can talk to a Nextcloud Talk server. And we will also integrate chat into our mobile apps. I hope that desktop chat apps also integrate this natively, for example on KDE and GNOME. This should be relatively easy because of the standard XMPP BOSH protocol. And the last important feature is call federation, so that you can call people on different Nextcloud Talk servers.

If you want to contribute then please join us here on github:
http://github.com/nextcloud/spreed
https://github.com/nextcloud/talk-ios
https://github.com/nextcloud/talk-android

Thanks a lot to everyone who made this happen. I’m proud that we have such a welcoming, creative and open atmosphere in the Nextcloud community so that such innovative new ideas can grow.

Federico Mena-Quintero

Loving Gitlab.gnome.org, and getting notifications

I'm loving gitlab.gnome.org. It has been only a couple of weeks since librsvg moved to gitlab, and I've already received and merged two merge requests. (Isn't it a bit weird that Github uses "pull request" and Everyone(tm) knows the PR acronym, but Gitlab uses "merge request"?)

Notifications about merge requests

One thing to note if your GNOME project has moved to Gitlab: if you want to get notified of incoming merge requests, you need to tell Gitlab that you want to "Watch" that project, instead of using one of the default notification settings. Thanks to Carlos Soriano for making me aware of this.

Notifications from Github's mirror

The github mirror of git.gnome.org is configured so that pull requests are automatically closed, since currently there is no way to notify the upstream maintainers when someone creates a pull request in the mirror (this is super-unfriendly, but at least submitters get notified that their PR will not be looked at by anyone by default).

If you have a Github account, you can Watch the project in question to get notified — the bot will close the pull request, but you will get notified, and then you can check it by hand, review it as appropriate, or redirect the submitter to gitlab.gnome.org instead.


LibreOffice Vanilla 5.4.4 released on the Mac App Store

Collabora has now released LibreOffice Vanilla 5.4.4 on the Mac App Store. It is built from the official LibreOffice 5.4.4 sources. If you have purchased LibreOffice Vanilla earlier from the App Store, it will be upgraded in the normally automatic manner of apps purchased from the App Store.

LibreOffice Vanilla from the Mac App Store is recommended to Mac users who want LibreOffice with the minimum amount of manual hassle with installation and upgrades. If you don't mind such hassle, by all means download and install the build from TDF instead.

We would have loved to continue to include a link to the TDF download site directly in the app's description, as we have promised, but we were not allowed to do that this time by Apple's reviewer.

Because of the restrictions on apps distributed in the App Store, features implemented in Java are not available in LibreOffice Vanilla. Those features are mainly the HSQLDB database engine in Base, and some wizards.

This time we include the localised help files, as there were some issues in accessing the on-line help.

Since the LibreOffice Vanilla 5.2 build that was made available in the Mac App Store in September 2016, there have been a few Mac-specific fixes, like the one related to landscape vs. portrait mode printing on Letter paper. There are more Mac-specific bugs in Bugzilla that will be investigated as resources permit.

Some fine-tuning to the code signing script has been necessary. For instance, one cannot include shell scripts in the Contents/MacOS subfolder of the application bundle when building for upload to the App Store. This is because the code signatures for such shell scripts would be stored as extended attributes and those won't survive the mechanism used to upload a build to the App Store for review and distribution. (For other non-binary files, in the Resources folder, signatures are stored in a separate file.)

We also have made sure the LibreOffice code builds with a current Xcode (and macOS SDK).


Why nobody speaks about dig

It is already 2 years since Ruby 2.3 was released. While the controversial &., which is claimed to allow writing incomprehensible code, has become really popular in blog posts and conferences, we have heard very little about the Hash#dig and Array#dig methods. Those methods were mentioned together in the release notes, as both try to make dealing with nil values easier. But why, then, is the dig method not that “popular”? Can we, after two years, say something new about it? And the most important part: should we start using it, if we haven’t used it until now? :thinking:

What is it?

First things first: what does the method do? It is normal in Ruby to have nested fields in hashes, for example in Rails parameters, and to need to ensure that a parameter exists before navigating to the next one. Normally we would do something like:

params[:user] && params[:user][:address] && params[:user][:address][:street] && params[:user][:address][:street][:number]

We have to admit that is not very elegant. But with the new Hash#dig we can just write:

params.dig(:user, :address, :street, :number)

So, as the Ruby documentation says, it “retrieves the value object corresponding to the each key objects repeatedly”. And similarly for the Array#dig method: we would write array.dig(0, 1, 1) instead of array[0][1][1].
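For instance, in irb (note how dig simply returns nil as soon as one step is missing, instead of raising):

array = [[1, [2, 3]]]
  => [[1, [2, 3]]]
array.dig(0, 1, 1)
  => 3
array.dig(1, 0)
  => nil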

What else can we say?

The first thing I wondered was whether they are really equivalent to what we previously had. For example:

params = { "user": { name: "Nicolas Cage", married: false } }
  => {:user=>{:name=>"Nicolas Cage", :married=>false}}

params[:user] && params[:user][:married] && params[:user][:married][:date]
  => false

params.dig(:user, :married, :date)
  => TypeError: FalseClass does not have #dig method
             from (irb):6:in `dig'
             from (irb):6
             from /usr/bin/irb.ruby2.4:11:in `<main>'

You can see in this example that our new method raised an exception where our old code used to work. But I would say that the fact that this works in the old case is unexpected and can cause us to miss bugs in our code. We wanted to return the date of the marriage, and we hadn’t gone that far into the nested hash, but we got a result anyway. It is even worse, because the first option could also raise an exception for a similar case, while Hash#dig keeps the same behaviour:

params = { "user": { name: "Nicolas Cage", married: true } }
  => {:user=>{:name=>"Nicolas Cage", :married=>true}}

params[:user] && params[:user][:married] && params[:user][:married][:date]
  => NoMethodError: undefined method `[]' for true:TrueClass
            from (irb):9
            from /usr/bin/irb.ruby2.4:11:in `<main>'
params.dig(:user, :married, :date)
  => TypeError: TrueClass does not have #dig method
             from (irb):10:in `dig'
             from (irb):10
             from /usr/bin/irb.ruby2.4:11:in `<main>'

And we can find even stranger cases. The method str[match_str] allows us to find curious examples when using strings as keys, such as:

params = { "user" => "Nicolas Cage" } => {"user"=>"Nicolas Cage"}

params.dig("user","age") => TypeError: String does not have #dig method

params["user"] && params["user"]["age"] => "age"

and the same happens when using integers as keys:

params.dig("user",1) => TypeError: String does not have #dig method

params["user"] && params["user"][1] => "i"

And arrays also suffer from these strange differences in the case of numbers:

array=["hola"] => ["hola"]

array.dig(0,1) => TypeError: String does not have #dig method

array[0] && array[0][1] => "o"


And how does try behave? It always returns nil, without letting us know how deep in the hash it failed:

params.try(:user).try(:married).try(:date)                                  
  => nil

try is coherent, but take into account that it is only available in Rails.

This all shows us that refactoring the code won’t be straightforward, as these three options, sometimes presented as equivalent, are not exactly equivalent.

As we have seen, these new methods can be really useful. But why, then, are they not popular?

The first reason is the one we have already elaborated on: they are not equivalent to what we had before, which implies that things could start failing if we refactor old code.

Another problem can be the lack of documentation, which we can see illustrated in the following Stack Overflow post: How do I use Array#dig and Hash#dig introduced in Ruby 2.3? :joy: And to be honest, even the release notes seem confusing to me. What is meant by “Array#dig and Hash#dig are also added. Note that this behaves like try! of Active Support, which specially handles only nil.”?

And another good reason this method has gone unnoticed is what I already mentioned at the beginning: the controversial &. has meant that almost nobody has noticed this beautiful method. And this just because we as humans tend to put more emphasis on complaints.

Conclusion

It seems that the Hash#dig and Array#dig methods can make our lives easier and help us to detect errors. The best way to increase their use, and to ensure we use them when possible in our projects, is to create a Rubocop cop which supports autocorrection. :grimacing: I already opened an issue for it, but the fact that both implementations are not equivalent means that autocorrection, or even the implementation of the cop, is not really possible. However, it seems that this was already implemented in salsify_rubocop. In this gem, Rubocop is extended with a new Dig cop. This cop enforces my_hash.dig('foo', 'bar') over my_hash['foo']['bar'] and my_hash['foo'] && my_hash['foo']['bar']. It also supports autocorrection.

Last but not least, I would like to share another blog post about Hash#dig, which discusses different topics from the ones here, such as the efficiency of Hash#dig: Ruby 2.3 dig Method - Thoughts and Examples.

And that was all. Start taking advantage of the already old Hash#dig and Array#dig methods, and maybe soon they will become as popular as they deserve! :wink:

Greg Kroah-Hartman

Meltdown and Spectre Linux Kernel Status

By now, everyone knows that something “big” just got announced regarding computer security. Heck, when the Daily Mail does a report on it, you know something is bad…

Anyway, I’m not going to go into the details about the problems being reported, other than to point you at the wonderfully written Project Zero paper on the issues involved here. They should just give out the 2018 Pwnie award right now, it’s that amazingly good.


Meltdown: Chip-level security bug found in Intel CPUs of the last decade

2018 starts with a big chip-level bug making headlines which Google researchers already found and reported in June 2017.

The reason it's currently heavily discussed in the media is that this is a significant chip-level security bug affecting all Intel (and possibly other manufacturers') CPUs of the last decade, and it therefore affects millions to billions of computers, including the huge cloud services from Google, Microsoft, Amazon and all others using Intel x86 CPUs of the last decade.

Briefly

...


2017w51-52: package list rewrite and repo_checker optimization

package list wrapper scripts port and rewrite

The package list generation, or pkglistgen, code is responsible for expanding the base package groups to the full list that is then packaged on the various installation media. Recently, the tool was rewritten, but was left with rough edges in the form of wrapper scripts around the core solving code. Much of the scripts was hard-coded and would benefit from a rewrite as well as a port to Python, in which the rest of the code lives.

During the port the code was restructured for readability, and the various hard-coded bits were replaced with proper variables and values loaded from their sources of truth. The final result is a much more flexible and future-proof solution. Leap 15.0 is actively using this code as part of the development workflow.

repo_checker build hash optimization follow-up

After the repo_checker was improved to store and compare build hashes to reduce rechecking, the expected side-effect was repeated comments during a target project rebuild (like after a checkin round), since the comment was always replaced if the build hash differed. The follow-up optimization was to avoid posting comments while the target project is rebuilding, unless the text of the comment changed. Once the target project completes rebuilding, the final build hash is posted and the repo_checker will stop rechecking until the build hash changes.

Combined with the build hash addition, this maintains the much improved cycle time of the repo_checker while avoiding notification spam and still picking up changes quickly. As such, new requests are still processed more quickly. Having a different mechanism to act as the source of truth for this information might be beneficial, but would introduce more complexity, since all the review bots store their state in request reviews or comments.
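Reduced to its essence, the decision looks something like this sketch (the function and its parameters are illustrative, not the actual repo_checker code):

def should_post_comment(current_hash, posted_hash, new_text, posted_text, rebuilding):
    # Hypothetical helper capturing the optimization described above.
    if current_hash == posted_hash:
        return False  # build unchanged: nothing new to report
    if rebuilding and new_text == posted_text:
        return False  # mid-rebuild: only post if the comment text changed
    return True       # rebuild finished or findings changed: post/update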

For an example of the volume and how this plays out, take a look at the summary from the pre-processing phase of a repo_checker cycle shown in the original post. Generally the not ready state will change to either accepted or build unchanged if there are issues that need to be resolved. Any of these three states can be skipped, and accepted is entirely ignored since that is the end state.

last year

Over the past year much of my time has been spent refactoring code to avoid duplication and different implementations of the same thing, and in addition simplifying the code where possible to make it easier to improve and maintain. The primary focus at the time was the ReviewBot-based code. The bots one interacts with on OBS as part of the distribution development workflow are all based on the ReviewBot base.