Help needed for librsvg 2.42.1
Would you like to help fix a couple of bugs in librsvg, in preparation for the 2.42.1 release?
I have prepared a list of bugs which I'd like to be fixed in the 2.42.1 milestone. Two of them are assigned to myself, as I'm already working on them.
There are two other bugs which I'd love someone to look at. Neither of these requires deep knowledge of librsvg, just some debugging and code-writing:
- Bug 141: GNOME's thumbnailing machinery creates an icon which has the wrong fill: it's an image of a builder's trowel, and the inside is filled black instead of with a nice gradient. This is the only place in librsvg where a cairo_surface_t is converted to a GdkPixbuf; this involves unpremultiplying the alpha channel. Maybe the relevant function is buggy?
- Bug 136: The stroke-dasharray attribute in SVG elements is parsed incorrectly. It is a list of CSS length values, separated by commas or spaces. Currently librsvg uses a shitty parser based on g_strsplit(), which only splits on commas; it doesn't allow a space-separated list. Then it uses g_ascii_strtod() to parse plain numbers, so it doesn't support CSS lengths generically. This parser needs to be rewritten in Rust; we already have machinery there to parse CSS length values properly.
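For context on bug 141, unpremultiplication is simple per-pixel arithmetic. Here is a hedged sketch (in Ruby purely for illustration; librsvg itself is C/Rust, and the function name is made up) of what converting one premultiplied 8-bit ARGB pixel back to straight alpha looks like:

```ruby
# Hypothetical illustration, not librsvg's actual code: undo alpha
# premultiplication for one 8-bit ARGB pixel. Cairo stores color
# channels already multiplied by alpha; GdkPixbuf wants them straight.
def unpremultiply(a, r, g, b)
  return [0, 0, 0, 0] if a.zero?  # fully transparent: nothing to recover
  [a, r * 255 / a, g * 255 / a, b * 255 / a]
end
```

For example, unpremultiply(128, 64, 0, 0) recovers [128, 127, 0, 0]. A bug anywhere in this step could plausibly produce the kind of wrong fill described above.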
Feel free to contact me by mail, or write something in the bugs themselves, if you would like to work on them. I'll happily guide you through the code :)
Nasty fall-out from Spectre and Meltdown
You trust the cloud? HAHAHAHA
What surprised me a little was how few journalists paid attention to the fact that Meltdown in particular breaks the isolation between containers and virtual machines, making it quite dangerous to run your code in places like Amazon EC2. Meltdown means: anything you have run on Amazon's cloud, or on the competing clouds from Google and Microsoft, has been exposed to other code running on the same systems. And storage isn't per se safe either, as the systems handling the storage might also be used for running apps from other customers, who could then have gotten at that data. I wrote a bit more about this in an opinion post for Nextcloud.
We don't know if any breaches happened, of course. We also don't know that they didn't.
That's one of my main issues with the big public cloud providers: we KNOW they hide breaches from us. All the time. For YEARS. Yahoo was a particularly nasty case, but was it really such an outlier? Uber hid data stolen from 57 million users for a year; that only came out last November.
Particularly annoying if you're legally obliged to report security breaches to the affected users, or to your government. That is, by the way, the case in more and more countries. You effectively can't do that if you put any data in a public cloud...
Considering that the Intel CEO sold the maximum allowed amount of stock just last November, forgive me if I have little trust in the ethical standards at that company, or any other for that matter. (Oh, and if you thought the CEO's stock sale was just routine, no: it was flagged as interesting BEFORE Meltdown & Spectre became public.)
So no, there's no reason to take these guys (and girls) at their word. None whatsoever.
Vendors screwed up a fair bit. More to come?
But there's more. Greg KH, the unofficial number two in Linux kernel development, blogged about what to do w.r.t. Meltdown/Spectre and shared an interesting nugget of information: "We had no real information on exactly what the Spectre problem was at all." Wait. What? So the people who had to fix the infrastructure for EVERY public and private cloud, home computer, and everything else out there had... no... idea?
Yep. Golem.de notes (in German) that the coordination around Meltdown didn't take place over the usual closed kernel security mailing list; instead, distributions created their own patches. The cleanup of the resulting mess is ongoing and might take a few more weeks. Oh, and some issues regarding Meltdown & Spectre might not be fixable at all.
But I'm mostly curious to find out what went wrong in the communication, such that the folks who were supposed to write the code to protect us didn't know what the problem was. Because that just seems a little crazy to me. Just a little.
Librsvg gets Continuous Integration
One nice thing about gitlab.gnome.org is that we can now have
Continuous Integration (CI) enabled for projects there. After every
commit, the CI machinery can build the project, run the tests, and
tell you if something goes wrong.
Carlos Soriano posted a "tips of the
week" mail to desktop-devel-list, and a link to how Nautilus
implements CI in Gitlab. It turns out that it's reasonably easy to
set up: you just create a .gitlab-ci.yml file in the
toplevel of your project, and that has the configuration for what to
run on every commit.
Of course, instead of reading the manual, I copy-pasted the file from Nautilus and just changed some things in it. There is a .yml linter, so you can at least check the syntax before pushing a full job.
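For illustration, a minimal .gitlab-ci.yml in that spirit might look like the following; the image and the build commands are assumptions for a generic autotools project, not librsvg's real file:

```yaml
# Hypothetical minimal CI configuration, not librsvg's actual file.
image: fedora:27

build:
  script:
    - dnf install -y gcc make   # plus the project's real dependencies
    - ./autogen.sh
    - make
    - make check
```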
Then I read Robert Ancell's reply about how simple-scan builds its CI jobs on both Fedora and Ubuntu... and then the realization hit me:
This lets me CI librsvg on multiple distros at once. I've had trouble with slight differences in fontconfig/freetype in the past, and this would let me catch them early.
However, people on IRC advised against this, as we need more hardware to run CI on a large scale.
Linux distros have a vested interest in getting code out of gnome.org that works well. Surely they can give us some hardware?
Ceph Day Germany 2018

2018w01-02: pkglistgen deployed, comment tests, baselibs.conf in adi, config generalization, and much more
package list wrapper scripts port and rewrite
After the rewrite was merged (before full testing), a series of small follow-ups (#1328, #1332, #1333) were required to bring everything into working order. Since then, the code has been happily deployed for Leap 15.0.
A few items remain in order to bring the SLE wrappers into the fold.
flesh out comment tests
As part of the long-term goal of increasing test coverage, a detailed set of tests was added to cover the code responsible for interacting with OBS comments. Additionally, the ReviewBot-specific comment code was also covered. The tests are a combination of unit and functional tests that utilize the local OBS instance brought up on travis-ci, for a net gain of around 1.5% coverage.
adi packages with baselibs.conf staged for all archs
Packages not contained within the ring projects are placed in adi stagings, which are only built for x86_64. For packages utilizing a baselibs.conf, such as wine, this is less than ideal, since the imported packages are not built. For wine specifically, this causes the package to never pass the repo-checker, since the wine (x86_64) package requires the wine-32bit package, which is not built.
To solve the wine case and allow baselibs.conf to be tested in stagings, the adi process was enhanced to detect baselibs.conf and enable all archs from the target project (for Factory this includes i586). This handles both static baselibs.conf files and those generated by the spec file.
config generalization
As part of an ongoing process to standardize the execution and configuration of the various tools, and to reduce the management overhead, various flags have been migrated to be read from project configuration. This allows tools to be run without flags via standardized service files, while the config defaults can be managed centrally and overridden in OBS remote config.
The config system was expanded to support non-distribution projects, to allow tools to be run against devel projects like GNOME:Factory, which has long utilized factory-auto. This exposes the flags that were previously only available via the project config. Interestingly, the change had to be reverted and re-introduced with pattern ordering; it was only luck that the issue had not been a problem prior to this point.
NonFree workflow
A discussion of the NonFree workflow was started, regarding including such requests in the normal staging workflow. As a consequence of not being staged, the new repo_checker cannot process these requests.
obs_clone improvements
The obs_clone tool clones data between OBS instances, and is used to load a facsimile of Factory into a local OBS instance for testing. Without a large amount of project-specific setup, the various tools cannot function and thus cannot be tested. After a hiccup in Factory, the project had to be rebuilt by bootstrapping against a staging project. This caused a circular dependency which the tool could not handle, so a workaround was added to resolve the issue.
As part of the debugging process a variety of improvements were made furthering the goal of being able to run all the tools against the local instance.
staging-bot improvements
The quick staging strategy was improved to handle the new Leap 15.0 workflow and future-proof the implementation. In order to process requests as quickly as possible, requests that do not require human review (e.g. those coming from SLE, where they were already reviewed) are grouped together to avoid being blocked by other requests. The strategy has already been seen functioning correctly in production.
Towards the same goal, the special strategy, which is used to isolate important packages and/or those that routinely cause problems, was modified so that it can be disabled. This is utilized for Leap, which generally sees requests from other sources that have already passed various review processes and are less likely to cause issues. Rather than isolating such packages and taking up an entire staging, they are simply grouped normally.
last year
Some major work was completed last year at this time.
API request cache
In an effort to drastically improve the runtime of the various staging tools and bots, an HTTP cache was implemented, caching specific OBS API queries and expiring them when the OBS projects changed. The result was a significant improvement; in many cases it removed over 90% of execution time once the caches were warmed up. Given the number of staging queries run in a day, this was a major quality of life improvement.
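The idea behind that cache can be sketched roughly like this (a hypothetical Ruby illustration, not the actual implementation; the class and method names are made up):

```ruby
# Hypothetical sketch of the caching idea, not the real implementation:
# memoize API responses per project, and drop all entries for a project
# when that project is known to have changed.
class ApiCache
  def initialize
    @store = Hash.new { |h, k| h[k] = {} }  # project => { url => response }
  end

  # Return the cached response for (project, url), or compute and store it.
  # (A nil/false response would be re-fetched in this naive version.)
  def fetch(project, url)
    @store[project][url] ||= yield
  end

  # Invalidate everything cached for a project after it changes.
  def invalidate(project)
    @store.delete(project)
  end
end
```

With something like cache.fetch('openSUSE:Factory', '/status') { expensive_api_call }, only the first call pays the API cost, which matches the reported ~90% runtime reduction on warm caches.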
ignored request pseudo-state
Another quality of life improvement was the introduction of an ignored request pseudo-state for use in the staging process. This allowed staging masters to place requests in a backlog with a message indicating the reason. Not only did this improve the workflow for communicating such things between staging masters, but it also paved the way for automated staging improvements since this could be used to exclude requests. The pseudo-state has since become an integral part of the workflow.
Ignored requests can be seen, along with their reason, in the list sub-command output.
RequestSplitter
A major step forward was the creation of the RequestSplitter, which not only replaced the different request grouping implementations, but provided a vastly more powerful solution. Not only were the previous static grouping options available, but xpath filters and group-bys could be specified to create complex request groupings. Prior to this, such grouping had to be done manually via copy-paste from the list sub-command or other primitive means. Given that all ring packages must be staged manually using this process, any improvement helps.
Additionally, this made the select sub-command much more powerful, with a variety of new options and modes, including an --interactive mode which allows a proposal to be modified in a text editor. The combination provides a vastly superior experience and laid the groundwork for further automation via strategies (mentioned above, and to be covered in a future post).
An example of interactive mode, which also includes the strategies introduced later, can be seen below.
miscellaneous
A utility was created to print the list of devel projects for a given project. This tool was intended to eventually replace the expensive query with the cached version, which has since been done.
Yet another quality of life improvement was to indicate, in the leaper review, when a request's origin differs from the expected origin. This simple change made it possible to scan through large numbers of such reviews more quickly.
Nextcloud Talk is here
Today is a big day. The Nextcloud community is launching a new product and solution called Nextcloud Talk. It’s a full audio/video/chat communication solution which is self-hosted, open source and super easy to use and run. This is the result of over 1.5 years of planning and development.
For a long time it was clear to me that the next step for a file sync and share solution like Nextcloud is to have communication and collaboration features built into the same platform. You want to have a group chat with the people you have a group file share with. You want to have a video call with people while you are collaboratively editing a document. You want to call a person directly from within Nextcloud to collaborate and discuss a shared file, a calendar invite, an email or anything else. And you want to do this using the same login, the same contacts and the same server infrastructure and web interface.
So this is why we announced, at the very beginning of Nextcloud, that we would integrate the Spreed.ME WebRTC solution into Nextcloud. And this is what we did. But it became clear that what's really needed is something fully integrated into Nextcloud, easy to run, and with more features. So we did a full rewrite over the last 1.5 years. This is the result.
Nextcloud Talk can be installed on every Nextcloud server with one click. It contains a group chat feature so that people and teams can communicate and collaborate easily. It also has WebRTC video/voice call features, including screen sharing. This can be used for one-on-one calls, web meetings or even full webinars. This works in the web UI, but the Nextcloud community also developed completely new Android and iOS apps, so it works great on mobile too. Thanks to push notifications, you can actually call someone directly on their phone via Nextcloud. So this is essentially a fully open source, self-hosted phone system integrated into Nextcloud. Meeting rooms can be public or private, and invites can be sent via the Nextcloud Calendar. All calls are done peer-to-peer and end-to-end encrypted.
So what are the differences with WhatsApp Calls, Threema, Signal Calls or the Facebook Messenger?
All parts of Nextcloud Talk are fully open source, and it is self-hosted. So the signalling of the calls is done by your own Nextcloud server. This is unique. All the other solutions mentioned might be encrypted, which is hard to verify if the source code is not open, but they all use one central signalling server. So the people who run the service know all the metadata: who is calling whom, when, for how long, and from where. This is not the case with Nextcloud Talk. No metadata is leaked. Another benefit is the full integration with all the other file sharing, communication, groupware and collaboration features of Nextcloud.
So when is it available? Version 1.0 is available today. The Nextcloud app can be installed with one click from within Nextcloud, but you need the latest Nextcloud 13 beta server for now. The Android and iOS apps are available in the Google and Apple app stores for free. This is only the first step, of course. So if you want to give feedback and contribute, collaborate with the rest of the Nextcloud community.
More information can be found here https://apps.nextcloud.com/apps/spreed and here https://nextcloud.com/talk
What are the plans for the future?
There are still parts missing that are planned for future versions. We want to expose the chat feature via an XMPP-compatible API so that third-party chat apps can talk to a Nextcloud Talk server. And we will also integrate chat into our mobile apps. I hope that desktop chat apps, for example on KDE and GNOME, also integrate this natively. This should be relatively easy thanks to the standard XMPP BOSH protocol. And the last important feature is call federation, so that you can call people on different Nextcloud Talk servers.
If you want to contribute then please join us here on github:
http://github.com/nextcloud/spreed
https://github.com/nextcloud/talk-ios
https://github.com/nextcloud/talk-android
Thanks a lot to everyone who made this happen. I’m proud that we have such a welcoming, creative and open atmosphere in the Nextcloud community so that such innovative new ideas can grow.
Loving Gitlab.gnome.org, and getting notifications
I'm loving gitlab.gnome.org. It has been only a couple of
weeks since librsvg moved to gitlab, and I've
already received and merged two merge requests. (Isn't it a bit
weird that Github uses "pull request" and Everyone(tm) knows the PR
acronym, but Gitlab uses "merge request"?)
Notifications about merge requests
One thing to note if your GNOME project has moved to Gitlab: if you want to get notified of incoming merge requests, you need to tell Gitlab that you want to "Watch" that project, instead of using one of the default notification settings. Thanks to Carlos Soriano for making me aware of this.
Notifications from Github's mirror
The github mirror of git.gnome.org is configured so that pull requests are automatically closed, since currently there is no way to notify the upstream maintainers when someone creates a pull request in the mirror. This is super-unfriendly, but at least submitters get notified that their PR will not be looked at by anyone.
If you have a Github account, you can Watch the project in question to get notified — the bot will close the pull request, but you will get notified, and then you can check it by hand, review it as appropriate, or redirect the submitter to gitlab.gnome.org instead.
LibreOffice Vanilla 5.4.4 released on the Mac App Store
Why nobody speaks about dig
It has already been 2 years since Ruby 2.3 was released. While the controversial &., which some claim allows writing incomprehensible code, has become really popular in blog posts and conferences, we have heard very little about the Hash#dig and Array#dig methods. Those methods were mentioned together in the release notes, as both try to make dealing with nil values easier. But why, then, is the dig method not that “popular”? Can we, after two years, say something new about it? And most importantly, should we start using it if we haven’t used it until now? :thinking:
What is it?
First things first: what does the method do? It is normal in Ruby to have nested fields in hashes, for example in Rails parameters, and to need to ensure that a parameter exists before navigating to the next one. Normally we would do something like:
params[:user] && params[:user][:address] && params[:user][:address][:street] && params[:user][:address][:street][:number]
We have to admit that is not very elegant. But with the new Hash#dig we can just write:
params.dig(:user, :address, :street, :number)
So, as the Ruby documentation says, it retrieves the value object corresponding to each of the key objects, repeatedly. The same goes for the Array#dig method: we would write array.dig(0, 1, 1) instead of array[0][1][1].
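To make the behaviour concrete, here is a small example (the hash contents are invented for illustration):

```ruby
params = { user: { address: { street: { number: 42 } } } }

params.dig(:user, :address, :street, :number)  # => 42
params.dig(:user, :phone, :number)             # => nil (dig stops at the missing key)

# The naive chain raises instead, because params[:user][:phone] is nil:
begin
  params[:user][:phone][:number]
rescue NoMethodError => e
  e.class  # => NoMethodError
end
```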
What else can we say?
The first thing I wondered is whether they are really equivalent to what we previously had. For example:
params = { "user": { name: "Nicolas Cage", married: false } }
=> {:user=>{:name=>"Nicolas Cage", :married=>false}}
params[:user] && params[:user][:married] && params[:user][:married][:date]
=> false
params.dig(:user, :married, :date)
=> TypeError: FalseClass does not have #dig method
from (irb):6:in `dig'
from (irb):6
from /usr/bin/irb.ruby2.4:11:in `<main>'
You can see in this example that our new method raises an exception where our old code used to work. But I would say that the fact that the old code works in this case is unexpected, and can cause us to miss bugs in our code. We wanted to return the date of the marriage, and we hadn’t gone that far into the nested hash, yet we got a result anyway. It is even worse than that, because the first option can also raise an exception in a similar case, while Hash#dig keeps the same behaviour:
params = { "user": { name: "Nicolas Cage", married: true } }
=> {:user=>{:name=>"Nicolas Cage", :married=>true}}
params[:user] && params[:user][:married] && params[:user][:married][:date]
=> NoMethodError: undefined method `[]' for true:TrueClass
from (irb):9
from /usr/bin/irb.ruby2.4:11:in `<main>'
params.dig(:user, :married, :date)
=> TypeError: TrueClass does not have #dig method
from (irb):10:in `dig'
from (irb):10
from /usr/bin/irb.ruby2.4:11:in `<main>'
And we can find even stranger cases. The method str[match_str] allows us to find curious examples when using strings as keys, such as:
params = { "user" => "Nicolas Cage" } => {"user"=>"Nicolas Cage"}
params.dig("user","age") => TypeError: String does not have #dig method
params["user"] && params["user"]["age"] => "age"
and the same happens when using integers as keys:
params.dig("user",1) => TypeError: String does not have #dig method
params["user"] && params["user"][1] => "i"
And arrays also suffer from these strange differences in the case of numbers:
array=["hola"] => ["hola"]
array.dig(0,1) => TypeError: String does not have #dig method
array[0] && array[0][1] => "o"
And how does try behave? It always returns nil, without letting us know how deep in the hash it failed:
params.try(:user).try(:married).try(:date)
=> nil
try is coherent, but take into account that it is only available in Rails.
This all shows us that refactoring the code won’t be straightforward, as these three options, sometimes presented as equivalent, are not exactly equivalent.
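If the TypeError behaviour shown above gets in the way while migrating old code, one possible middle ground is a tiny helper. This safe_dig is a hypothetical sketch of mine, not part of Ruby or Rails:

```ruby
# Hypothetical helper (not in Ruby or Rails): like dig, but returns nil
# instead of raising TypeError when an intermediate value (true, false,
# a number, a string, ...) is not diggable.
def safe_dig(obj, *keys)
  keys.reduce(obj) do |memo, key|
    break nil unless memo.respond_to?(:dig)  # stop quietly on non-diggables
    memo.dig(key)
  end
end
```

With the marriage example above, safe_dig(params, :user, :married, :date) returns nil where Hash#dig raises TypeError.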
Why is it not popular?
As we have seen, these new methods can be really useful. But why, then, are they not popular?
The first reason is the one we have already elaborated on: they are not equivalent to what we had before, which implies that things could start failing if we refactor old code.
Another problem can be the lack of documentation, which we can see illustrated in the following Stack Overflow post: How do I use Array#dig and Hash#dig introduced in Ruby 2.3? :joy: And to be honest, even the release notes seem confusing to me. What is meant by “Array#dig and Hash#dig are also added. Note that this behaves like try! of Active Support, which specially handles only nil.”?
And another good reason why these methods may have gone unnoticed is what I already mentioned at the beginning: the controversial &. has meant that almost nobody has noticed this beautiful method. And that is just because we humans tend to put more emphasis on complaints.
Conclusion
It seems that the Hash#dig and Array#dig methods can make our lives easier and help us to detect errors. The best way to increase their use, and to ensure we use them when possible in our projects, would be to create a Rubocop cop which supports autocorrection. :grimacing: I already opened an issue for it, but the fact that the implementations are not equivalent means that the autocorrection, or even the implementation of the cop, is not possible. However, it seems that this was already implemented in salsify_rubocop. In this gem, Rubocop is extended with a new Dig cop. This cop enforces my_hash.dig('foo', 'bar') over my_hash['foo']['bar'] and my_hash['foo'] && my_hash['foo']['bar']. It also supports autocorrection.
Last but not least, I would like to share another blog post about Hash#dig, which discusses topics different from the ones here, such as the efficiency of Hash#dig: Ruby 2.3 dig Method - Thoughts and Examples.
And that was all. Start taking advantage of the already old Hash#dig and Array#dig methods, and maybe soon they will become as popular as they deserve! :wink:
Meltdown and Spectre Linux Kernel Status
By now, everyone knows that something “big” just got announced regarding computer security. Heck, when the Daily Mail does a report on it, you know something is bad…
Anyway, I’m not going to go into the details about the problems being reported, other than to point you at the wonderfully written Project Zero paper on the issues involved here. They should just give out the 2018 Pwnie award right now, it’s that amazingly good.



