Meltdown: Chip-level security bug found in Intel CPUs of the last decade
2018 starts with a big chip-level security bug making headlines, one that Google researchers already found and reported in June 2017.
The reason it is currently so heavily discussed in the media is that this is a significant chip-level security bug affecting all Intel (and possibly other manufacturers') CPUs of the last decade, and therefore millions to billions of computers, including the huge cloud services from Google, Microsoft, Amazon and everyone else using Intel x86 CPUs of the last decade.
Briefly
...
2017w51-52: package list rewrite and repo_checker optimization
package list wrapper scripts port and rewrite
The package list generation, or pkglistgen, code is responsible for expanding the base package groups into the full list that is then placed on the various installation media. Recently the tool was rewritten, but it was left with rough edges in the form of wrapper scripts around the core solving code. Much of the scripting was hard-coded and would benefit from a rewrite as well as a port to Python, in which the rest of the code lives.
During the port the code was restructured for readability and the various hard-coded bits were replaced with proper variables and values loaded from their sources of truth. The final result is a much more flexible and future-proof solution. Leap 15.0 is actively using this code as part of its development workflow.
repo_checker build hash optimization follow-up
After the repo_checker was improved to store and compare build hashes to reduce rechecking, the expected side effect was repeated comments during a target project rebuild (like after a checkin round), since the comment was always replaced if the build hash differed. The follow-up optimization was to avoid posting comments while the target project is rebuilding unless the text of the comment changed. Once the target project finishes rebuilding, the final build hash is posted and the repo_checker stops rechecking until the build hash changes.
Combined with the build hash addition, this maintains the much improved cycle time of the repo_checker while avoiding notification spam and still picking up changes quickly. As such, new requests are still processed promptly. Having a different mechanism act as the source of truth for this information might be beneficial, but it would introduce more complexity, since all the review bots store their state in request reviews or comments.
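As a rough sketch of that decision logic (function and argument names are made up for illustration, not taken from the actual bot code):

def should_replace_comment(existing_text, new_text, build_hash_changed, target_rebuilding):
    """Decide whether the repo_checker should (re)post its review comment."""
    if existing_text is None:
        return True  # nothing has been posted yet
    if target_rebuilding:
        # While the target project rebuilds, a changing build hash alone is
        # not worth a new comment; only post if the text itself changed.
        return existing_text != new_text
    # Once the rebuild is done the final build hash is worth posting.
    return build_hash_changed or existing_text != new_text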
For an example of the volume and how this plays out, take a look at a summary from the pre-processing phase of a repo_checker cycle. Generally the not ready state will change to either accepted, or to build unchanged if there are issues that need to be resolved. Any of these three states can be skipped, and accepted is entirely ignored since that is the end state.
last year
Over the past year much of my time has been spent refactoring code to avoid duplication and differing implementations of the same thing, and simplifying the code where possible to make it easier to improve and maintain. The primary focus at the time was the ReviewBot based code. The bots one interacts with on OBS as part of the distribution development workflow are all built on the ReviewBot base.
- Rework ReviewBot.CommandLineInterface to provide class option [+38 −125]
- Port osc-check_source.py to ReviewBot as check_source.py [+289 −458]
  - runs as the factory-auto user
  - improved flexibility, fixed bugs, and ~twice as fast
- runs as the
Using openSUSE Leap 42.3 on the Raspberry Pi 2 without a monitor
The Raspberry Pi 2 is a handy little computer, and Raspbian, the distribution made for it, is a well-crafted Linux distribution. However, if you already run openSUSE on several machines, it is much more attractive to use it there as well, especially since that works without any problems. Running Tumbleweed on the Raspberry Pi 2 is documented in detail in the openSUSE wiki. With a few small changes, openSUSE Leap can be used on the Raspberry Pi 2 in the same way, and without having to connect a monitor or mouse to it.
Installing an operating system on the Raspberry Pi always works by copying the appropriate installer image onto an SD card.
On the first boot the system then sets itself up.
The right file for openSUSE Leap 42.3 can be found at http://download.opensuse.org/ports/armv7hl/distribution/leap/42.3/appliances/.
For running without keyboard and monitor, the Just enough OS (JeOS) variant is a good choice.
You can find the right file by searching the download page for JeOS-raspberrypi2.
Currently this is http://download.opensuse.org/ports/armv7hl/distribution/leap/42.3/appliances/openSUSE-Leap42.3-ARM-JeOS-raspberrypi2.armv7l.raw.xz.
It is also advisable to verify the checksum.
> sha256sum -c openSUSE-Leap42.3-ARM-JeOS-raspberrypi2.armv7l-2017.07.26-Build1.1.raw.xz.sha256
openSUSE-Leap42.3-ARM-JeOS-raspberrypi2.armv7l-2017.07.26-Build1.1.raw.xz: OK
sha256sum: WARNUNG: 14 Zeilen sind nicht korrekt formatiert
The warning about the improperly formatted lines is there because the file is PGP-signed. The signature can be verified with GPG.
> gpg --verify openSUSE-Leap42.3-ARM-JeOS-raspberrypi2.armv7l-2017.07.26-Build1.1.raw.xz.sha256
gpg: Signatur vom Mi 26 Jul 2017 19:39:17 CEST mittels RSA-Schlüssel ID 3DBDC284
gpg: Korrekte Signatur von "openSUSE Project Signing Key <opensuse@opensuse.org>" [vollständig]
note: random_seed file not updated
Now that it is clear that the image is not corrupted, you can follow the instructions for Tumbleweed in the openSUSE wiki.
Careful! Writing to the SD card can destroy data on another disk if you pick the wrong device name.
If you are not 100% sure which device name the SD card got, you can look in dmesg right after inserting it to see which name the card was assigned.
In my example the card reader has several slots, so several devices show up at once, but only for the device holding the SD card are the capacity (sd 8:0:0:3: [sdi] 15677440 512-byte logical blocks: (8.03 GB/7.48 GiB)) and the partition list (sdi: sdi1 sdi2 < sdi5 sdi6 > sdi3) printed.
> dmesg | tail -n 30
[13801.698740] EXT4-fs error (device sdi6): htree_dirblock_to_tree:986: inode #2: block 8582: comm dolphin: bad entry in directory: directory entry across range - offset=0(0), inode=4489218, rec_len=32780, name_len=129
[13814.445903] sdi: detected capacity change from 8026849280 to 0
[13897.279227] sd 8:0:0:3: [sdi] 15677440 512-byte logical blocks: (8.03 GB/7.48 GiB)
[13897.285959] sdi: sdi1 sdi2 < sdi5 sdi6 > sdi3
[14939.444590] usb 2-6: USB disconnect, device number 2
[37767.485975] usb 2-6: new SuperSpeed USB device number 3 using xhci_hcd
[37767.532474] usb 2-6: New USB device found, idVendor=8564, idProduct=4000
[37767.532478] usb 2-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[37767.532479] usb 2-6: Product: USB3.0 Card Reader
[37767.532479] usb 2-6: Manufacturer: Realtek
[37767.532480] usb 2-6: SerialNumber: 201412031053
[37767.539943] usb-storage 2-6:1.0: USB Mass Storage device detected
[37767.546235] scsi host8: usb-storage 2-6:1.0
[37768.558079] scsi 8:0:0:0: Direct-Access Generic- USB3.0 CRW -0 1.00 PQ: 0 ANSI: 6
[37768.558355] sd 8:0:0:0: Attached scsi generic sg6 type 0
[37768.569622] scsi 8:0:0:1: Direct-Access Generic- USB3.0 CRW -1 1.00 PQ: 0 ANSI: 6
[37768.569862] sd 8:0:0:1: Attached scsi generic sg7 type 0
[37768.582535] scsi 8:0:0:2: Direct-Access Generic- USB3.0 CRW -2 1.00 PQ: 0 ANSI: 6
[37768.582758] sd 8:0:0:2: Attached scsi generic sg8 type 0
[37768.601427] scsi 8:0:0:3: Direct-Access Generic- USB3.0 CRW -3 1.00 PQ: 0 ANSI: 6
[37768.601678] sd 8:0:0:3: Attached scsi generic sg9 type 0
[37769.473299] sd 8:0:0:3: [sdi] 15677440 512-byte logical blocks: (8.03 GB/7.48 GiB)
[37769.473958] sd 8:0:0:0: [sdf] Attached SCSI removable disk
[37769.475113] sd 8:0:0:3: [sdi] Write Protect is off
[37769.475116] sd 8:0:0:3: [sdi] Mode Sense: 2f 00 00 00
[37769.475608] sd 8:0:0:1: [sdg] Attached SCSI removable disk
[37769.477768] sd 8:0:0:3: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[37769.478227] sd 8:0:0:2: [sdh] Attached SCSI removable disk
[37769.484922] sdi: sdi1 sdi2 < sdi5 sdi6 > sdi3
[37769.488248] sd 8:0:0:3: [sdi] Attached SCSI removable disk
Writing the image to the card requires root privileges.
The device has to be changed from /dev/sdX to the correct name.
For the example above that is /dev/sdi.
> sudo -s
# xzcat openSUSE-Leap42.3-ARM-JeOS-raspberrypi2.armv7l-2017.07.26-Build1.1.raw.xz | dd bs=4M of=/dev/sdX iflag=fullblock oflag=direct
328+1 records in
328+1 records out
1377828864 bytes (1.4 GB, 1.3 GiB) copied, 147.571 s, 9.3 MB/s
# sync
What happens here is the following:
- xzcat unpacks the image to standard output.
- dd takes the unpacked image from standard input and writes it 1:1 to the given device, without any buffering.
- sync makes sure that all data has actually been written and that the card can be removed safely.
Afterwards you can put the Raspberry Pi 2 into operation with the usual steps:
- take the card out and put it into the Pi,
- connect the network to the Raspberry Pi 2,
- and then power the Pi on.
The initial boot can take quite a while. For me it was a good hour and a half; with a faster SD card you can save some time here.
If there is no monitor attached to the Raspberry Pi 2, you now face the challenge of finding out its network address in order to connect to it.
In principle you can find it by looking at the last device that registered with your router.
If the router also resolves the hostnames the machines choose for themselves, it is even easier, since a fresh openSUSE always announces itself with the hostname linux.
On a Fritz!Box it is therefore reachable as linux.fritz.box.
Now you can log in for the first time. The user is root and the password is linux.
With a Fritz!Box, this means: ssh root@linux.fritz.box.
On first login you should also finish setting up the base system (a short command sketch follows below):
- change the password,
- authorize your own SSH key for login on the Raspberry Pi with ssh-copy-id,
- install updates,
- and change the hostname.
The last step is important, because otherwise bringing up another Raspberry Pi later can lead to name collisions, and it needlessly complicates resolving the network address of the new system.
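A minimal sketch of these steps (the key path and the new hostname are just examples, and hostnamectl is assumed to be available on JeOS):

passwd                                                  # change the root password
ssh-copy-id -i ~/.ssh/id_rsa.pub root@linux.fritz.box   # run this from your workstation
zypper refresh && zypper update                         # install updates
hostnamectl set-hostname mypi2                          # pick a unique hostname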
The Just enough in JeOS is meant quite literally, by the way.
If you want to continue setting up the system with Ansible afterwards, you should first install Python completely there: zypper in python3.
Without this step you can run into odd error messages, because parts of the Python standard library are missing from the minimal installation.
A failing test for Christmas
It seems I behaved really well in 2017, because Santa Claus brought me a failing test for Christmas. :stuck_out_tongue_winking_eye: I found a piece of code that was only wrong from 26th to 31st December. :christmas_tree:
The code
Imagine you want to write a Ruby method for a Rails project that returns all the users in the database who have their birthday today or within the next 6 days, given that the birth date is stored in the database for every user. How would you do it?
Since for birthdays you don't care about the year, you could just replace the users' birth years with the current one and check whether the result falls in the range you want. So, the code I found looked something like this:
def next_birthdays_1(number_days)
today = Date.current
User.select do |user|
(today..today + number_days.days).cover?(user.birthday.change(year: today.year))
end
end
This code seemed to work. There was even a test for it, and it passed. But on 26th December the coming new year broke the test, showing that the code was wrong: for example, if a user was born on 01/01/1960, the range 26/12/2017 to 01/01/2018 doesn't cover 01/01/2017.
Let’s fix the code! :muscle:
We can stop trying to be too smart and just test whether the day and month of the users' birth dates match any of the dates in the range:
def next_birthdays_2(number_days)
today = Date.current
User.select do |user|
(today..today + number_days.days).any? do |date|
user.birthday.strftime('%d%m') == date.strftime('%d%m')
end
end
end
This code works, and it works the whole year. :rofl: But what happens if we now want the birthdays of the next 30 days and we have, say, 30000 users? Would this code be efficient enough? Can we do better? :thinking:
One thing I came up with was reusing the original idea that we do not care about the year, but using two dates instead. So, for a user born on 01/01/1960, the range 26/12/2017 to 01/01/2018 should cover 01/01/2017 or 01/01/2018; for both dates the birthday is the same one. So, we can write this as follows:
def next_birthdays_3(number_days)
today = Date.current
User.select do |user|
(today..today + number_days.days).cover?(user.birthday.change(year: today.year)) ||
(today..today + number_days.days).cover?(user.birthday.change(year: (today.year + 1)))
end
end
But this method is wrong, as it has a problem which the first one also had. What happens if the user was born on 29th February of a leap year? birthday.change(year: 2017) would fail, as 2017 was not a leap year and 29th February 2017 doesn't exist. :see_no_evil: But we can do a smart trick to keep using the same idea: string comparison, without taking leap years into account! :smile: It would look like this:
def next_birthdays_4(number_days)
today = Date.current
today_str = today.strftime('%Y%m%d')
limit = today + number_days.days
limit_str = limit.strftime('%Y%m%d')
User.select do |user|
birthday_str = user.birthday.strftime('%m%d')
birthday_today_year = "#{today.year}#{birthday_str}"
birthday_limit_year = "#{limit.year}#{birthday_str}"
birthday_today_year.between?(today_str, limit_str) || birthday_limit_year.between?(today_str, limit_str)
end
end
Note that this method is not equivalent to next_birthdays_2, as it also returns users whose birthday is on 29th February when the range includes that date, even in a non-leap year. But I would say this is an advantage, as we do not want people born on 29th February to miss their birthday party in some years. :wink:
But remember that this is a Rails project, so we can do even better if we reuse this idea to build an SQL query! :tada: For example, for PostgreSQL 9.4:
def next_birthdays_5(number_days)
today = Date.current
today_str = today.strftime('%Y%m%d')
limit = today + number_days.days
limit_str = limit.strftime('%Y%m%d')
User.where(
"(('#{today.year}' || to_char(birthday, 'MMDD')) between '#{today_str}' and '#{limit_str}')" \
'or' \
"(('#{limit.year}' || to_char(birthday, 'MMDD')) between '#{today_str}' and '#{limit_str}')"
)
end
As with next_birthdays_4, this method returns users whose birthday is on 29th February when the range includes that date, even in a non-leap year.
In PostgreSQL we have the Date type and, from PostgreSQL 9.2 on, also daterange, which we could have used to make this query more efficient. This would have been equivalent to next_birthdays_3 and would have had the same problem: it fails with leap years.
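For completeness, a sketch of what that daterange variant could look like (method name and SQL are mine, not from the original code; like next_birthdays_3 it breaks for people born on 29th February, because make_date rejects that day in non-leap years):

def next_birthdays_daterange(number_days)
  today = Date.current
  limit = today + number_days.days
  # Build the user's birthday in a given year directly in SQL.
  birthday_in_year = "make_date(?, extract(month from birthday)::int, extract(day from birthday)::int)"
  User.where(
    "daterange(?, ?, '[]') @> #{birthday_in_year} " \
    "or daterange(?, ?, '[]') @> #{birthday_in_year}",
    today, limit, today.year,
    today, limit, limit.year
  )
end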
Efficiency
And now it is time to try it! Let's check how efficient methods 2, 4 and 5 are (the other two don't work properly, as we have already seen).
I created 30000 users with different birth dates using Faker. I executed the different methods to get the number of people with a birthday in the next 31 days and measured the execution time using Benchmark.measure. These are the elapsed real times for each of the methods on my computer:
- next_birthdays_2(30): ~ 3.3 seconds
- next_birthdays_4(30): ~ 0.9 seconds
- next_birthdays_5(30): ~ 0.0003 seconds
Take into account that next_birthdays_2 is much more affected by the number of days than the other two methods. But even for 6 days it is really slow: the elapsed real time on my computer for next_birthdays_2(6) is around 1.45 seconds.
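For reference, a rough sketch of how such a measurement could look in the Rails console (the Faker call is simplified and assumes the User model only needs a birthday attribute):

require 'benchmark'
require 'faker'

# Seed users with random birth dates (sketch).
30_000.times { User.create!(birthday: Faker::Date.birthday) }

puts Benchmark.measure { next_birthdays_2(30).count }
puts Benchmark.measure { next_birthdays_4(30).count }
puts Benchmark.measure { next_birthdays_5(30).count }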
Conclusion
And the funny thing about all this is that, as it is already 2018, the failing Christmas test doesn't fail anymore, even though I haven't fixed it yet. :joy: This happened because we forgot to add test cases for the edge cases, and it helps us learn that when working with dates we have to pay special attention to the change of year, to the 29th of February and to leap years. And of course all this should be properly tested.
Another thing we can learn from this post is that, as Rails developers, we should never forget that database queries are way more efficient than doing the same work in Ruby code.
Last but not least, we should consider whether a method is worth the time we need to invest in writing it. For example, in this case we could have asked whether we could live with a method that returns the birthdays of the current month, which is much easier to implement, instead of this more complicated option. Or, if we do not expect our application to have a lot of users, we could even have used next_birthdays_2.
And if you want to take a look at the original code which inspired this post, you can find it in the following PR: https://github.com/openSUSE/agile-team-dashboard/pull/100
Happy new year! :christmas_tree: :champagne:
An Introduction To Rendering In React
In this post I want to talk about how React renders components, and how it tries to improve performance by using its reconciliation algorithm to only update the parts of the DOM that need updating. This is not meant to be an introduction to React, but I will quickly go over some of the relevant foundation concepts in the next section.
What Is React?
React is an open source JavaScript library created by Facebook to address some of the needs they had when developing the Facebook website. React has played an important role in the evolution of JavaScript frameworks because it is responsible for popularizing the Component Based Architecture (CBA) paradigm. An important distinction between React and frameworks like Angular is that React is only a library. For example, React does not provide its own routing or HTTP libraries. To perform these types of tasks, React developers have to rely on other libraries such as React Router and the Fetch API.
Since React is simply a library, it can be used in existing projects very easily:
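The code snippet embedded in the original post is not included in this copy, so here is a minimal illustrative sketch instead (the element, class name and mount point are made up, and React and ReactDOM are assumed to be loaded via script tags):

const element = React.createElement(
  'h1',
  { className: 'greeting' },
  'Hello, world!'
);
ReactDOM.render(element, document.getElementById('root'));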
While it is nice to be able to use React with pure JavaScript, things can get hairy when writing a very large app. For this reason you may find it easier to use JSX, an alternative way of writing React code. JSX essentially allows you to combine HTML and JavaScript syntax to make the code easier to manage. The same example above can be written using JSX (more on it below):
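Again as an illustrative sketch rather than the original snippet, the same example in JSX:

ReactDOM.render(
  <h1 className="greeting">Hello, world!</h1>,
  document.getElementById('root')
);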
Component Based Architecture (CBA)
In this article I am not going to go into detail about CBA (there are a ton of great articles already available about that), but I will explain some of the motivation as to why React's approach is useful for developers. When writing code using a framework like AngularJS, you have to use Controllers and Views to create your application. Controllers house the logic (JS) of your application, while the Views house the UI elements (HTML). This separation of logic from UI allows one to be changed without significantly impacting the other. For example, if you need to change the logic in a function, this can be done without having to rewrite any of the UI. One of the problems with this approach, however, is that it makes it difficult to modify a specific part of the UI without also having to modify other parts of the interface and logic.
What makes CBA different is that in CBA the logic and UI are kept together, causing the two to be coupled to each other. The goal of CBA is to encapsulate portions of an interface into self contained units called components. The use of encapsulation allows a component to be changed without the developer having to modify any other component. Another major benefit of components is that they are reusable, which is key to writing good code in React.
JSX
JSX is a syntax extension for JavaScript that combines a templating language with JavaScript. While JSX is optional in React, it is much more elegant than using pure JavaScript, so I prefer it when building React applications. Just like XML, JSX elements can have names, attributes and children. Values enclosed in curly braces are interpreted as JavaScript expressions, which you can use to substitute a hard-coded value with a variable or an evaluated expression.
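For example, a tiny illustrative snippet (the variable name is made up) that uses curly braces to substitute a value:

const name = 'Ada';
const greeting = <p>Hello, {name.toUpperCase()}!</p>;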
When React renders JSX (more on this later) it will convert the JSX elements into JavaScript and then use them to create the DOM. An interesting property of JSX is that it prevents injection attacks by converting everything into a string before rendering it. This ensures that a malicious user cannot execute XSS attacks using your application.
Unidirectional Data Flow
Another fundamental concept in React is the Unidirectional Data Flow which is how React propagates updates to components. Actions in the UI lead to the state of components being updated which in turn will cause the view to update to reflect these new changes.
[Diagram from the original post: the unidirectional data flow. UI actions update component state, the state update re-renders the view, and props flow from parent components down to their children.]
Parent components can pass variables to their child components using props, providing them with the necessary context in which to render. Note that the arrows are not bidirectional: components cannot update their parents, they can only receive updates from them.
The Initial Render
All React applications start at a root DOM node that marks the portion of the DOM that will be managed by React. You add child components to this node using React to get your application to look and behave the way you desire.
JSX
When React is called to render the component tree, it first needs the JSX in your code to be converted into pure JavaScript. This can be achieved by making use of Babel. You may already be familiar with the Babel project for its transpiling capabilities, but it is also used to convert the JSX you write into pure JavaScript. To see this in action you can check out the live demo on the Babel website. On the left is the JSX and on the right is the resulting JavaScript.
Lifecycle Methods
React comes with some lifecycle methods that you can use to update and control the application state. You can find out more about how and when to use them by reading this great article by Bartosz Szczeciński.
By default, React will re-render all child components of any component that itself has to re-render. This behavior may not always be ideal, for example when re-rendering a component requires performing costly calculations. The shouldComponentUpdate lifecycle method can be used to address this concern. By default it always returns true, but you can add logic so that it only returns true under specific conditions. Note that shouldComponentUpdate is not consulted for the initial render or when forceUpdate() is used, so those will still cause a render even if the method would return false.
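As an illustration (component and prop names are invented, not from the original post), a component that only re-renders when the prop it displays changes could look like this:

class Price extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // Skip re-rendering unless the displayed amount actually changed.
    return nextProps.amount !== this.props.amount;
  }
  render() {
    return <span>{this.props.amount} €</span>;
  }
}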
Updating Components
Once React has completed the initial render for your application it waits for one of two events before triggering an update for a component. Either the internal state of the component changes, or the props being passed into the component change.
Internal state of a component can be changed by calling the setState() function. The important thing to remember about setState() is that it is not executed immediately, as React may delay the state update.
Think of setState() as a request rather than an immediate command to update the component. For better perceived performance, React may delay it, and then update several components in a single pass. React does not guarantee that the state changes are applied immediately.
— React Docs
Props, unlike state, cannot be modified by the component they are passed into. Components must instead rely on their parent to provide updates to the props. These updates can occur in two situations: either the parent updates the props due to some internal state change of its own, or the child calls a function passed down by the parent that updates some state in the parent, which in turn updates the props of the component and causes it to re-render. Below is an example of how a child component would call a function passed down by its parent; note that it assumes component C did not need to be re-rendered (this is not always the case).
[Diagram from the original post: a child component calls a function passed down from its parent; the parent's state update re-renders the affected components, while component C is not re-rendered.]
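As a complementary sketch (names invented), this is roughly what such a parent/child pair could look like in code: the child calls a callback it received via props, the parent updates its own state with setState(), and the new state flows back down as props:

class Parent extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.handleIncrement = this.handleIncrement.bind(this);
  }

  handleIncrement() {
    // setState() is a request; React may batch several updates into one pass.
    this.setState(prevState => ({ count: prevState.count + 1 }));
  }

  render() {
    return <Child count={this.state.count} onIncrement={this.handleIncrement} />;
  }
}

function Child(props) {
  return (
    <button onClick={props.onIncrement}>
      Clicked {props.count} times
    </button>
  );
}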
Reconciliation
When React has to perform updates to a component it attempts to improve performance by only updating what needs to be updated. The way React accomplishes this is by using its ‘diffing’ or reconciliation algorithm. According to the React docs, there are two assumptions that React makes when it comes to reconciliation:
- Two elements of different types will always produce different trees
- Developers can hint which child elements are stable across different renders using a key prop
The reconciliation algorithm behaves differently depending on what type of root element it has to re-render.
Elements Of Different Types
If the new element is a different type of element than the old element (ex. changing from <span> to <h1>) then React has to perform a full rebuild of the tree. This means that it will destroy the old tree and build a new one from scratch. Note that since the old tree is destroyed, any state associated with it will also be gone. When the old nodes are destroyed, the componentWillUnmount lifecycle method will be invoked. As the new tree is created, the new nodes will call the componentWillMount and componentDidMount lifecycle methods.
Elements Of The Same Type
If the new element is the same type as the old element then React will keep the same DOM node and instead only updates the attributes that have changed. When the props are changed to match the new element, the componentWillReceiveProps and the componentWillUpdate lifecycle methods are called. For example if you update the style for an element, React knows to only update the specific properties of the CSS that changed. Note that since the node is not destroyed any state associated with the old node is still available. Once React has updated the node, it will recurse on its children.
Recursing On Children
When the reconciliation algorithm recurses on the children of a node, React iterates over the lists of both the old and the new children. There is a good explanation of how this works in the React docs. As mentioned there, there may be performance issues if a new child is not appended to the end of the list, as React will not know what did and did not change. To avoid this issue you can use the key prop to give each element a unique ID, so React can determine what actually changed.
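For instance (names invented), a list that gives each child a stable key so React can match old and new children:

function TodoList(props) {
  return (
    <ul>
      {props.todos.map(todo => (
        <li key={todo.id}>{todo.text}</li>
      ))}
    </ul>
  );
}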
Conclusion
Hopefully that was a useful explanation of how React renders components and why it's important to be aware of how you can improve performance by following certain coding practices. If you have any questions or feedback, please leave a message in the comments.
An Introduction To Rendering In React was originally published in Information & Technology on Medium, where people are continuing the conversation by highlighting and responding to this story.
AppArmor 2.12 - The Grinch is confined!
There is this old quote from LKML:
Get back there in front of the computer NOW. Christmas can wait.
[Linus "the Grinch" Torvalds, 24 Dec 2000 on linux-kernel]
The AppArmor developers followed this advice - John released AppArmor 2.12 yesterday (Dec 25), and I just submitted updated packages to openSUSE Tumbleweed (SR 560017).
The most visible changes in 2.12 are support for "owner" rules in aa-logprof and the upstreaming of the aa-logprof --json interface (used by YaST). Of course that's only the tip of the Christmas cookie ;-) - see the Release Notes for all the details.
One important change in the openSUSE packages is that I intentionally broke "systemctl stop apparmor". The reason for this is "systemctl restart apparmor" - systemd maps this to stop, followed by start. This resulted in all AppArmor profiles being unloaded by the "stop" part, and even if they get loaded again a second later, running processes will stay unconfined unless you restart them. The systemd developers were unwilling to implement the proposed ExecRestart= option for unit files, therefore breaking "stop" is the best thing I can do. (See boo#996520 and boo#853019 for more details.)
"systemctl reload apparmor" will continue to work and is still the recommended way to reload the AppArmor profiles, but accidentally typing "restart" instead of "reload" can easily happen. Therefore I chose to break "stop" - that's annoying, but more secure than accidentally removing the AppArmor confinement from running processes.
If you really want to unload all AppArmor profiles, you can use the new "aa-teardown" command which does what "systemctl stop apparmor" did before - but who would do that? ;-)
Note that the above (except the recommendation to use "reload") only applies to Tumbleweed and Leap 15.
PostmarketOS and digital cameras
GNOME.Asia Summit 2017
It's important to get support from universities when we want to keep promoting open source and free software.
( https://www.flickr.com/groups/gnomeasia2017/pool )
Librsvg 2.40.20 is released
Today I released librsvg 2.40.20. This will be the last release in the 2.40.x series, which is deprecated effective immediately.
People and distros are strongly encouraged to switch to librsvg 2.41.x as soon as possible. This is the version that is implemented in a mixture of C and Rust. It is 100% API and ABI compatible with 2.40.x, so it is a drop-in replacement for it. If you or your distro can compile Firefox 57, you can probably build librsvg-2.41.x without problems.
Some statistics
Here are a few runs of loc — a tool to count lines of code — when run on librsvg. The output is trimmed by hand to only include C and Rust files.
This is 2.40.20:
-------------------------------------------------------
Language Files Lines Blank Comment Code
-------------------------------------------------------
C 41 20972 3438 2100 15434
C/C++ Header 27 2377 452 625 1300
This is 2.41.latest (the master branch):
-------------------------------------------------------
Language Files Lines Blank Comment Code
-------------------------------------------------------
C 34 17253 3024 1892 12337
C/C++ Header 23 2327 501 624 1202
Rust 38 11254 1873 675 8706
And this is 2.41.latest *without unit tests*, just "real source code":
-------------------------------------------------------
Language Files Lines Blank Comment Code
-------------------------------------------------------
C 34 17253 3024 1892 12337
C/C++ Header 23 2327 501 624 1202
Rust 38 9340 1513 610 7217
Summary
Not counting blank lines nor comments:
- The C-only version has 16734 lines of C code.
- The C-only version has no unit tests, just some integration tests.
- The Rust-and-C version has 13539 lines of C code, 7217 lines of Rust code, and 1489 lines of unit tests in Rust.
As for the integration tests:
- The C-only version has 64 integration tests.
- The Rust-and-C version has 130 integration tests.
The Rust-and-C version supports a few more SVG features, and it is A LOT more robust and spec-compliant with the SVG features that were supported in the C-only version.
The C sources in librsvg are shrinking steadily. It would be incredibly awesome if someone could run some git filter-branch magic with the loc tool and generate some pretty graphs of source lines vs. commits over time.
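A rough sketch of how the raw data for such a graph could be collected (the branch name, the loc summary format and the awk filter are assumptions; plotting would still be a separate step):

#!/bin/sh
# Walk the history and record the C and Rust rows of the loc summary per commit.
for commit in $(git rev-list --reverse master); do
    git checkout --quiet "$commit"
    date=$(git show -s --format=%ci "$commit" | cut -d' ' -f1)
    loc | awk -v d="$date" '/^ *(C|Rust) / { print d, $0 }'
done >> loc-history.txt
git checkout --quiet master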