Optimizing boot time, aka the 2 second boot
During hackweek I decided to take a look at the boot process to find out how fast the system can boot. This metric is often considered unimportant, especially in the geek community, but I am sure it is an important part of the user experience. The following text shares my investigations with you – be aware of what you are doing if you follow along, since I focused on reducing the boot time as much as possible.
A bit of theory
Basically you have three possibilities to make your boot fast:
- Start things in parallel.
- Start things on demand.
- Start fewer things.
… root of all evil …
“Premature optimization is the root of all evil.”
And I will modestly extend Donald Knuth’s sentence: blind optimization is as well. So if we have to optimize something, we have to know what to optimize. Fortunately systemd comes with an excellent tool called systemd-analyze, which shows us our boot in several ways.
A simple run of the command prints the time spent in boot, broken down by phase.
# systemd-analyze
Startup finished in 8480ms (kernel) + 30873ms (userspace) = 39353ms
That was a default (minimal X system) 12.2 installation on my EEE 701 netbook, which probably couldn’t compete with a present-day smartphone because it is pathetically slow. On the other hand it is a perfect playground, so let’s continue investigating.
The overall time is nice to know, but it won’t tell us what’s going on. Two more subcommands, blame and plot, show us more information about the boot. The first shows the services sorted by start-up time. The ones that take longest to boot are the first candidates to be kicked out.
Let’s see what slows the boot down the most:
$ systemd-analyze blame | head
11385ms network.service
 5664ms SuSEfirewall2_init.service
 5575ms systemd-vconsole-setup.service
 3032ms ntp.service
 2840ms remount-rootfs.service
 2230ms postfix.service
 2021ms network-remotefs.service
 1925ms cpufreq.service
 1661ms SuSEfirewall2_setup.service
 1506ms xdm.service
And take a look at the output of the systemd-analyze plot command.

You can see that there is a long chain SuSEfirewall2_init -> network -> network-remotefs -> SuSEfirewall2_setup that takes several dozen seconds to finish. Nothing is wrong with that as such – but it is a server setup, not what I want to have on my tiny laptop.
Making a laptop boot twice as fast
So with the complex dependencies of several services in mind, I decided to mask some of them. Masking in the systemd world means the service cannot be started using systemd – it becomes invisible to it. I masked the following:
- network.service – will be replaced by NetworkManager, which is more suitable for laptop usage
- SuSEfirewall2_init and SuSEfirewall2_setup – even though it is a security feature, the risk for a laptop which is mostly offline and runs only sshd is pretty small
- ntp.service, network-remotefs.service – these do not make sense on my laptop
- postfix.service – I do not want to send emails via /usr/bin/sendmail
- cpufreq.service – it is not even supported by my CPU (grep rc.cpufreq /var/log/messages)
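The masking step above can be sketched as a small loop (the service list is taken from the post; the echo makes it a dry run – drop it to actually mask the units):

```shell
# Dry-run sketch of masking the services listed above.
# Remove the "echo" to really run systemctl mask.
for svc in network SuSEfirewall2_init SuSEfirewall2_setup ntp \
           network-remotefs postfix cpufreq; do
    echo systemctl mask "${svc}.service"
done
```

Masking can be undone at any time with systemctl unmask, so this is a fairly safe experiment.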
Do not forget to install NetworkManager and its applet, adjust /etc/sysconfig/network/config accordingly, and reboot.
Now we have
$ systemd-analyze
Startup finished in 8528ms (kernel) + 11123ms (userspace) = 19652ms
Using strace with systemd
Now we have a new list of the worst services:
$ systemd-analyze blame | head -n 10
5476ms xdm.service
4172ms systemd-vconsole-setup.service
3950ms systemd-modules-load.service
2781ms remount-rootfs.service
1848ms NetworkManager.service
1439ms media.mount
1426ms systemd-remount-api-vfs.service
1419ms dev-hugepages.mount
1411ms dev-mqueue.mount
1371ms sys-kernel-debug.mount
and a proper boot chart

It shows us another bottleneck: systemd-vconsole-setup.service, because it delays sysinit.target, which is a very early boot stage. In a case like this, we can only use strace to find out what takes so long. Debugging is pretty straightforward in the systemd world. All we have to do is copy the service file to /etc/systemd/system and change the ExecStart line:
ExecStart=/usr/bin/strace -f -tt -o /run/%N.strace /lib/systemd/systemd-vconsole-setup
and reboot. You will then find the output, with timestamps, in /run/systemd-vconsole-setup.strace. Looking there, it is obvious that calling hwinfo --bios is extremely expensive at this stage. You can speed the unit up by setting KBD_NUMLOCK to yes or no in /etc/sysconfig/keyboard, or you can try to mask it completely as I did.
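The copy-and-edit step can be wrapped in a tiny helper. This is a sketch of my own; it assumes the unit file has a single ExecStart= line, which is true for simple units like this one:

```shell
# Sketch: prepend strace to the ExecStart= line of a unit file, in place.
# Assumption: the unit has exactly one plain "ExecStart=" line.
add_strace() {
    sed -i 's|^ExecStart=|ExecStart=/usr/bin/strace -f -tt -o /run/%N.strace |' "$1"
}

# Usage (copy the unit first so the original in /lib stays untouched):
# cp /lib/systemd/system/systemd-vconsole-setup.service /etc/systemd/system/
# add_strace /etc/systemd/system/systemd-vconsole-setup.service
```

After the next boot, remove the copy from /etc/systemd/system and run systemctl daemon-reload to go back to the stock unit.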
The next service that needed a closer look was systemd-modules-load – strace says it spent 2 seconds(!) in init_module() for the microcode module. I disabled it as well, though for CPUs that need microcode updates this cannot be recommended.
Native systemd units
There is one tiny init script called purge-kernels, which takes 300ms to start according to blame. In this particular case a native systemd alternative is way more effective:
$ cat /etc/systemd/system/purge-kernels.service
[Unit]
Description=Purge old kernels
After=local-fs.target
ConditionPathExists=/boot/do_purge_kernels

[Service]
Type=oneshot
ExecStart=/sbin/purge-kernels
because systemd only does one stat() on the condition file and, if it is absent, does not run the service at all, so it disappears from the blame output entirely.
The kernel time
There is one interesting thing about the kernel time – 8 seconds spent there seems like a lot to me. A simple ls on /boot gave me a pointer:
$ ls -lh /boot/vmlinuz-* /boot/initrd-*
-rw-r--r-- 1 root root  14M Jul 24 11:03 /boot/initrd-3.4.4-1.1-desktop
-rw-r--r-- 1 root root 4.7M Jul 10 15:48 /boot/vmlinuz-3.4.4-1.1-desktop
The initrd is huge – around three times bigger than the kernel. So let’s try to find out what caused that. Every package can add its own setup script into /lib/mkinitrd/scripts/, so let’s ask rpm which packages did that:
$ rpm -qf /lib/mkinitrd/scripts/setup-* | sort -u
cifs-utils-5.5-2.2.2.i586
cryptsetup-1.4.2-3.2.1.i586
device-mapper-1.02.63-26.1.1.i586
dmraid-1.0.0.rc16-18.2.1.i586
kpartx-0.4.9-3.1.1.i586
lvm2-2.02.84-26.1.1.i586
mdadm-3.2.5-3.3.2.i586
mkinitrd-2.7.0-62.2.1.i586
multipath-tools-0.4.9-3.1.1.i586
plymouth-scripts-0.8.5.1-1.3.1.noarch
splashy-0.3.13-35.1.1.i586
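Before removing anything, it can also help to peek inside the initrd itself. A minimal sketch of my own, assuming the gzip-compressed cpio format that mkinitrd produced at the time:

```shell
# Sketch: list the files inside an initrd.
# Assumption: the initrd is a gzip-compressed cpio archive (mkinitrd format).
list_initrd() {
    zcat "$1" | cpio -it 2>/dev/null
}

# Usage: list_initrd /boot/initrd-3.4.4-1.1-desktop | less
```

That makes it easy to see which modules and firmware blobs contribute most to the size.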
So I went through the list and tried to uninstall the things I do not need:
- cifs-utils – if you do not have any Windows disk to mount you can remove it, but it has no impact on the initrd size
- cryptsetup – a popular service on laptops, but I do not have any LUKS device, so let’s skip it. It removes half of YaST as well, so I saved 18M of disk space, but only a little in the initrd.
- device-mapper, dmraid, kpartx and lvm2 – cannot be easily removed, as too much low-level stuff depends on them
- mdadm – no Linux md devices here, skip it
- mkinitrd – removal would reduce the initrd to zero, but then we would need our own kernel
- plymouth-scripts – who needs a “fancy” boot when booting this fast? – reduces the initrd to 8.9M
- splashy – the same – reduces the initrd to 6.6M
- multipath-tools – no multipath devices, skip it
So the things intended to provide a fancy boot actually bloat the system. Let’s measure the impact of those changes:
$ systemd-analyze
Startup finished in 2781ms (kernel) + 4999ms (userspace) = 7780ms

And that’s all folks …?
There are a lot of factors slowing our boot down – reducing it to 8 seconds is not bad at all. One has to go carefully through the blame and plot output to see what delays their computer at start-up. I would say making NetworkManager the default, at least when installing the laptop pattern, would be a nice and simple change, as would continuing the “systemdification” of openSUSE.
There are a few other tricks which get us closer to the target time, but I’ll post them another day.
GUADEC, here I come
I'm leaving for the airport in a few minutes: GUADEC is my next stop!
Like a few other people, I'll land just before midnight, and hopefully there'll still be people hanging around in the lobby with the pre-registration event. Will be good to see old friends and discuss crazy things :-)
Snapper and LVM thin-provisioned Snapshots
SUSE’s Hackweek 8 allowed me to implement support for LVM thin-provisioned snapshots in snapper. Since thin-provisioned snapshots themselves are new, I will briefly show their usage.
Unfortunately openSUSE 12.2 RC1 does not include LVM tools with thin-provisioning, so you have to compile them on your own. First install the thin-provisioning-tools, then build LVM with thin-provisioning enabled (configure option --with-thin=internal).
To setup LVM we first have to create a volume group either using the LVM tools or YaST. I assume it’s named test. Then we create a storage pool with 3GB space.
# modprobe dm-thin-pool
# lvcreate --thin test/pool --size 3G
Now we can create a thin-provisioned logical volume named thin with a size of 5GB. The size can be larger than the pool since data is only allocated from the pool when needed.
# lvcreate --thin test/pool --virtualsize 5G --name thin
# mkfs.ext4 /dev/test/thin
# mkdir /thin
# mount /dev/test/thin /thin
Finally we can create a snapshot from the logical volume.
# lvcreate --snapshot --name thin-snap1 /dev/test/thin
# mkdir /thin-snapshot
# mount /dev/test/thin-snap1 /thin-snapshot
Space for the snapshot is also allocated from the pool when needed. The command lvs gives an overview of the allocated space.
# lvs
  LV         VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  pool       test twi-a-tz 3.00g             4.24
  thin       test Vwi-aotz 5.00g pool        2.54
  thin-snap1 test Vwi-a-tz 5.00g pool thin   2.54
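Since a thin pool can be over-committed, it is worth watching that Data% figure. A small sketch of my own (the 80% threshold and the function name are assumptions, not from the post), fed with the value that lvs --noheadings -o data_percent test/pool would print:

```shell
# Sketch: warn when a thin pool is nearly full.
# The 80% threshold and the lvs invocation in the comment are assumptions.
check_pool() {
    pct=$1          # e.g. "$(lvs --noheadings -o data_percent test/pool)"
    threshold=80
    if [ "${pct%.*}" -ge "$threshold" ]; then
        echo "WARNING: thin pool is ${pct}% full"
    else
        echo "thin pool OK (${pct}% used)"
    fi
}

check_pool 4.24
```

Running out of pool space is the one real danger of thin provisioning, so a cron job around something like this is a sensible safety net.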
After installing snapper version 0.0.12 or later we can create a config for the logical volume thin.
# snapper -c thin create-config --fstype="lvm(ext4)" /thin
As a simple test we can create a new file and see that snapper detects its creation.
# snapper -c thin create --command "touch /thin/lenny"
# snapper -c thin list
Type   | # | Pre # | Date                          | Cleanup | Description | Userdata
-------+---+-------+-------------------------------+---------+-------------+---------
single | 0 |       |                               |         | current     |
pre    | 1 |       | Tue 24 Jul 2012 15:49:51 CEST |         |             |
post   | 2 | 1     | Tue 24 Jul 2012 15:49:51 CEST |         |             |
# snapper -c thin status 1..2
+... /thin/lenny
So now you can use snapper even if you don’t trust btrfs. Feedback is welcome.
[gsoc] osc2 client – summary of week 9
Hi,
here’s a small summary of the 9th (coding) week. Last week I worked on the fetcher and cache manager code. In order to support all features, some of the existing classes had to be enhanced with some more parameters.
Done:
- cache manager code
- BinaryList class supports view=cpio
- RORemoteFile class supports a lazy_open parameter (by default the file is opened lazily, that is, when a read request is issued; for the fetcher code we use lazy_open=False)
- minor changes in the httprequest module (AbstractHTTPRequest supports the same query parameter more than once)
The fetcher code is more or less done (not yet committed) and will be finished by Friday evening (I have some exams this week…).
Marcus
Self-Made Router-Powered Repository Mirror
After migrating most desktop PCs at work, at home, and of some friends to openSUSE with the upstream KDE repo, it was always a time-consuming task to keep all these systems up to date. I was most bothered by downloading the same files over and over again, which sometimes happened to be quite slow, especially via WLAN.
Fortunately, a couple of months ago I got the Linux-based router AVM Fritz!Box 7270, which can be rooted with a modified firmware. The customizable firmware is provided by the Freetz project and allows you to combine different modular packages to add functionality.
After playing a little bit with Freetz, I even got a lighttpd webserver with ruby and caching capabilities running. Of course, the original firmware as well as Freetz allow the configuration of dynamic DNS services. :wink:
I ended up with a configuration including:
- dropbear ssh server (allows remote login via ssh – otherwise only telnet is supported)
- automount-scripts supporting ext3 and ext4 (latter is not supported by AVM firmware)
- nfs including a CGI configuration web page
- rsync
I formatted an external hard drive with ext4 and attached it to the router. The file mirror-rsync.sh needs to be copied to an arbitrary folder on the external drive.
# file: 'mirror-rsync.sh'
#!/bin/sh
set -o errexit
cd /var/media/ftp/uStor02/opensuse
echo -e "\nII: Start time: $(date)\nII: directory: $(pwd)" 2>&1 | tee -a log.txt
trap - INT
# rsync -vaH --delete --delete-after --progress --partial --size-only --delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300 ftp-1.gwdg.de::pub/opensuse/update/12.1/ update/12.1 2>&1 | tee -a log.txt
rsync -vaH --delete --delete-after --progress --partial --size-only --delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300 ftp.halifax.rwth-aachen.de::opensuse/update/12.1/ update/12.1 2>&1 | tee -a log.txt
# rsync -vaH --delete --delete-after --progress --partial --size-only --delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300 ftp5.gwdg.de::pub/opensuse/repositories/KDE:/Release:/48/openSUSE_12.1/ repositories/KDE:/Release:/48/openSUSE_12.1 2>&1 | tee -a log.txt
rsync -vaH --delete --delete-after --progress --partial --size-only --delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300 ftp.halifax.rwth-aachen.de::opensuse/repositories/KDE:/Release:/48/openSUSE_12.1/ repositories/KDE:/Release:/48/openSUSE_12.1 2>&1 | tee -a log.txt
rsync -vaH --delete --delete-after --progress --partial --size-only --delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300 packman.inode.at::packman/suse/12.1/Essentials/ repositories/packman/12.1/Essentials 2>&1 | tee -a log.txt
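Since every entry repeats the same option list, the script could also be factored into a small helper. This is a refactoring sketch of my own – the option list and mirror paths come from the script above, while the mirror() function itself is an assumption:

```shell
#!/bin/sh
# Sketch: factor the repeated rsync invocation into one helper.
# Options and mirror paths are from the script above; mirror() is my own.
RSYNC_OPTS="-vaH --delete --delete-after --progress --partial --size-only \
--delay-updates --include-from=include.txt --exclude-from=exclude.txt --timeout=300"

mirror() {
    # $1 = rsync source (remote module path), $2 = local target directory
    rsync $RSYNC_OPTS "$1" "$2" 2>&1 | tee -a log.txt
}

# mirror ftp.halifax.rwth-aachen.de::opensuse/update/12.1/ update/12.1
# mirror packman.inode.at::packman/suse/12.1/Essentials/ repositories/packman/12.1/Essentials
```

Adding a new repository then means adding one mirror line instead of copying a whole command.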
To lower the data transfer, I tuned my rsync commands to download only x86_64 packages as well as only German language packages.
# file: 'include.txt'
*l10n-de*x86_64*
*l10n-de*noarch*
*wine*
*libqt4-debuginfo*x86_64*
*libkdepimlibs4-debuginfo*x86_64*
*libkdecore4-debuginfo*x86_64*
*libkde4-debuginfo*x86_64*
*kdepim4-debuginfo*x86_64*
*libqt4-x11-debuginfo*x86_64*
*amarok-debuginfo*x86_64*
*kmail-debuginfo*x86_64*
*libakonadi4-debuginfo*x86_64*
# file: 'exclude.txt'
src
*i686.rpm
*i686.drpm
*i586.rpm
*i586.drpm
*buildsymbols*
ia64
*debuginfo*
*debugsource*
*l10n*
To finish the router setup, the only thing left is to add a cron job (supported by the Freetz interface) which runs the script daily – preferably during the night.
The original firmware provides SMB access to files on the hard drive, which can be quite slow, so I configured NFS to access these repositories. There is a German blog post with benchmarks comparing SMB and NFS.
Finally, the new repositories have to be activated. The priority is set to 90, which overranks the original repos, so those don’t have to be disabled.
# file: 'client.sh'
#!/bin/sh
zypper ar --check --no-keep-packages --no-gpgcheck --type rpm-md --name fritzbox-12.1-update nfs://fritz.box:/var/media/ftp/uStor02/opensuse/update/12.1?mountoptions=vers=3 fritzbox-12.1-update
zypper ar --check --no-keep-packages --no-gpgcheck --type rpm-md --name fritzbox-12.1-kde48 nfs://fritz.box:/var/media/ftp/uStor02/opensuse/repositories/KDE:/Release:/48/openSUSE_12.1?mountoptions=vers=3 fritzbox-12.1-kde48
zypper ar --check --no-keep-packages --no-gpgcheck --type rpm-md --name fritzbox-12.1-packman nfs://fritz.box:/var/media/ftp/uStor02/opensuse/repositories/packman/12.1/Essentials?mountoptions=vers=3 fritzbox-12.1-packman
zypper mr -p 90 fritzbox-12.1-update
zypper mr -p 90 fritzbox-12.1-kde48
zypper mr -p 90 fritzbox-12.1-packman
The most important thing here is to append ?mountoptions=vers=3 to all URLs. The Freetz NFS build doesn’t support NFS version 4, and zypper fails to auto-detect this.
So the next time I want to bring the latest KDE release to a friend, I just have to unplug my hard drive and have everything with me. At work, I don’t even have to bring my own hard drive – they have their own one :p with an equivalent script which can be triggered from time to time.
We can do better
Maybe it is just me, but lately it appears that there is a lot brewing on our lists. Generally I try to stay out of the fray, but of course we, as members of our community, are all in the middle of it in one way or another. With a very recent endless thread on the opensuse list fresh in memory – not that I read all or even most of the messages it generated – and the follow-on thread of the original poster bidding his farewell to the list, supposedly because he didn’t like the responses, I feel compelled to share some of my own thoughts on the topic.
My feeling is that a good number of the people who complain about the noise on our lists are also those who contribute to that noise at a good rate. Thus, I can only say that sometimes it is nice to exercise some restraint and not hit that “Send” button; who hasn’t had the “I shouldn’t have sent this” thought? The bottom line is that it is almost impossible to write anything that does not step on somebody’s toes somewhere along the way, and if everyone who feels the least bit uneasy about some comment responded all the time, we would really be in an endless loop. I am certain this post will make someone uneasy, upset, angry or worse. If that’s the way you feel right now, I am sorry; I am not trying to make you angry or upset on purpose. Please accept my apology.
That said, who hasn’t been frustrated out of their mind by some perceived dumb software problem or other issue? Even worse when it is our own hurdle we cannot cross. In the end we just want to yell and scream, and the result is too often a message with your pick of “emotional-state-appropriate inflammatory subject…” and a question ranted at the list. Everyone gets emotional, and frustration happens to be a very strong reaction. However, the question that should come to mind before hitting that “Send” button to a list of volunteers is this: “Will an emotional reaction with moaning, groaning, whining, and complaining, that hides the real problem, give me the feedback needed to resolve my issue?” The answer is simple; no, it will not. A post charged with negative energy will – and there is plenty of proof on our lists – solicit emotional, mostly negative, responses. These do not contribute to resolving the problem at hand. However, once this storm is set in motion there is, for better or worse, no stopping it and it just has to run its course, i.e. eventually people will tire of feeding the storm that should never have happened and things will go back to “normal”, whatever that may be.
I believe that the people, the volunteers, on the openSUSE mailing lists are generally willing to help solve problems others encounter. Yes, there will be the occasional cynical remark here and there, but I do not believe these remarks are uttered in a mean-spirited way, and in the end we do not really have to nit-pick everything to death. It is one thing to yell at someone because the product or service you purchased from them does not meet your expectations; it is another thing to go bananas on a list where answers are provided by people with generally good intentions who volunteer their time. As posters of questions we all have a responsibility to keep this in mind before we go down the emotionally charged road to the abyss of not getting our questions answered.
However, putting the onus completely on the question poster is a bit too easy, isn’t it? While we as helpers would all love to get the “perfect” problem description, one that is completely factual and contains no mistakes in the description and the actions to reproduce the problem, we have to realize that this is just not going to happen. By the time an issue hits the list, the person posting is probably charged up in one way or another and ready to get rid of some of that stored energy, some more than others. Thus, as helpers we have to develop a bit more tolerance and let things roll off our backs a bit more. As potential helpers we can just ignore the ranting posters. If you have it within you to provide the answer to the hidden problem and can rise above the fray, fantastic! Help and do a good deed. However, if you know the answer to the hidden problem but feel the urge to feed the emotional storm, it may be best to just ignore the message and not respond. In the end a raging answer hides the kind deed of help just as the raging question hides the problem.
No, we do not have to have boring, no-fun lists, but in the end flame wars and endless bickering threads are not fun, and having people leave the lists or even stop using openSUSE because of silly things is just not helpful to anyone. If we as posters, seekers of answers and providers thereof, can just tone it down a bit, things will probably work better for everyone.
Offended
---
Would I write such code?
Calling it “[Microsoft] managed to make the kernel more offensive to half the population” is too much of an over-reaction to what is just a lame joke by a programmer trying to be funny. There is no need to drag Microsoft’s name into this, for the same reason we don’t accuse The Linux Foundation of propagating male-supremacy ideas when Linus Torvalds says “Do you pine for the days when men were men and wrote their own device drivers?”
Disclaimer: All opinions expressed are purely personal and do not represent my employer.
Moving on to something completely different
Last month, I got a new job! After three years in the openSUSE Boosters team, I joined the SUSE Cloud team. I'm now working on OpenStack and on SUSE Cloud itself. Quite a big change!
I had planned a long time ago that the release of GNOME 3.0 would be a good time for me to look at what’s next. When it went out, I actually took a few months to cool down a bit (it was pretty much needed), and also to have some good fun with openSUSE. But after a while, this desire to try something new came back: I had been working on the desktop for nearly ten years, and on a distribution for four years. Those were exciting years, but at the end, it started to feel like, you know, work. I wanted to stay involved in GNOME, in the free desktop in general, in openSUSE, in cross-distro collaboration: this is not just work, and it should not be just work. I didn’t want to slowly drift into doing stuff while not caring anymore. This is how I found out that I needed to go back to the early days and contribute in my free time again :-)

There was still the question of, well, work. I started looking around, and I had some good discussions with several people about what to do next (thanks to everyone who took some time for this!). I must admit I changed my mind several times. I was not necessarily looking for a developer position (quite the contrary, actually), as I knew that for me to be motivated for a new project as a developer, the project had to be one that I could care about, one that has a free software community around it, and one that would get me out of my comfort zone (so not on the desktop nor on a distro) – yeah, not easy :-) But at some point, SUSE had this cool developer position related to OpenStack. Good timing. (Btw, we’re still hiring!)
It's been great so far; of course, you need to ignore the buzz words ;-) I wanted a new challenge and I wanted to get out of my comfort zone, I got served: new project, new community, new code, etc. It didn't help that the hard disk in my laptop decided it was the perfect moment to die, and that Lenovo took weeks to send me a replacement disk (finally got it yesterday). But now I'm all set, so let's have fun!
GNU screen window title and SSH
If you want to keep track of all your GNU screen sessions, it's helpful to set the window title to the current machine you ssh'd into. One way to achieve this is to put this into your .zshrc (.profile should work too): # Set/reset hostname in screen window title SSH'd machine: if [ $TERM = "screen" …
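The post's own snippet is cut off above. As a stand-in, here is a minimal sketch of my own (not the author's code): it uses screen's "ESC k title ESC \" escape sequence to set the window title, and wraps ssh so the title shows the target host. The function names and the assumption that the first argument is the host are mine:

```shell
# Stand-in sketch (not the truncated original): set the GNU screen window
# title via screen's "ESC k <title> ESC \" escape sequence.
set_screen_title() {
    printf '\033k%s\033\\' "$1"
}

# Wrap ssh so the screen window title shows the target host.
# Assumption: the first argument is the host name.
ssh() {
    [ "$TERM" = "screen" ] && set_screen_title "$1"
    command ssh "$@"
}
```

Outside of screen the wrapper just falls through to plain ssh, so it is safe to keep in your .zshrc unconditionally.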
Midterm completed
I successfully completed my midterm evaluation – it's awesome! I just can't stop staring at that figure in my account :D
So far, I have completed integration of Karma with Twitter, Bugzilla, Planet openSUSE and the Build Service. The Build Service part, though complete, still needs modifications due to some trouble with the approach I am following. Using the API to get the commit history and parsing the changes file associated with each package both have side effects, so I'm still looking for a better solution.
I also have plans to reward people on the forums. People providing technical help to others deserve to be rewarded, I think. I haven't researched how to go about it yet, so I won't commit to anything, but let's hope it works out.
The karma plugin is deployed on Connect and working, so pull that widget onto your dashboard or your profile and have fun. I made a post about the karma widget not working, but completely forgot to post when the issue was resolved – my bad :)
So try it out, and please let me know whether you like it or dislike it, or leave any suggestions if you think I can make it better in any way.
You can even check my code at Karma. Let me know what you feel.
