
the avatar of Andres Silva

Crystals Wallpaper

Hello friends!

Here is another wallpaper that I have been working on recently. I want to make this one part of the supplemental wallpaper package for openSUSE 13.1. I hope it captures the idea of simplicity while keeping the dark colors introduced in openSUSE 12.2.

The idea here is to show simple crystals with some light effects. Viewed from a distance it stands out, and I think it enhances your work by being unobtrusive and "blingy" at the same time.

Please remember that you can also turn in your submissions for the next release of openSUSE. Everyone is invited to work on an image that they would like to be considered for the wallpaper. Something to remember is that if you are making an image proposal for the "DEFAULT" wallpaper, your work will have to be done in SVG format. If the image proposal you are making is intended for the "SUPPLEMENTAL" wallpaper package, you can do it in raster as well as SVG. If done in raster, you will have to provide copies of your work in 4 different resolutions: 2560×1440, 2560×1600, 2560×2048 and 2048×1536.
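If you work in raster, the four required sizes can be exported from one large master image; a minimal sketch using ImageMagick (the file name crystals.png is my own placeholder, and the tool must be installed):

```shell
# Hypothetical master file; requires ImageMagick's `convert`.
# "^" scales to fill each target size, then -extent center-crops to the exact resolution.
for res in 2560x1440 2560x1600 2560x2048 2048x1536; do
    convert crystals.png -resize "${res}^" -gravity center -extent "$res" "crystals-${res}.png"
done
```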



Thank you for your work.

Anditosan


Apache Subversion 1.8 preview packages

RPM packages of what will become Apache Subversion 1.8 fairly soon are now available for testing on all current releases of openSUSE and SLE 11.

Note that in this release, serf will replace neon as the default HTTP library, to the extent that the latter is removed completely. I wrote about ra_serf before and added support for it in recent packages. You can test this now with either 1.7 or 1.8 if you are concerned about performance on your network. Please note that for servers running httpd and mod_dav_svn, increasing MaxKeepAliveRequests is highly recommended.
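For httpd that tuning is a one-line change; an illustrative snippet (the exact value is a guess, pick one suited to your traffic):

```apache
# /etc/apache2/httpd.conf -- keep persistent connections alive long enough
# for serf's many small pipelined requests (the value here is illustrative).
KeepAlive On
MaxKeepAliveRequests 1000
```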

Update: Apache Subversion 1.8 is now released. You can find maintained packages via the software search in the devel:tools:scm:svn project. This will be part of the next release of openSUSE.


openSUSE Conference 2013: Call for Papers extended until June 17

Deadlines… Originally, the call for papers for the openSUSE Conference 2013 (oSC13), our community's annual gathering, ended on April 3.

However, some of you seem to have missed the deadline and there are still a handful of slots left to fill, so we are extending the call for proposals until Monday, June 17, 24:00.

That said, there had to be a "but": we expect the program to fill up quickly, so get your act together and submit your proposals as soon as possible!

What we are looking for

Your submission should be a talk, a presentation with slides, or a workshop that guides people through a hands-on lab experience. The focus of your submission should be one of the following 3 topics:

Community and Projects

Submissions in this area should focus on openSUSE project and community activities, including but not limited to project governance, marketing, artwork, ambassador reports and so on.

Geeko Tech

Submissions in this area should focus on openSUSE technologies such as packaging, the distribution, openSUSE infrastructure, etc.

openWorld

In this area, we invite other free software projects to share their work and collaborate with the openSUSE community. Contributions are not limited to technical content; you can choose to talk about your favorite pet project, such as building a boat, a robot, or other topics of interest.

And while we're at it, don't forget to register!

Registration will remain open until the event begins, but we urge you to register as soon as you can! Registrations help us negotiate with the venue, hotels and other suppliers, which makes it easier for us to plan for catering and the right amount of party fun during oSC13.

And remember: you can support oSC13 by purchasing a supporter ticket ($50) or a professional ticket ($250) during registration. The funds from these ticket sales are a very important part of the overall conference budget!

Power to the Geeko!

The openSUSE Conference is the annual gathering of the many who support the openSUSE project and of other free software contributors and enthusiasts. The event in Thessaloniki will be our fifth conference, and we expect it to once again be a great success. Talks, workshops and discussions of common interest form the framework for exchanging information and knowledge, providing a great environment for collaboration and for building lasting connections and memories.

This year's conference motto, "Power to the Geeko", connects us with the past of our host country while looking to the future, as we continue on our path to change the world.

Let's have fun!

The Greek philosophers were part of a revolution that changed the world. So are we, and so, under the motto "Power to the Geeko", we gather and work on our revolution. Let's get the gears turning: submit your session proposals, register your attendance, help us find sponsors, and make the next openSUSE Conference an awesome event.

 


One More chef-client Run

Carrying on from my last post, the failed chef-client run came down to the init script in ceph 0.56 not yet knowing how to iterate /var/lib/ceph/{mon,osd,mds} and automatically start the appropriate daemons. This functionality seems to have been introduced in 0.58 or so by commit c8f528a. So I gave it another shot with a build of ceph 0.60.

On each of my ceph nodes, a bit of upgrading and cleanup. Note that the choice of ceph 0.60 was mostly arbitrary; I just wanted the latest thing I could find an RPM for in a hurry. Also, some of the rm invocations won’t be necessary, depending on what state things are actually in:

# zypper ar -f http://download.opensuse.org/repositories/home:/dalgaaf:/ceph:/extra/openSUSE_12.3/home:dalgaaf:ceph:extra.repo
# zypper ar -f http://gitbuilder.ceph.com/ceph-rpm-opensuse12-x86_64-basic/ref/next/x86_64/ ceph.com-next_openSUSE_12_x86_64
# zypper in ceph-0.60
# kill $(pidof ceph-mon)
# rm /etc/ceph/*
# rm /var/run/ceph/*
# rm -r /var/lib/ceph/*/*

That last command gets rid of any half-created mon directories.

I also edited the Ceph environment to only have one mon (one of my colleagues rightly pointed out that you need an odd number of mons, and I had declared two previously, for no good reason). That’s knife environment edit Ceph on my desktop, and set "mon_initial_members": "ceph-0" instead of "ceph-0,ceph-1".

I also had to edit each of the nodes, to add an osd_devices array to each node, and remove the mon role from ceph-1. That’s knife node edit ceph-0.example.com then insert:

  "normal": {
    ...
    "ceph": {
      "osd_devices": [  ]
    }
  ...

Without the osd_devices array defined, the osd recipe fails (“undefined method `each_with_index’ for nil:NilClass”). I was kind of hoping an empty osd_devices array would allow ceph to use the root partition. No such luck, the cookbook really does expect you to be doing a sensible deployment with actual separate devices for your OSDs. Oh, well. I’ll try that another time. For now at least I’ve demonstrated that ceph-0.60 does give you what appears to be a clean mon setup when using the upstream cookbooks on openSUSE 12.3:

knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-15T06:32:13+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-15T06:32:13+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-15T06:32:13+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-15T06:32:13+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-15T06:32:13+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-15T06:32:13+00:00] INFO: Running start handlers
[2013-04-15T06:32:13+00:00] INFO: Start handlers complete.
[2013-04-15T06:32:13+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-15T06:32:13+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-15T06:32:13+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-15T06:32:13+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-15T06:32:14+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-15T06:32:15+00:00] INFO: service[ceph_mon] started
[2013-04-15T06:32:15+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
mon already active; ignoring bootstrap hint

[2013-04-15T06:32:16+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-15T06:32:16+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-15 06:32:16.872040 7fca8e297780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-15 06:32:16.872042 7fca8e297780 -1 unable to authenticate as client.admin
2013-04-15 06:32:16.872400 7fca8e297780 -1 ceph_tool_common_init failed.
[2013-04-15T06:32:18+00:00] INFO: ruby_block[get osd-bootstrap keyring] called
[2013-04-15T06:32:18+00:00] INFO: Processing package[gdisk] action upgrade (ceph::osd line 37)
[2013-04-15T06:32:27+00:00] INFO: package[gdisk] upgraded from uninstalled to 
[2013-04-15T06:32:27+00:00] INFO: Processing service[ceph_osd] action nothing (ceph::osd line 48)
[2013-04-15T06:32:27+00:00] INFO: Processing directory[/var/lib/ceph/bootstrap-osd] action create (ceph::osd line 67)
[2013-04-15T06:32:27+00:00] INFO: Processing file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] action create (ceph::osd line 76)
[2013-04-15T06:32:27+00:00] INFO: entered create
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] owner changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] group changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] mode changed to 440
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] created file /var/lib/ceph/bootstrap-osd/ceph.keyring.raw
[2013-04-15T06:32:27+00:00] INFO: Processing execute[format as keyring] action run (ceph::osd line 83)
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
added entity client.bootstrap-osd auth auth(auid = 18446744073709551615 key=AQAOl2tR0M4bMRAAatSlUh2KP9hGBBAP6u5AUA== with 0 caps)
[2013-04-15T06:32:27+00:00] INFO: execute[format as keyring] ran successfully
[2013-04-15T06:32:28+00:00] INFO: Chef Run complete in 14.479108446 seconds
[2013-04-15T06:32:28+00:00] INFO: Running report handlers
[2013-04-15T06:32:28+00:00] INFO: Report handlers complete

Witness:

ceph-0:~ # rcceph status
=== mon.ceph-0 === 
mon.ceph-0: running {"version":"0.60-468-g98de67d"}

On the note of building an easy-to-deploy Ceph appliance: assuming you’re not using Chef and just want something to play with, I reckon the way to go is to use a config pretty similar to what would be deployed by this Chef cookbook, i.e. an absolutely minimal /etc/ceph/ceph.conf specifying nothing other than the initial mons, then use the various Ceph CLI tools to create mons and osds on each node, and just rely on the init script in Ceph >= 0.58 to do the right thing with what it finds (having to explicitly name each mon, osd and mds in the Ceph config always bugged me). Bonus points for using csync2 to propagate /etc/ceph/ceph.conf across the cluster.
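Such a minimal /etc/ceph/ceph.conf might look like the following sketch, reusing the fsid and single mon from this post (the mon address is the one that appeared in the log output above):

```
[global]
    fsid = f80aba97-26c5-4aa3-971e-09c5a3afa32f
    mon initial members = ceph-0
    mon host = 192.168.4.118
```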


Diagnosing the Linux Kernel

Failures in the Linux kernel lead to extremely varied symptoms. Some authors give the following (incomplete) classification of the problems a user may run into:
  • kernel oops — a problem while executing code in the kernel, for example somebody tried to dereference a NULL pointer. Which C programmer hasn't done that?
  • kernel panic — somebody tried to dereference a NULL pointer in an interrupt handler. After that the kernel goes completely unresponsive and starts blinking Caps Lock.
  • soft lockup — something got stuck (probably after running into a lock that was never released), but interrupts are still being serviced. A good exercise is to ping the machine or try to toggle Num Lock.
  • hard lockup — the computer hums its fans and reacts to nothing. Diagnosing anything in this state is especially hard. The problem may be caused by failing hardware or by careless interrupt handling.
In addition, one can add:
  • hung — a cascade, induced by incorrect locking, in which all or most of the system's processes end up in the TASK_UNINTERRUPTIBLE state (also known as the D state, after its designation in top or ps). According to R. Love's book, such a "dread-inspiring" process cannot be killed, terminated, or acted upon in any way. From userspace's point of view, the program is simply blocked in some system call (I/O, for example). If all of the system's critical processes end up in this state, the symptoms can resemble a soft lockup.
Getting as much detail as possible about the nature of the problem matters both for writing a proper bug report and for temporarily reconfiguring the system to avoid the problematic code path and improve its resilience.

Fortunately, the kernel has a certain set of basic self-diagnostic facilities, suitable for making a rough diagnosis. In line with what is apparently a universal philosophical principle, the simplest subsystems are the most stable and reliable, so it makes sense to look for kernel messages on the text and serial consoles. Of course, given enough luck the message you need will be handed to the syslog daemon, written to disk, forwarded over the network to a remote syslog daemon, and so on; given insufficient luck, the monitor (and your phone's camera), the keyboard and the COM port are your last hope.

printk


One of the kernel's mechanisms for communicating with the outside world is printk. If it is misconfigured, a message will most likely simply be lost as not important enough. Configuration is done via /proc/sys/kernel/printk, the corresponding sysctl parameter, the klogconsole utility, the SysRq keys, and so on. The kernel has eight message severity levels, so setting the console level to 8 is guaranteed to print everything to the console.
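As a sketch, raising the console log level so nothing is filtered out (run as root; the first field of kernel.printk is the current console loglevel):

```shell
cat /proc/sys/kernel/printk          # console, default, minimum and boot-time loglevels
echo 8 > /proc/sys/kernel/printk     # print everything to the console
# equivalently: sysctl -w kernel.printk="8 4 1 7"
```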

sysrq


The magic key is enabled via /proc/sys/kernel/sysrq or the corresponding sysctl parameter. The suggestion is to simply write 1 there, even though newer kernels allow fine-grained control over which functions are available. In this situation there is no reason to limit yourself, although for normal operation it is sensible to restrict the functionality in case of accidental keypresses. If a working terminal session remains (a remote ssh session, for example), you can use, say, echo b > /proc/sysrq-trigger as an alternative; on a serial console the behaviour is triggered with the 'break' key.

It is important to remember that SysRq-w, for example, emits its message at the KERN_INFO level, so printk output must be configured correctly for anything to be visible. See the full list of SysRq commands.
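A quick sketch of the /proc route (run as root):

```shell
echo 1 > /proc/sys/kernel/sysrq      # enable all SysRq functions
echo w > /proc/sysrq-trigger         # dump blocked (D-state) tasks, like Alt+SysRq+w
dmesg | tail -n 40                   # the dump arrives at KERN_INFO level
```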

softlockup_panic


The softlockup_panic kernel parameter, or the kernel.softlockup_panic sysctl parameter, makes the kernel panic when a soft lockup is detected. (Its internal workings are described elsewhere.) Enabling the panic stops the system so that the output, complete with a backtrace, can be analyzed; the lockup check itself runs continuously. There is an analogous hard lockup detector for multiprocessor systems (if live CPUs remain, there is a chance it will fire), with the corresponding hardlockup_panic parameter. Detection usually happens within about a minute.

hung_task_timeout_secs


A mechanism for tracking processes stuck for a long time in the TASK_UNINTERRUPTIBLE state; its parameters are configured via sysctl and /proc:
  • kernel.hung_task_panic — enables/disables a kernel panic when an uninterruptible process is detected;
  • kernel.hung_task_warnings — a message counter: if the panic is disabled, this many hung processes will be reported;
  • kernel.hung_task_timeout_secs — how many seconds a process must spend continuously in TASK_UNINTERRUPTIBLE to trigger a report. By default this is either disabled (0) or a very large number, on the order of 10 minutes.
The reports come with a backtrace that hints at which subsystem is failing.
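A sketch of a non-panicking watchdog configuration (run as root; the two-minute timeout is an arbitrary example):

```shell
sysctl -w kernel.hung_task_panic=0           # report, don't panic
sysctl -w kernel.hung_task_warnings=10       # report up to ten hung tasks
sysctl -w kernel.hung_task_timeout_secs=120
# reports show up in the kernel log as "task ... blocked for more than 120 seconds"
```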

the avatar of Flavio Castelli

Docker and openSUSE: a container full of Geekos

SUSE’s Hackweek #9 is over. It has been an awesome week during which I worked hard to make docker a first class citizen on openSUSE. I also spent some time working on an openSUSE container that could be used by docker’s users.

The project has been tracked on this page of hackweek’s wiki; here is a detailed report of what I achieved.

Installing docker on openSUSE 12.3

Docker has been packaged inside of this OBS project.

So installing it requires just two commands:

sudo zypper ar http://download.opensuse.org/repositories/home:/flavio_castelli:/docker/openSUSE_12.3 docker
sudo zypper in docker

There’s also a 1 Click Install for the lazy ones :)

Zypper will install docker and its dependencies which are:

  • lxc: docker’s “magic” is built on top of LXC.
  • bridge-utils: used to set up the bridge interface used by docker’s containers.
  • dnsmasq: used to start the DHCP server used by the containers.
  • iptables: used to get the containers’ networking working.
  • bsdtar: used by docker to compress/extract the containers.
  • aufs3 kernel module: used by docker to track the changes made to the containers.

The aufs3 kernel module is not part of the official kernel and is not available in the official repositories. Hence, adding docker will trigger the installation of a new kernel package on your machine.

Both the kernel and the aufs3 packages are going to be installed from the home:flavio_castelli:docker repository but they are in fact links to the packages created by Michal Hrusecky inside of his aufs project on OBS.

Note well: docker works only on 64bit hosts. That’s why there are no 32bit packages.

Docker appliance

If you don’t want to install docker on your system or you are just curious and want to jump straight into action there’s a SUSE Studio appliance ready for you. You can find it here.

If you are not familiar with SUSE Gallery, let me tell you a few things about it:

  • you can download the appliance on your computer and play with it or…
  • you can clone the appliance on SUSE Studio and customize it even further or…
  • you can test the appliance from your browser using SUSE Studio’s Testdrive feature (no SUSE Studio account required!).

The latter option is really cool, because it will allow you to play with docker immediately. There’s just one thing to keep in mind about Testdrive: outgoing connections are disabled, so you won’t be able to install new stuff (or download new docker images). Fortunately this appliance comes with the busybox container bundled, so you will be able to play a bit with docker.
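Since the busybox image is bundled, a first smoke test inside Testdrive could be as simple as this (the exact command is an illustrative example):

```shell
sudo docker run busybox echo "hello from a container"
```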

Running docker

The docker daemon must be running in order to use your containers. The openSUSE package comes with an init script which can be used to manage it.

The script is /etc/init.d/docker, but there’s also the usual symbolic link called /usr/sbin/rcdocker.

To start the docker daemon just type:

sudo /usr/sbin/rcdocker start

This will trigger the following actions:

  1. The docker0 bridge interface is created. This interface is bridged with eth0.
  2. A dnsmasq instance listening on the docker0 interface is started.
  3. IP forwarding and IP masquerading are enabled.
  4. Docker daemon is started.

All the containers will get an IP on the 10.0.3.0/24 network.
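To sanity-check the result of those four steps, something along these lines should work (output details will vary by system):

```shell
/sbin/ip addr show docker0            # the bridge is up, with an address on 10.0.3.0/24
sudo /usr/sbin/iptables -t nat -L -n  # masquerading rule for container traffic
ps aux | grep dnsmasq                 # dnsmasq instance bound to docker0
```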

Playing with docker

Now it’s time to play with docker.

First of all you need to download an image: docker pull base

This will fetch the official Ubuntu-based image created by the dotCloud guys.

You will be able to run the Ubuntu container on your openSUSE host without any problem; that’s LXC’s “magic” ;)

If you want to use only “green” products just pull the openSUSE 12.3 container I created for you:

docker pull flavio/opensuse-12-3

Please experiment a lot with this image and give me your feedback. The dotCloud guys proposed promoting it to a top-level base image, but I want to be sure everything works fine before doing that.

Now you can go through the examples in the official docker documentation.

Create your own openSUSE images with SUSE Studio

I think it would be extremely cool to create docker images using SUSE Studio. As you might know I’m part of the SUSE Studio team, so I looked a bit into how to add support for this new format.

– personal opinion –

There are some technical challenges to solve, but I don’t think it would be hard to address them.

– personal opinion –

If you are interested in adding the docker format to SUSE Studio, please create a new feature request on openFATE and vote for it!

In the meantime there’s another way to create your custom docker images, just keep reading.

Create your own openSUSE images with KIWI

KIWI is the amazing tool at the heart of SUSE Studio and can be used to create LXC containers.

As said earlier docker runs LXC containers, so we are going to follow these instructions.

First of all install KIWI from the Virtualization:Appliances project on OBS:

sudo zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_12.3 virtualization:appliances
sudo zypper in kiwi kiwi-doc

We are going to use the configuration files of a simple LXC container shipped with the kiwi-doc package:

cp -r /usr/share/doc/packages/kiwi/examples/suse-11.3/suse-lxc-guest ~/openSUSE_12_3_docker

The openSUSE_12_3_docker directory contains two configuration files used by KIWI (config.sh and config.xml) plus the root directory.

The contents of this directory are going to be added to the resulting container. It’s really important to create the /etc/resolv.conf file inside the final image, since docker is going to mount the host system’s resolv.conf file inside the running guest. If the file is not found, docker won’t be able to start our container.

An empty file is enough:

touch ~/openSUSE_12_3_docker/root/etc/resolv.conf

Now we can create the rootfs of the container using KIWI:

sudo /usr/sbin/kiwi --prepare ~/openSUSE_12_3_docker --root /tmp/openSUSE_12_3_docker_rootfs

We can skip the next step described in KIWI’s documentation; it’s not needed with docker and would produce an invalid container. Just execute the following command:

sudo tar cvjpf openSUSE_12_3_docker.tar.bz2 -C /tmp/openSUSE_12_3_docker_rootfs/ .

This will produce a tarball containing the rootfs of your container.

Now you can import it into docker; there are two ways to achieve that:

  • from a web server.
  • from a local file.

Importing the image from a web server is really convenient if you ran KIWI on a different machine.

Just move the tarball to a directory which is exposed by the web server. If you don’t have one installed just move to the directory containing the tarball and type the following command:

python -m SimpleHTTPServer 8080

This will start a simple http server listening on port 8080 of your machine.

On the machine running docker just type:

docker import http://mywebserver/openSUSE_12_3_docker.tar.bz2 my_openSUSE_image latest

If the tarball is already on the machine running docker you just need to type:

cat ~/openSUSE_12_3_docker.tar.bz2 | docker import - my_openSUSE_image latest

Docker will download (in the first case only) and import the tarball. The resulting image will be named ‘my_openSUSE_image’ and it will have a tag named ‘latest’.

The name of the tag is really important since docker tries to run the image with the ‘latest’ tag unless you explicitly specify a different value.
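So, assuming the rootfs built above contains a standard openSUSE 12.3 userland, the imported image can now be run without naming the tag (a hypothetical smoke test):

```shell
docker run my_openSUSE_image cat /etc/SuSE-release   # implicitly uses the 'latest' tag
```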

Conclusion

Hackweek #9 has been both productive and fun for me. I hope you will have fun too using docker on openSUSE.

As usual, feedback is welcome.


The Ceph Chef Experiment

Sometimes it’s most interesting to just dive in and see what breaks. There’s a Chef cookbook for Ceph on github which seems rather more recently developed than the one in SUSE-Cloud/barclamp-ceph, and seeing as its use is documented in the Ceph manual, I reckon that’s the one I want to be using. Of course, the README says “Tested as working: Ubuntu Precise (12.04)”, and I’m using openSUSE 12.3…

First things first: I need a Chef server, so I installed openSUSE 12.3 in a VM, then installed Chef 10 on that, roughly following the manual installation instructions. Note for those following along at home: sometimes the blocks I’ve copied here are just commands, and sometimes they include command output as well. You’ll figure it out 🙂

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/chef:/10/openSUSE_12.3/systemsmanagement:chef:10.repo
# zypper in rubygem-chef-server
# chkconfig couchdb on
# rccouchdb start
# chkconfig rabbitmq-server on
# rcrabbitmq-server start
# rabbitmqctl add_vhost /chef
# rabbitmqctl add_user chef testing
# rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"
# for service in solr expander server server-webui; do
      chkconfig chef-$service on
      rcchef-$service start
  done

I didn’t bother editing /etc/chef/server.rb, the config as shipped works fine (not that the AMQP password is very secure, mind). The only catch is the web UI didn’t start. IIRC this is due to /etc/chef/webui.pem not existing yet (chef-server creates it, but this doesn’t finish until later).

Then configured knife:

# knife configure -i
WARNING: No knife configuration file found
Where should I put the config file? [/root/.chef/knife.rb]
Please enter the chef server URL: [http://os-chef.example.com:4000]
Please enter a clientname for the new client: [root]
Please enter the existing admin clientname: [chef-webui]
Please enter the location of the existing admin client's private key: [/etc/chef/webui.pem]
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef/validation.pem]
Please enter the path to a chef repository (or leave blank):
Creating initial API user...
Created client[root]
Configuration file written to /root/.chef/knife.rb

And make a client for me:

# knife client create tserong -d -a -f /tmp/tserong.pem
Created client[tserong]

Then set up my desktop as a Chef workstation (roughly following these docs, and again pulling Chef from systemsmanagement:chef:10 on OBS):

# sudo zypper in rubygem-chef
# cd ~
# git clone git://github.com/opscode/chef-repo.git
# cd chef-repo
# mkdir -p ~/.chef
# scp root@os-chef:/etc/chef/validation.pem ~/.chef/
# scp root@os-chef:/tmp/tserong.pem ~/.chef/
# knife configure
WARNING: No knife configuration file found
Where should I put the config file? [/home/tserong/.chef/knife.rb]
Please enter the chef server URL: [http://desktop.example.com:4000] http://os-chef.example.com:4000
Please enter an existing username or clientname for the API: [tserong]
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef/validation.pem] /home/tserong/.chef/validation.pem
Please enter the path to a chef repository (or leave blank): /home/tserong/chef-repo
[...]
Configuration file written to /home/tserong/.chef/knife.rb

Make sure it works:

# knife client list
chef-validator
chef-webui
root
tserong

Grab the cookbooks and upload them to the Chef server. The Ceph cookbook claims to depend on apache and apt, although presumably the former is only necessary for RADOSGW, and the latter for Debian-based systems. Anyway:

# cd ~/chef-repo
# git submodule add git@github.com:opscode-cookbooks/apache2.git cookbooks/apache2
# git submodule add git@github.com:opscode-cookbooks/apt.git cookbooks/apt
# git submodule add git@github.com:ceph/ceph-cookbooks.git cookbooks/ceph
# knife cookbook upload apache2
# knife cookbook upload apt
# knife cookbook upload ceph

Boot up a couple more VMs to be Ceph nodes, using the appliance image from last time. These need chef-client installed, and need to be registered with the chef server. knife bootstrap will install chef-client and its dependencies for you, but after looking at the source, if /usr/bin/chef doesn’t exist it actually uses wget or curl to pull http://opscode.com/chef/install.sh and runs that. How this is considered a good idea is completely baffling to me, so again I installed our chef build from OBS on each of my Ceph nodes (note to self: should add this to the appliance image on Studio):

# zypper ar -f http://download.opensuse.org/repositories/systemsmanagement:/chef:/10/openSUSE_12.3/systemsmanagement:chef:10.repo
# zypper in rubygem-chef

And ran the now-arguably-safe knife bootstrap from my desktop:

# knife bootstrap ceph-0.example.com
Bootstrapping Chef on ceph-0.example.com
[...]
# knife bootstrap ceph-1.example.com
Bootstrapping Chef on ceph-1.example.com
[...]

Then, roughly following the Ceph Deploying with Chef document.

Generate a UUID and monitor secret (had to do the latter on one of my Ceph VMs, as ceph-authtool is conveniently already installed):

# uuidgen -r
f80aba97-26c5-4aa3-971e-09c5a3afa32f
# ceph-authtool /dev/stdout --name=mon. --gen-key
[mon.]
key = AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g==

Then on my desktop:

knife environment create Ceph

This I filled in with:

{
  "name": "Ceph",
  "description": "",
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
    "ceph": {
      "monitor-secret": "AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g==",
      "config": {
        "fsid": "f80aba97-26c5-4aa3-971e-09c5a3afa32f",
        "mon_initial_members": "ceph-0,ceph-1",
        "global": {
        },
        "osd": {
          "osd journal size": "1000",
          "filestore xattr use omap": "true"
        }
      }
    }
  },
  "override_attributes": {
  }
}

Uploaded roles:

# knife role from file cookbooks/ceph/roles/ceph-mds.rb
# knife role from file cookbooks/ceph/roles/ceph-mon.rb
# knife role from file cookbooks/ceph/roles/ceph-osd.rb
# knife role from file cookbooks/ceph/roles/ceph-radosgw.rb

Assigned roles to nodes:

# knife node run_list add ceph-0.example.com 'role[ceph-mon],role[ceph-osd],role[ceph-mds]'
# knife node run_list add ceph-1.example.com 'role[ceph-mon],role[ceph-osd],role[ceph-mds]'

I didn’t bother with recipe[ceph::repo] as I don’t care about installation right now (Ceph is already installed in my VM images).

Had to set "chef_environment": "Ceph" for each node by running:

# knife node edit ceph-0.example.com
# knife node edit ceph-1.example.com

Didn’t set Ceph osd_devices per node – I’m just playing, so everything can sit on top of the root partition.

Now let’s see if it works:

# knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-11T13:44:47+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-11T13:44:48+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-11T13:44:48+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-11T13:44:48+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-11T13:44:48+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-11T13:44:48+00:00] INFO: Running start handlers
[2013-04-11T13:44:48+00:00] INFO: Start handlers complete.
[2013-04-11T13:44:48+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
No ceph-mon found.

[2013-04-11T13:44:48+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] backed up to /var/chef/backup/etc/ceph/ceph.conf.chef-20130411134448
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] owner changed to 0
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] group changed to 0
[2013-04-11T13:44:48+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-11T13:44:48+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-11T13:44:48+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-11T13:44:49+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-11T13:44:49+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-11T13:44:49+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-11T13:44:49+00:00] INFO: service[ceph_mon] started
[2013-04-11T13:44:49+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
connect to /var/run/ceph/ceph-mon.ceph-0.asok failed with (2) No such file or directory
connect to /var/run/ceph/ceph-mon.ceph-0.asok failed with (2) No such file or directory

[2013-04-11T13:44:49+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-11T13:44:49+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-11 13:44:49.928800 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x18f0d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault

2013-04-11 13:44:52.928739 7f58efc1c700 0 -- :/23863 >> 192.168.4.118:6789/0 pipe(0x7f58e0000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:44:55.929375 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x7f58e0003010 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:44:58.929211 7f58efc1c700 0 -- :/23863 >> 192.168.4.118:6789/0 pipe(0x7f58e00039f0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 13:45:01.929787 7f58e9677700 0 -- :/23863 >> 192.168.4.117:6789/0 pipe(0x7f58e00023b0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
[...]

And it’s stuck there, trying and failing to talk to something.

See those “no such file or directory” errors after “service[ceph_mon] started”? Yeah? Well, the mon isn’t started, hence the missing sockets in /var/run/ceph.

Why isn’t the mon started? Turns out the ceph init script won’t start any mon (or osd or mds for that matter) if you don’t have entries in the config file with some suffix, e.g. [mon.a]. And all I’ve got is:

[global]
  fsid =  f80aba97-26c5-4aa3-971e-09c5a3afa32f
  mon initial members = ceph-0,ceph-1
  mon host = 192.168.4.118:6789, 192.168.4.117:6789

[osd]
    osd journal size = 1000
    filestore xattr use omap = true
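
For the init script to act on a daemon, the config would need a per-instance section with a suffix, something along these lines (illustrative only, using this setup’s hostname and address – the chef recipe does not generate these):

```ini
; illustrative sketch: a suffixed section the ceph init script would pick up
[mon.ceph-0]
    host = ceph-0
    mon addr = 192.168.4.118:6789
```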

But given the mon recipe triggers ceph-mon-all-starter when using upstart (which it would be on Ubuntu Precise, the one platform listed as “Tested as working”), and ceph-mon-all-starter seems to just ultimately run something like ceph-mon --cluster=ceph -i ceph-0 regardless of what’s in the config file… maybe I can cheat.

Directly starting ceph-mon from a shell on ceph-0 before the chef-client run turned out to be a bad idea (a bit of a chicken-and-egg problem figuring out what to inject into the “mon host” line of the config file). So I put a bit of evil into the mon recipe:

diff --git a/recipes/mon.rb b/recipes/mon.rb
index 5cd76de..a518830 100644
--- a/recipes/mon.rb
+++ b/recipes/mon.rb
@@ -61,6 +61,10 @@ EOH
   notifies :start, "service[ceph_mon]", :immediately
 end
 
+execute 'hack to force mon start' do
+  command "ceph-mon --cluster=ceph -i #{node['hostname']}"
+end
+
 ruby_block "tell ceph-mon about its peers" do
   block do
     mon_addresses = get_mon_addresses()
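
If I were keeping this hack, a guard would stop it spawning another mon on every chef-client run – something like this (untested sketch; the admin socket path matches the one in the log output above):

```ruby
# sketch: same hack, but skipped once the mon's admin socket exists
execute 'hack to force mon start' do
  command "ceph-mon --cluster=ceph -i #{node['hostname']}"
  not_if { ::File.exist?("/var/run/ceph/ceph-mon.#{node['hostname']}.asok") }
end
```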

Try again:

# knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-11T15:10:43+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-11T15:10:44+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-11T15:10:44+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-11T15:10:44+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-11T15:10:44+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-11T15:10:44+00:00] INFO: Running start handlers
[2013-04-11T15:10:44+00:00] INFO: Start handlers complete.
[2013-04-11T15:10:44+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-11T15:10:44+00:00] INFO: Storing updated cookbooks/ceph/recipes/mon.rb in the cache.
No ceph-mon found.

[2013-04-11T15:10:44+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-11T15:10:44+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-11T15:10:44+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
[2013-04-11T15:10:44+00:00] INFO: Processing execute[hack to force mon start] action run (ceph::mon line 65)
starting mon.ceph-0 rank 1 at 192.168.4.118:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph-0 fsid f80aba97-26c5-4aa3-971e-09c5a3afa32f
[2013-04-11T15:10:44+00:00] INFO: execute[hack to force mon start] ran successfully
[2013-04-11T15:10:44+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 69)
adding peer 192.168.4.118:6789/0 to list: 192.168.4.117:6789/0,192.168.4.118:6789/0

adding peer 192.168.4.117:6789/0 to list: 192.168.4.117:6789/0,192.168.4.118:6789/0

[2013-04-11T15:10:44+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-11T15:10:44+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 84)
2013-04-11 15:10:44.432266 7f8f9f8c0700  0 -- :/25965 >> 192.168.4.117:6789/0 pipe(0x16d9d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault

2013-04-11 15:10:50.433053 7f8f9f7bf700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94001d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:10:56.433268 7f8fa5e65700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94001d30 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:11:02.433987 7f8f9f8c0700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94002db0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
2013-04-11 15:11:08.434358 7f8f9f7bf700  0 -- 192.168.4.118:0/25965 >> 192.168.4.117:6789/0 pipe(0x7f8f94004fb0 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault

At this point it’s stalled presumably waiting to talk to the other mon, so in another terminal window had to kick off a chef-client run on ceph-1 to get it into the same state as ceph-0 (knife ssh name:ceph-1.example.com -x root chef-client). This allowed both nodes to progress to the next problem:

2013-04-11 15:11:28.563438 7f8fa5e67780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:28.563443 7f8fa5e67780 -1 unable to authenticate as client.admin
2013-04-11 15:11:28.563814 7f8fa5e67780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:29.572208 7f2369130780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:29.572210 7f2369130780 -1 unable to authenticate as client.admin
2013-04-11 15:11:29.572527 7f2369130780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:31.380073 7f1907d18780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-11 15:11:31.380078 7f1907d18780 -1 unable to authenticate as client.admin
2013-04-11 15:11:31.380720 7f1907d18780 -1 ceph_tool_common_init failed.
2013-04-11 15:11:32.392345 7fc2bc462780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
[...]

And we’re spinning again.

But that’s enough for one day.

the avatar of Will Stephenson

hackweek9: Lightweight KDE Desktop project - updated

It's Hack Week 9 at SUSE, and I'm working on a cracking project this time around. I've codenamed it 'KLyDE', for K Lightweight Desktop Environment, and it's an effort to point KDE at the lightweight desktop market.  Surely some mistake, you say?  KDE and lightweight kan't fit in the same sentence.  I think they can.

This project has been bouncing around my head for a couple of years now, starting on a train ride back from the KDE PIM meeting in Osnabrück in 2010, then I presented it at COSCUP 2012 in Taiwan last August. But work commitments and family always got in the way of completing/finishing it.  SUSE's hack week gives me 40 hours to throw at it and this time I wasn't going to tackle it alone, so I enlisted my bro(grammer)s Jos and Klaas.

As has been repeated on Planet KDE over the past decade, KDE is not intrinsically bloated.  At its core, it jumps through a lot of hoops for memory efficiency and speed, and is modular to a fault. But most packagings of KDE take a kitchen sink approach, and when you install your KDE distribution you get a full suite of desktop, applets and applications.  The other major criticism of KDE is that it is too configurable.  The KlyDE project applies KDE's modularity and configurability to the challenge of making a lightweight desktop.  However, what I don't want to do is a hatchet job where functionality is crudely chopped out of the desktop to fit some conception of light weight. Read on after the break to see how we're doing it.

We're approaching the problem from three sides:

Minimal footprint

The first method of attacking this is by packaging. It involves factoring optional components of the KDE desktop out of the base installation into subpackages that the main packages only have weak dependencies upon, allowing a minimal installation without them.  This targets big lumps of ram/cpu usage and objects of user hatred like Nepomuk and Akonadi, but also smaller items like Activities and Attica (social desktop support) and non-core window decorations/styles/etc.  The actual KDE build includes everything; the optional components are always available, so those who do need one of them can just add the package and start using it.
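
In openSUSE packaging terms, such weak dependencies would look something like this in a spec file (hypothetical package names, purely for illustration – not the project's actual spec):

```
# hypothetical spec fragment: heavy optional bits become weak dependencies,
# so a --no-recommends install leaves them out
Name:           klyde-workspace
Recommends:     akonadi-runtime
Suggests:       nepomuk-core
```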

The second approach is by configuration.  This allows different profiles of KDE desktop with the same installed packages.  We've collected sets of configs that represent these profiles, but I'm not entirely sure how to package this yet.  One way would be to ship default profiles as X sessions.  Another would be a first run wizard or KCModule so users can select profile and apply it to their configuration after login.

Simple config

Is a mixture of usability and perception.  A simplified configuration presents fewer choices and is therefore easier to understand.  It also looks faster and more lightweight, because people equate visual simplicity with efficiency.  This is incorrect, of course, but I'm not above exploiting this fallacy to give people what they want. For this aspect, we're providing an alternate set of System Settings metadata to give it a cut down tree.  The full set remains available, if needed.

Fast startup

Is the most high-risk-for-reward effort.  It's mostly a perception/first impression thing.  A working desktop shouldn't need to be started up all the time.  But for people trying out KLyDE for the first time, a fast startup supports the claim to minimalism.  The interesting thing I note so far is that the package splitting and configuration in 1) makes very little difference to startup time.  The optional components of KDE are already highly optimised to not affect startup time.  So I'm investigating alternate startup systems: refactoring startkde, Dantti's systemk, Plasma Active's startactive, and a systemd-managed startup.

Progress

The packaging effort is mostly done; we have packages in an Open Build Service project that give you a basic 'KlyDE' Plasma Workspace when installed on top of a minimal X SUSE 12.3 installation with --no-recommends.

Jos has put a great effort into understanding System Settings and has produced a simple layout; I just need to complete my patch to allow it to use the alternative metadata scheme at runtime.  If we have time, we'll also customise some KCMs to provide a simple way to control KDE's theming.

I've been busy converting systemd, kdeinit and ksmserver into a native systemd startup by defining systemd unit files.  It's a steep learning curve as it exposes a number of assumptions on both sides, but I'm getting there.  The unoptimised systemdkde.target starts up in 4s here, vs 6s for the same .kde4 started by startkde.  That might be due to legacy/fault tolerance parts of startkde being left out, so I won't give more detailed numbers yet.
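
As a rough illustration of the approach (hypothetical unit names, not the project's actual files), a unit wrapping one of the session processes might look like:

```ini
# hypothetical sketch of a session unit pulled in by systemdkde.target
[Unit]
Description=KDE session manager
Requires=kdeinit.service
After=kdeinit.service

[Service]
ExecStart=/usr/bin/ksmserver

[Install]
WantedBy=systemdkde.target
```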

Update, Thursday 9pm: The freedesktop.org sprint is happening at the SUSE offices this weekend, so I had a long discussion with Lennart Pöttering about the systemd session effort. It left us with a number of open questions, such as how to perform XSMP session restore and XDG autostart of apps, tasks which align with what systemd does, and I got some useful tips on how to start up a real session without losing configurability.

Next steps

You can see the state of the project on Trello. I'd like to see if there is a startup time win by parallelizing kded and ksmserver starting modules and apps. I'd like to make an openSUSE pattern for existing installations, and an iso or a disk image for testers.  I've also submitted a talk on the subject for Akademy, so I'd like to work on that and get some real data to support this work.