ownCloud Nightly Builds
As you might have heard, we are providing nightly builds of the parts of ownCloud: the ownCloud community server as well as the ownCloud client on its supported platforms Linux, Windows, and Mac.
The nightly builds of the client for Windows and Mac and the source tarballs can be downloaded from the nightly directory on our download server. They are named after the last release, followed by a date stamp, which is simply a concatenation of the year, month, and day the nightly was built on. For Linux we maintain a so-called nightly repository in the openSUSE Build Service which builds for various distributions. It can be added to the Linux package management system, and that way an update is there for you every morning. The ownCloud server is built only in the nightly repository, from this daily source.
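For illustration, hooking a machine up to the nightly repository could look roughly like this; the repository path below is a hypothetical placeholder, so substitute the real one from the download server:

sudo zypper ar -f http://download.opensuse.org/repositories/isv:/ownCloud:/nightly/openSUSE_12.3/ owncloud-nightly   # hypothetical repo path
sudo zypper refresh
sudo zypper up owncloud-client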
What are nightlies for?
Testing ownCloud is a very difficult task. ownCloud is a complex system consisting of various parts (anybody thought ownCloud is just a “LAMP web app”?) which supports a huge variety of environments. That multiplies into a huge number of combinations that need to be tested. We had issues with that in the past, and as a result, providing easy access to ready-to-run testing versions is one important action we are taking to improve quality, as it enables more people to participate in testing.
Of course, nightly builds are neither released software nor have they received extensive testing. Nightly builds must never be used on production installations; please, really don’t.
But maybe you have a test server or virtual machine to spare somewhere. It would be great if you could install a nightly version on it, combine it with a nightly-built client, and see if everything runs as you would expect.
Another great purpose of the nightly builds is bug verification. In client development we often comment on bugs with sentences like “this is fixed now, please verify with the current nightly build”. That gives bug reporters a very quick way to verify the fix for their issue, and developers can be sure that their fix really is sufficient.
How are nightlies built?
We are doing the nightly builds through an instance of Jenkins, a continuous integration server. With a little script we wrote, Jenkins feeds the OBS every night, so this is completely automated. But more on that in a subsequent blog post.
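The script itself is not shown here, but the general approach can be sketched with osc, the OBS command-line client. Everything below (project, package, and paths) is a made-up illustration of the idea, not our actual job:

#!/bin/sh
# Hypothetical Jenkins nightly job: push today's source snapshot into an OBS package.
set -e
STAMP=$(date +%Y%m%d)                           # year, month and day, concatenated
osc checkout isv:ownCloud:nightly owncloud      # project and package names are made up
cd isv:ownCloud:nightly/owncloud
rm -f owncloud-*.tar.bz2
cp /srv/snapshots/owncloud-${STAMP}.tar.bz2 .   # tarball produced earlier in the job
osc addremove                                   # record the added/removed files
osc commit -m "nightly snapshot ${STAMP}"       # OBS then rebuilds for all target distributions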
ownCloud Client Release 1.2.5
Today is release day!
We just released ownCloud Client 1.2.5, hurrah! It’s a small bug fix release, but it will make more people happy users! It fixes three ugly crashes we found with the help of dedicated bug hunters. One of them was a race between the thread which runs csync and the Qt SSL stack we are using. It caused a crash that happened after quite some run time, or sometimes never… Hard to reproduce and debug, but finally Danimo and his mighty friends were able to nail it.
The new client can be found through its download page; here is the Changelog.
If you find issues, please help us improve ownCloud by commenting on or reporting bugs in our bug tracker. Thanks!
Overview of the openSUSE ARM Hackathon
My openSUSE 12 Journal 13: NetworkManager Config for Cisco LEAP Wireless
In short, I wasted a few days and a weekend getting my shiny new 12.3 to connect to my company's Cisco LEAP (henceforth referred to as just LEAP) wireless network.
For those who do not need to connect to wifi via LEAP but are curious anyway, please see this link for LEAP.
To save you the same grief, here is the answer (see screenshot below) on how to configure NetworkManager:
[Screenshot: two NetworkManager configuration dialogs side by side. The one on the left WORKED; the one on the right did NOT work for me.]
My openSUSE 12 Journal 12: Hello 12.3!
Hello openSUSE 12.3! You were released into the big bad world on 13 March 2013 and you are fantastic!
Two weeks back, I installed 12.3 onto a new partition on my primary workstation/laptop in one hour. By the hour thereafter, even though I needed to install quite a handful of additional software, I was working on my day job without skipping a beat. One piece of software that is critical to my job is IBM Notes 9 Social Edition. As covered in my previous blog entry on the beta, IBM Notes 9 was officially released on 2 March 2013. Look out for my next blog entry on installing the Generally Available (GA) version of IBM Notes 9 Social Edition on openSUSE 12.3.
Hand on my heart, 12.3 is a very worthy successor to 12.2. I am very happy with it, and if you know me, that says a lot... BUT... there is one little usage/design point in the GUI of NetworkManager that got me confused, and I was unable to get onto my office's Cisco LEAP wireless network. This was resolved eventually (after 3 days and a weekend), and it turns out to be a non-issue... or a usage issue... or... anyway, look out for this in my next entry.
Installation and Partitioning
Crystals Wallpaper
Here is another wallpaper that I have been working on recently. I want to make this one part of the supplemental wallpaper package for openSUSE 13.1. I hope it conveys a sense of simplicity while keeping the dark colors introduced in openSUSE 12.2.
The idea here is to show simple crystals with some light effects. Looking at it from far away makes it stand out, and I think it enhances your work by being unobtrusive and "blingy" at the same time.
Please remember that you can also turn in your submissions for the next release of openSUSE. Everyone is invited to work on an image that they would like to be considered for the wallpaper. Something to remember is that if you are making a proposal for the "DEFAULT" wallpaper, your work will have to be done in SVG format. If your proposal is intended for the "SUPPLEMENTAL" wallpaper package, you can do it in raster as well as SVG. If done in raster, you will have to provide copies of your work in 4 different resolutions: 2560×1440, 2560×1600, 2560×2048 and 2048×1536.
Thank you for your work.
Anditosan
Apache Subversion 1.8 preview packages
RPM packages of what will fairly soon become Apache Subversion 1.8 are now available for testing on all current releases of openSUSE and on SLE 11.
Note that in this release, serf will replace neon as the default HTTP library, to the extent that the latter is removed completely. I wrote about ra_serf before and added support for it in recent packages. You can test this now with either 1.7 or 1.8 if you are concerned about performance in your network. Please note that for servers running httpd and mod_dav_svn, increasing MaxKeepAliveRequests is highly recommended.
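As a sketch, that tuning goes into the main httpd configuration; the value below is only an illustrative choice, not a tested recommendation:

# ra_serf issues many short requests over parallel connections, so allow
# far more requests per persistent connection than httpd's default of 100.
KeepAlive On
MaxKeepAliveRequests 1000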
Update: Apache Subversion 1.8 is now released. You can find maintained packages via the software search in the devel:tools:scm:svn project. This will be part of the next release of openSUSE.
One More chef-client Run
Carrying on from my last post, the failed chef-client run came down to the init script in ceph 0.56 not yet knowing how to iterate /var/lib/ceph/{mon,osd,mds} and automatically start the appropriate daemons. This functionality seems to have been introduced in 0.58 or so by commit c8f528a. So I gave it another shot with a build of ceph 0.60.
On each of my ceph nodes, I did a bit of upgrading and cleanup. Note that the choice of ceph 0.60 was mostly arbitrary; I just wanted the latest thing I could find an RPM for in a hurry. Also, some of the rm invocations may not be necessary, depending on what state things are actually in:
# zypper ar -f http://download.opensuse.org/repositories/home:/dalgaaf:/ceph:/extra/openSUSE_12.3/home:dalgaaf:ceph:extra.repo
# zypper ar -f http://gitbuilder.ceph.com/ceph-rpm-opensuse12-x86_64-basic/ref/next/x86_64/ ceph.com-next_openSUSE_12_x86_64
# zypper in ceph-0.60
# kill $(pidof ceph-mon)
# rm /etc/ceph/*
# rm /var/run/ceph/*
# rm -r /var/lib/ceph/*/*
That last command gets rid of any half-created mon directories.
I also edited the Ceph environment to only have one mon (one of my colleagues rightly pointed out that you need an odd number of mons, and I had declared two previously, for no good reason). That's knife environment edit Ceph on my desktop, then setting "mon_initial_members": "ceph-0" instead of "ceph-0,ceph-1".
I also had to edit each of the nodes, to add an osd_devices array and to remove the mon role from ceph-1. That's knife node edit ceph-0.example.com, then insert:
"normal": {
...
"ceph": {
"osd_devices": [ ]
}
...
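(For reference, with real disks each entry would presumably be a hash naming the device. I haven't verified the exact schema the cookbook expects, so treat the key name here as an assumption:

"osd_devices": [
  { "device": "/dev/vdb" },   // key name assumed, not verified against the cookbook
  { "device": "/dev/vdc" }
]
)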
Without the osd_devices array defined, the osd recipe fails (“undefined method `each_with_index’ for nil:NilClass”). I was kind of hoping an empty osd_devices array would allow ceph to use the root partition. No such luck; the cookbook really does expect you to be doing a sensible deployment with actual separate devices for your OSDs. Oh well, I'll try that another time. For now at least I've demonstrated that ceph 0.60 does give you what appears to be a clean mon setup when using the upstream cookbooks on openSUSE 12.3:
knife ssh name:ceph-0.example.com -x root chef-client
[2013-04-15T06:32:13+00:00] INFO: *** Chef 10.24.0 ***
[2013-04-15T06:32:13+00:00] INFO: Run List is [role[ceph-mon], role[ceph-osd], role[ceph-mds]]
[2013-04-15T06:32:13+00:00] INFO: Run List expands to [ceph::mon, ceph::osd, ceph::mds]
[2013-04-15T06:32:13+00:00] INFO: HTTP Request Returned 404 Not Found: No routes match the request: /reports/nodes/ceph-0.example.com/runs
[2013-04-15T06:32:13+00:00] INFO: Starting Chef Run for ceph-0.example.com
[2013-04-15T06:32:13+00:00] INFO: Running start handlers
[2013-04-15T06:32:13+00:00] INFO: Start handlers complete.
[2013-04-15T06:32:13+00:00] INFO: Loading cookbooks [apache2, apt, ceph]
[2013-04-15T06:32:13+00:00] INFO: Processing template[/etc/ceph/ceph.conf] action create (ceph::conf line 6)
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] updated content
[2013-04-15T06:32:13+00:00] INFO: template[/etc/ceph/ceph.conf] mode changed to 644
[2013-04-15T06:32:13+00:00] INFO: Processing service[ceph_mon] action nothing (ceph::mon line 23)
[2013-04-15T06:32:13+00:00] INFO: Processing execute[ceph-mon mkfs] action run (ceph::mon line 40)
creating /var/lib/ceph/tmp/ceph-ceph-0.mon.keyring
added entity mon. auth auth(auid = 18446744073709551615 key=AQC8umZRaDlKKBAAqD8li3u2JObepmzFzDPM3g== with 0 caps)
ceph-mon: mon.noname-a 192.168.4.118:6789/0 is local, renaming to mon.ceph-0
ceph-mon: set fsid to f80aba97-26c5-4aa3-971e-09c5a3afa32f
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-0 for mon.ceph-0
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] ran successfully
[2013-04-15T06:32:14+00:00] INFO: execute[ceph-mon mkfs] sending start action to service[ceph_mon] (immediate)
[2013-04-15T06:32:14+00:00] INFO: Processing service[ceph_mon] action start (ceph::mon line 23)
[2013-04-15T06:32:15+00:00] INFO: service[ceph_mon] started
[2013-04-15T06:32:15+00:00] INFO: Processing ruby_block[tell ceph-mon about its peers] action create (ceph::mon line 64)
mon already active; ignoring bootstrap hint
[2013-04-15T06:32:16+00:00] INFO: ruby_block[tell ceph-mon about its peers] called
[2013-04-15T06:32:16+00:00] INFO: Processing ruby_block[get osd-bootstrap keyring] action create (ceph::mon line 79)
2013-04-15 06:32:16.872040 7fca8e297780 -1 monclient(hunting): authenticate NOTE: no keyring found; disabled cephx authentication
2013-04-15 06:32:16.872042 7fca8e297780 -1 unable to authenticate as client.admin
2013-04-15 06:32:16.872400 7fca8e297780 -1 ceph_tool_common_init failed.
[2013-04-15T06:32:18+00:00] INFO: ruby_block[get osd-bootstrap keyring] called
[2013-04-15T06:32:18+00:00] INFO: Processing package[gdisk] action upgrade (ceph::osd line 37)
[2013-04-15T06:32:27+00:00] INFO: package[gdisk] upgraded from uninstalled to
[2013-04-15T06:32:27+00:00] INFO: Processing service[ceph_osd] action nothing (ceph::osd line 48)
[2013-04-15T06:32:27+00:00] INFO: Processing directory[/var/lib/ceph/bootstrap-osd] action create (ceph::osd line 67)
[2013-04-15T06:32:27+00:00] INFO: Processing file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] action create (ceph::osd line 76)
[2013-04-15T06:32:27+00:00] INFO: entered create
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] owner changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] group changed to 0
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] mode changed to 440
[2013-04-15T06:32:27+00:00] INFO: file[/var/lib/ceph/bootstrap-osd/ceph.keyring.raw] created file /var/lib/ceph/bootstrap-osd/ceph.keyring.raw
[2013-04-15T06:32:27+00:00] INFO: Processing execute[format as keyring] action run (ceph::osd line 83)
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
added entity client.bootstrap-osd auth auth(auid = 18446744073709551615 key=AQAOl2tR0M4bMRAAatSlUh2KP9hGBBAP6u5AUA== with 0 caps)
[2013-04-15T06:32:27+00:00] INFO: execute[format as keyring] ran successfully
[2013-04-15T06:32:28+00:00] INFO: Chef Run complete in 14.479108446 seconds
[2013-04-15T06:32:28+00:00] INFO: Running report handlers
[2013-04-15T06:32:28+00:00] INFO: Report handlers complete
Witness:
ceph-0:~ # rcceph status
=== mon.ceph-0 ===
mon.ceph-0: running {"version":"0.60-468-g98de67d"}
On the note of building an easy-to-deploy Ceph appliance, assuming you're not using Chef and just want something to play with, I reckon the way to go is to use config pretty similar to what would be deployed by this Chef cookbook, i.e. an absolutely minimal /etc/ceph/ceph.conf specifying nothing other than the initial mons. Then use the various Ceph CLI tools to create mons and osds on each node, and just rely on the init script in Ceph >= 0.58 to do the right thing with what it finds (having to explicitly specify each mon, osd and mds in the Ceph config by name always bugged me). Bonus points for using csync2 to propagate /etc/ceph/ceph.conf across the cluster.
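As a sketch, such a minimal config might be no more than this (the fsid and address are placeholders, reused from the run above):

[global]
fsid = f80aba97-26c5-4aa3-971e-09c5a3afa32f
mon initial members = ceph-0
mon host = 192.168.4.118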
Docker and openSUSE: a container full of Geekos
SUSE’s Hackweek #9 is over. It has been an awesome week during which I worked hard to make docker a first-class citizen on openSUSE. I also spent some time working on an openSUSE container that can be used by docker’s users.
The project has been tracked on this page of the Hackweek wiki; what follows is a detailed report of what I achieved.
Installing docker on openSUSE 12.3
Docker has been packaged inside of this OBS project.
So installing it requires just two commands:
sudo zypper ar http://download.opensuse.org/repositories/home:/flavio_castelli:/docker/openSUSE_12.3 docker
sudo zypper in docker
There’s also a 1 Click Install for the lazy ones :)
Zypper will install docker and its dependencies, which are:

- lxc: docker’s “magic” is built on top of LXC.
- bridge-utils: used to set up the bridge interface used by docker’s containers.
- dnsmasq: used to start the DHCP server used by the containers.
- iptables: used to make the containers’ networking work.
- bsdtar: used by docker to compress/extract the containers.
- aufs3 kernel module: used by docker to track the changes made to the containers.
The aufs3 kernel module is not part of the official kernel and is not available in the official repositories. Hence, adding docker will trigger the installation of a new kernel package on your machine.
Both the kernel and the aufs3 packages are going to be installed from the home:flavio_castelli:docker repository but they are in fact links to the packages created by Michal Hrusecky inside of his aufs project on OBS.
Note well: docker works only on 64bit hosts. That’s why there are no 32bit packages.
Docker appliance
If you don’t want to install docker on your system or you are just curious and want to jump straight into action there’s a SUSE Studio appliance ready for you. You can find it here.
If you are not familiar with SUSE Gallery, let me tell you a few things about it:
- you can download the appliance on your computer and play with it or…
- you can clone the appliance on SUSE Studio and customize it even further or…
- you can test the appliance from your browser using SUSE Studio’s Testdrive feature (no SUSE Studio account required!).
The latter option is really cool, because it will allow you to play with docker immediately. There’s just one thing to keep in mind about Testdrive: outgoing connections are disabled, so you won’t be able to install new stuff (or download new docker images). Fortunately this appliance comes with the busybox container bundled, so you will be able to play a bit with docker.
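For example, something as small as this should work inside Testdrive (the bundled image being simply named busybox):

docker run busybox /bin/echo "hello from a container"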
Running docker
The docker daemon must be running in order to use your containers. The openSUSE package comes with an init script which can be used to manage it.
The script is /etc/init.d/docker, but there’s also the usual symbolic link
called /usr/sbin/rcdocker.
To start the docker daemon just type:
sudo /usr/sbin/rcdocker start
This will trigger the following actions:
- The docker0 bridge interface is created. This interface is bridged with eth0.
- A dnsmasq instance listening on the docker0 interface is started.
- IP forwarding and IP masquerading are enabled.
- The docker daemon is started.
All the containers will get an IP on the 10.0.3.0/24 network.
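Roughly speaking, the setup amounts to something like the following; this is a simplified sketch of those actions, not the actual init script:

# simplified sketch, not the actual init script
brctl addbr docker0                                   # create the bridge
ip addr add 10.0.3.1/24 dev docker0                   # host address on the container network
ip link set docker0 up
dnsmasq --interface=docker0 --dhcp-range=10.0.3.50,10.0.3.200   # DHCP for containers
echo 1 > /proc/sys/net/ipv4/ip_forward                # enable IP forwarding
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
docker -d &                                           # start the daemon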
Playing with docker
Now it’s time to play with docker.
First of all you need to download an image: docker pull base
This will fetch the official Ubuntu-based image created by the dotCloud guys.
You will be able to run the Ubuntu container on your openSUSE host without any problem; that’s LXC’s “magic” ;)
If you want to use only “green” products just pull the openSUSE 12.3 container I created for you:
docker pull flavio/opensuse-12-3
Please experiment a lot with this image and give me your feedback. The dotCloud guys proposed promoting it to a top-level base image, but I want to be sure everything works fine before doing that.
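For instance, a quick interactive smoke test might look like this (flags as in the docker CLI of the time):

docker run -i -t flavio/opensuse-12-3 /bin/bash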
Now you can go through the examples reported in the official docker documentation.
Create your own openSUSE images with SUSE Studio
I think it would be extremely cool to create docker images using SUSE Studio. As you might know, I’m part of the SUSE Studio team, so I looked a bit into how to add support for this new format.
– personal opinion –
There are some technical challenges to solve, but I don’t think it would be hard to address them.
– personal opinion –
If you are interested in adding the docker format to SUSE Studio, please create a new feature request on openFATE and vote for it!
In the meantime there’s another way to create your custom docker images, just keep reading.
Create your own openSUSE images with KIWI
KIWI is the amazing tool at the heart of SUSE Studio and can be used to create LXC containers.
As said earlier, docker runs LXC containers, so we are going to follow these instructions.
First of all install KIWI from the Virtualization:Appliances project on OBS:
sudo zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_12.3 virtualization:appliances
sudo zypper in kiwi kiwi-doc
We are going to use the configuration files of a simple LXC container shipped with the kiwi-doc package:
cp -r /usr/share/doc/packages/kiwi/examples/suse-11.3/suse-lxc-guest ~/openSUSE_12_3_docker
The openSUSE_12_3_docker directory contains two configuration files used by
KIWI (config.sh and config.xml) plus the root directory.
The contents of this directory are going to be added to the resulting container.
It’s really important to create the /etc/resolv.conf file inside of the final image, since docker is going to mount the resolv.conf file of the host system inside of the running guest. If the file is not found, docker won’t be able to start our container.
An empty file is enough:
touch ~/openSUSE_12_3_docker/root/etc/resolv.conf
Now we can create the rootfs of the container using KIWI:
sudo /usr/sbin/kiwi --prepare ~/openSUSE_12_3_docker --root /tmp/openSUSE_12_3_docker_rootfs
We can skip the next step reported in KIWI’s documentation; it’s not needed with docker and would produce an invalid container. Just execute the following command:
sudo tar cvjpf openSUSE_12_3_docker.tar.bz2 -C /tmp/openSUSE_12_3_docker_rootfs/ .
This will produce a tarball containing the rootfs of your container.
Now you can import it inside of docker, there are two ways to achieve that:
- from a web server.
- from a local file.
Importing the image from a web server is really convenient if you ran KIWI on a different machine.
Just move the tarball to a directory which is exposed by the web server. If you don’t have one installed just move to the directory containing the tarball and type the following command:
python -m SimpleHTTPServer 8080
This will start a simple http server listening on port 8080 of your machine.
On the machine running docker just type:
docker import http://mywebserver/openSUSE_12_3_docker.tar.bz2 my_openSUSE_image latest
If the tarball is already on the machine running docker you just need to type:
cat ~/openSUSE_12_3_docker.tar.bz2 | docker import - my_openSUSE_image latest
Docker will download (only in the first case) and import the tarball. The resulting image will be named ‘my_openSUSE_image’ and it will have a tag named ‘latest’.
The name of the tag is really important since docker tries to run the image with the ‘latest’ tag unless you explicitly specify a different value.
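In other words, running the image without naming a tag should just work:

docker run -i -t my_openSUSE_image /bin/bash   # the 'latest' tag is picked implicitly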
Conclusion
Hackweek #9 has been both productive and fun for me. I hope you will have fun too using docker on openSUSE.
As usual, feedback is welcome.