
Andres Silva

openSUSE Summit

It is now official: the USA is going to have an openSUSE party all of its own. It is the first time a conference of this type is held in the U.S., and I am happy to report that I have already asked my boss for some time off around the dates of the conference.

So far, the openSUSE Facebook page reports only a handful of attendees. Hopefully, as the date nears, more and more people will sign up for this summit.

There is always a good thing to note about online communities and software efforts like the ones we are part of. Most of the time we are far apart from each other, and our only methods of communication are email and IRC. Our gatherings happen on the net, and rarely do we get to see every participant in this project in person. Surely a lot of misunderstanding would be done away with if we could somehow see each other face to face. A sense of empathy rises above the online encounters we have as we work on a new release of openSUSE.

A new sense of friendship and collaboration is something we always strive for, which makes me think it is always a great idea to get together and work on the projects we love. Also, the location is wonderful: Orlando is a very fun city with a lot of things to do. Surely our weekend there will not be wasted. I am actually planning to take my vacation time while I am there!

The idea is simple: tell your friends about this meeting. Even if they are not part of the community, they can still come and see what this is all about. After all, it is the first time that people from these latitudes get to be together. Most of the time, the community has to travel far into Europe to gather as a team. Now we are promoting an event for people who can't travel overseas but can make it to Florida.

Good luck, and the best wishes for an awesome conference.


XDC2012 - Nuernberg (Germany), September 19 through 21.

Hah, it's out - now it is official!
libv has been talking about it for ages already, but yesterday I made the official announcement: XDC2012 (that is, the X.Org developers conference 2012) will take place in Nuernberg (Germany) from September 19 through 21 and will be hosted at the company headquarters of SUSE at Maxfeldstrasse 5.
Please also check out the official announcement.
XDC takes place once a year, traditionally alternating between locations in North America and Europe. It's a technical conference where people involved in technologies around the graphics stack on open-source operating systems (like Mesa3D, Wayland, DRM, the X Window System, ...) meet and discuss future plans and directions.
As a note on the side: When there were still two events each year, to distinguish both events the one taking place in North America was named XDC while the one held in Europe was called XDS. Those names stuck even when X.Org switched to one event per year. Now we decided to drop this distinction and named the European event XDC. 
The X.Org Foundation Board decided on Nuernberg last December already; it took a while until we found a date for which both a suitable venue and accommodations were available (Nuernberg hosts quite a few major trade fairs, and right after the vacation season it's quite busy there).
SUSE is sponsoring the conference by providing its main meeting room, including infrastructure, to us free of charge. The venue is just a few minutes away from the city center, and hotels are within walking distance - the preferred conference hotel, which will give us a special rate (still under negotiation), is just 5 minutes away from the venue.
Nuernberg is one of the few towns whose medieval city walls were not destroyed in the 19th century and thus are still almost fully in place. Its castle and historic inner city are just a few minutes away from the venue and definitely worth a visit. The city center offers many opportunities for after-conference activities: you will find quite a few places (restaurants, pubs, bars) to gather and have conversations with others. Thus everything should be in place for a great conference.
This year marks a special occasion, as on September 15th we will be celebrating the 25th anniversary of version 11 of the X Window System. (If you are now wondering how long X has been around if version 11 alone has existed for 25 years: the first 10 versions were released between May 1984 and December 1986. You may want to check out the excellent history on Wikipedia.)
To celebrate this, we will organize a beer hiking trip through the scenic "Fränkische Schweiz", the Franconian countryside, on the day after the conference, stopping at several local brewery beer gardens. People will have the opportunity to sample a wide selection of good Franconian beers there.
Thus if you'd like to come, or even have something to present, please register! You will find details on the announcement page.

Mass Management Configuration Tool Puppet on openSUSE 12.1

What is "Puppet"?

Puppet is one of the open-source solutions for mass-management configuration with centralized storage of recipes. Recipes describe the resulting state of a specific system including how to get to that state, e.g., by installing packages, running scripts, enabling services, uploading whole files with configuration, etc.

This Example

I'll describe a simple example of how to adjust an NTP client to use pool.ntp.org for time synchronization, including the Puppet server configuration.

Ingredients

  • One system acting as a server with openSUSE 12.1 (can be virtual), e.g., server.example.com
  • One or more systems acting as clients with openSUSE 12.1 (can be virtual), e.g., client.example.com
  • Text-file editor
  • Package zypper installed on server and client(s) - it's usually installed by default anyway
  • Patience :) The actual amount depends on the current level of sunspots visible

Preparation time: depends on how much you want to play with a recipe.
Cook time: several seconds
Important: before configuring the server and clients, make sure that they have approximately the same date and time set (commands: date to check, date -s YYYY-MM-DD and date -s HH:MM:SS to set); otherwise you might hit certificate issues.

Configuring a Puppet Server

In this section, we'll install the needed server software, configure the firewall, and configure and start the Puppet server.

Log into the server and do these changes:
  • Run:  zypper in puppet-server yast2-firewall  # to install all required packages
  • Run:  yast2 firewall services add service=service:puppetmasterd zone=EXT  # to open needed port in firewall
  • Run:  mkdir -pv /var/lock/subsys/  # to work around a bug in the Puppet package
  • Add client.example.com or even *.example.com to /etc/puppet/autosign.conf  # to work around a bug in Puppet
  • Start Puppet server:  rcpuppetmasterd start 
  • Enable Puppet server during boot process:  insserv puppetmasterd 
The bugs mentioned above might not actually be real bugs, but I've done these steps to make my life easier. I haven't reported them yet.

Client Configuration

Now, we'll configure a client machine to have all the required packages installed and to use the correct Puppet server.
  • Run:  zypper patch  # to install the latest updates - especially patch for systemd package
  • Run:  zypper in puppet rubygems  # to install all required packages
  • Add server=server.example.com into the [main] section in /etc/puppet/puppet.conf if your server name is not puppet.example.com - then it would work without entering this line
  • Run:  mkdir -pv /var/lock/subsys/  # to work around a bug in Puppet
  • Start Puppet client service:  rcpuppet start 
  • Enable Puppet client during boot process:  insserv puppet 
Let's check the connection to the server. Run  puppetd --test , which will contact the Puppet server and try to get recipes for your client.

Creating a Recipe on the Server

Create a recipe on the server for your client.example.com in /etc/puppet/manifests/site.pp containing:

  class ntp {
    package { 'ntp':       ensure => installed }
    package { 'grep':      ensure => installed }
    package { 'coreutils': ensure => installed }

    exec { 'add_ntp_server':
      subscribe  => Package['ntp'],
      command    => 'echo "server pool.ntp.org" >> /etc/ntp.conf',
      path       => '/sbin:/usr/bin:/bin:/usr/sbin',
      logoutput  => 'on_failure',
      unless     => 'grep --quiet "^server pool.ntp.org$" /etc/ntp.conf',
    }

    service { 'ntp':
      subscribe  => Exec['add_ntp_server'],
      ensure     => running,
      enable     => true,
      hasrestart => true,
    }
  }

  node 'client.example.com' {
    include ntp
  }

All the sections are quite self-descriptive, anyway...
  • ntp class is a simple service definition that describes required packages, service configuration and service handling
  • package type makes sure the required package gets installed - some packages might be required by exec later
  • exec runs a script under some specific conditions (onlyif, unless), here it would maybe make sense to use augeas instead
  • node defines which classes should be applied to a particular system
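
The exec resource is idempotent thanks to its unless guard. Outside Puppet, the same append-once logic can be sketched in plain shell (against a temporary file rather than the real /etc/ntp.conf):

```shell
# Append-once logic mirroring the exec's command/unless pair,
# run against a temporary file instead of /etc/ntp.conf.
conf=$(mktemp)
add_ntp_server() {
  grep --quiet '^server pool.ntp.org$' "$conf" ||
    echo 'server pool.ntp.org' >> "$conf"
}
add_ntp_server   # first run appends the line
add_ntp_server   # second run does nothing; the guard sees the line
grep -c '^server pool.ntp.org$' "$conf"   # prints 1
```

This is exactly why Puppet can apply the recipe repeatedly without duplicating the configuration line.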

Applying a Recipe on the Client

Simply run  puppetd --test  to apply the configuration. You should get something similar to this:

info: Caching catalog for client.example.com
info: Applying configuration version '1333044101'
notice: /Stage[main]/Ntp/Package[ntp]/ensure: created
info: /Stage[main]/Ntp/Package[ntp]: Scheduling refresh of Exec[add_ntp_server]
notice: /Stage[main]/Ntp/Exec[add_ntp_server]/returns: executed successfully
info: /Stage[main]/Ntp/Exec[add_ntp_server]: Scheduling refresh of Service[ntp]
notice: /Stage[main]/Ntp/Exec[add_ntp_server]: Triggered 'refresh' from 1 events
info: /Stage[main]/Ntp/Exec[add_ntp_server]: Scheduling refresh of Service[ntp]
notice: /Stage[main]/Ntp/Service[ntp]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Ntp/Service[ntp]: Triggered 'refresh' from 2 events
notice: Finished catalog run in 8.43 seconds

Otherwise, Puppet should give you enough information about what went wrong. Checking /var/log/messages and /var/log/puppet/puppet.log on the client sounds like a good idea.

Conclusion

Puppet is a powerful tool for managing large computer networks, including clouds, but there are still some glitches (at least in openSUSE packages) that need to be fixed.


Gloves (Systems Management Library AKA YaST++) Running on Ubuntu

About

You might already have seen Jiri's blog post YaST++: next step in system management. This article describes a new systems-management library that could be used by YaST and WebYast in the future, or by any other application that can connect to its Ruby API.

Although this project is still in a research phase, we'd like to support other Linux distributions as well. This post describes how to get started with Gloves on Ubuntu.

Installing the System

If you already have Ubuntu installed, you can skip this part.

  • Download Ubuntu Desktop 11.10 for instance from here
  • Install it
  • Log into the system as user
  • Install additional updates if available

Preparing the System for Gloves


Gloves is a library written in Ruby that uses Augeas for parsing configuration files and D-Bus for authorization when called by a non-root user. That's why Gloves needs some additional libraries installed.

  • Open xterm or any other shell and continue there...
  • Run: sudo apt-get install git rake libaugeas0 libaugeas-dev rubygems libopen4-ruby libaugeas-ruby
  • Download the latest .gem file of the package rubygem-packaging_rake_tasks from the openSUSE Build Service
  • Rename the downloaded gem: mv packaging_rake_tasks*.gem* packaging_rake_tasks.gem
  • Install the downloaded .gem using: sudo gem install packaging_rake_tasks.gem

Gloves from the Sources



  • Download the sources
    Read-only: git clone https://github.com/yast/yast--.git
    Or read/write: git clone git@github.com:yast/yast--.git
  • Install the sources
    cd yast--
    sudo rake install

Running an Example


I've only tried root access, omitting D-Bus for now. I'd be glad if somebody tried that and wrote another blog post describing the steps required to make it work.

  • sudo su
  • cd yast--
  • cd yast++lib-users/examples
  • ./users_read
    This should print a Ruby map containing all system users; depending on your configuration, something similar to this:

    {...,
      "root"=>{"name"=>"root", "gid"=>"0", "uid"=>"0",
        "shell"=>"/bin/bash", "home"=>"/root", "password"=>"x"},
      ...,
      "george"=>{"name"=>"George,,,", "gid"=>"1000",
        "uid"=>"1000", "shell"=>"/bin/bash",
        "home"=>"/home/george", "password"=>"x"},
    ...}
Congratulations! :) The basic Gloves now work on your Ubuntu.
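
The map above is essentially passwd data. Just to see where that shape comes from, here is a rough shell approximation (not Gloves itself, merely awk over a passwd-format sample; the field layout is name:pw:uid:gid:gecos:home:shell):

```shell
# Parse passwd-format lines into a map-like listing similar to
# what users_read prints. The sample data below is made up.
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:root:/root:/bin/bash
george:x:1000:1000:George,,,:/home/george:/bin/bash
EOF
awk -F: '{ printf "\"%s\"=>{\"uid\"=>\"%s\", \"home\"=>\"%s\", \"shell\"=>\"%s\"}\n", $1, $3, $6, $7 }' /tmp/passwd.sample
```

Gloves, of course, goes through Augeas rather than raw awk, which is what makes the parsing robust and writable.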

Where to go next? Explore the Gloves source code. Check out the documentation. Read more about the library architecture. Have more fun with the openSUSE tutorial. Give us your feedback on the yast-devel@opensuse.org mailing list.


Heads up: OBS SLES 11 PHP stuff migrates to SP2! – server:php:applications/SLE11 repository

Companies maintaining production servers often have a desire for stable, long-living operating-system and software distributions. The idea is that once set up, things should not break; major upgrades should only be needed at long intervals and at points in time which best suit business operation. In the SUSE family of operating systems, openSUSE is the young and adventurous (fast-paced, community-driven) branch, while the SUSE Linux Enterprise (SLE Server and SLE Desktop) crowd is a stable, paid-for building block you can depend on for your mission-critical data. Enterprise platforms have a certain drawback: to maintain stability, they don't ship the latest and greatest software packages, and they try to stick to a certain set of core options and extensions.

For PHP software, the Open Buildservice server:php:applications repository is one of the primary sources for up-to-date versions of software like PHPUnit, phpmyadmin, Horde Groupware or wordpress. While it is acceptable and desirable to have a stable and well-maintained (though old) version of apache or mysql, you don't want to sit on an aged version of web applications or of the libraries and frameworks for developers.

Today, this repository switched to building against the new SLE11 SP2 code base. SLE11 comes with two PHP options: the php5-* packages, which ship PHP 5.2, and the php53-* packages, which ship PHP 5.3.
Currently, some packages will not build because they require "php-{something}", which is provided by multiple packages. We are working on resolving this.

The switch was needed because more and more packages rely on a more recent version of PEAR or PHP 5.3 features which simply cannot be provided in Service Pack 1 installations without adding extra repositories like server:php and server:php:extensions (unsupported, recent PHP 5.3 built against SLE11).

Please keep in mind that software from OBS is generally not supported by your SLE license.


Change Tracking for the iPad.


Change Tracking (also known as "tracked changes" or "red lining") is a feature that is very important for business users. For a lot of people, reviewing change-tracked documents is their daily business. Losing the change-tracking information makes the iPad useless (or of limited use) for business users.

Therefore we decided that the Native OOXML layout engine should have change-tracking support. That pretty much screwed up our schedules and release dates, but we felt that change tracking is so essential for business users that it's worth doing. Six months later, we are happy to announce change-tracking support for text insertions, text deletions, and comments. Here is what it looks like on the iPad:



You can quickly review all changes made to the document in the sidebar. A simple tap on a sidebar item brings you to the corresponding change, and vice versa.
There is even an additional feature badly missing in Word: you can assign a color to an author by simply tapping on the colored square:



The assignment is permanent across documents, which makes it much quicker to review a document.

We will start a public beta test soon.

Klaas Freitag

CSync and Mirall Development Setup

[Image: a KDE bagger, captioned "Build it!"] People were asking how to set up a development environment for the syncing client we are currently working on (working title: mirall), which syncs local files to ownCloud and vice versa. While a website about it is not yet finished, I'll try to summarize it here. There are some hacks here and there, but that's how I do it today. It will improve over time. Note that this is about a real development setup, not a production build.

Linux and Windows development should go in parallel as easily as possible.

Edit: Please also refer to the official build documentation.

Building CSync

To build mirall, csync must be built first. CSync is hosted in its upstream git repo and there is also my development branch which holds the latest changes.

[Image: overview of the build directory setup] Clone the csync branch into a directory. In parallel to the cloned csync dir, create a new directory buildcsync as a cmake build dir. Change into buildcsync and call cmake like this:

cmake -DCMAKE_BUILD_TYPE="Debug" ../csync

and watch its output. You probably have to fulfill some dependencies; make sure to install all the needed devel packages. You will need log4c, iniparser and sqlite3, and for the modules libssh, libsmbclient and neon for the ownCloud module. Once cmake succeeds, call make to build it. So far, so easy on Linux.

To build csync for Windows, there are a couple of possibilities. The one I chose was to cross-compile with mingw under openSUSE. That way I can build everything on one devel machine under my preferred system.

For that, I installed the cross-compile and mingw32 packages from the openSUSE Build Service, which really demonstrates its power here. I used the mingw repository. Kudos at this point to Dominik Schmidt, a Tomahawk developer, who helped me a lot in setting it all up, and to all the people who work in OBS to maintain the mingw repo.

Basically, the cross compiler and libs (e.g. the packages mingw32-cross-gcc, mingw32-gcc-c++ and mingw32-cross-cpp) and the dependencies of the software to be built have to be installed from the mingw repo. An action item remains to work out which ones in detail.

After installation you should have some mingw32 tools, such as mingw32-cmake, which should be used to build for Windows.

Now create a directory win and within that again buildcsync. In there, start cmake with

mingw32-cmake -DCMAKE_BUILD_TYPE="Debug" -DWITH_LOG4C=OFF ../../csync

That should do it. I did not find log4c for Win32, so I disabled it in the cmake call. Now build with mingw32-make and check whether it creates a dll in the src subdir and csync.exe in the client dir.

Building mirall

For mirall, it works similarly. Mirall uses Qt and is C++, so again there are a lot of packages to install. Again, make sure to have the mingw32 Qt packages, for example mingw32-libqt4-devel and more.

However, there are two caveats with mirall:

  • the current development state of mirall needs the latest devel version of csync, which we just built. I tweaked the CMake file so that if the mirall, csync and build* folders are in the same directory, csync is found by mirall's cmake in the parallel dir. That way I do not have to install the devel version of csync on my system.

  • to build mirall for Windows, it must be made sure that cmake finds the mingw32 Qt tools like moc. Since the Linux moc is also present in the system, this can cause confusion. Domme pointed me to a script that sets some variables to the correct values to prevent mixing:

    cat ../docmake.sh
    # %_mingw32_qt4_platform win32-g++-cross
    export QT_BINDIR=/usr/bin
    export BIN_PRE=i686-w64-mingw32
    /usr/bin/mingw32-cmake \
      -DCMAKE_BUILD_TYPE="Debug" \
      -DQMAKESPEC=win32-g++-cross \
      -DQT_MKSPECS_DIR:PATH=/usr/i686-w64-mingw32/sys-root/mingw/share/qt4/mkspecs \
      -DQT_QT_INCLUDE_DIR=/usr/i686-w64-mingw32/sys-root/mingw/include \
      -DQT_PLUGINS_DIR=/usr/i686-w64-mingw32/sys-root/mingw/lib/qt4/plugins \
      -DQT_QMAKE_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-qmake \
      -DQT_MOC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-moc \
      -DQT_RCC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-rcc \
      -DQT_UIC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-uic \
      -DQT_DBUSXML2CPP_EXECUTABLE=${QT_BINDIR}/qdbusxml2cpp \
      -DQT_DBUSCPP2XML_EXECUTABLE=${QT_BINDIR}/qdbuscpp2xml \
      ../../mirall

With that setup, I can build both the Linux and Windows versions quite easily. There is still a lot to be solved, such as automated packaging. CMake, as usual, is a great help.

Jim Fehlig

libvirt sanlock integration in openSUSE Factory

A few weeks back I found some time to package sanlock for openSUSE Factory, which subsequently allowed enabling the libvirt sanlock driver.  And how might this be useful?  When running qemu/kvm virtual machines on a pool of hosts that are not cluster-aware, it may be possible to start a virtual machine on more than one host, potentially corrupting the guest filesystem.  To prevent such an unpleasant scenario, libvirt+sanlock can be used to protect the virtual machine’s disk images, ensuring we never have two qemu/kvm processes writing to an image concurrently.  libvirt+sanlock provides protection against starting the same virtual machine on different hosts, or adding the same disk to different virtual machines.

In this blog post I’ll describe how to install and configure sanlock and the libvirt sanlock plugin.  I’ll briefly cover lockspace and resource creation, and show some examples of specifying disk leases in libvirt, but users should become familiar with the wdmd (watchdog multiplexing daemon) and sanlock man pages, as well as the lease element specification in libvirt domainXML.  I’ve used SLES11 SP2 hosts and guests for this example, but have also tested a similar configuration on openSUSE 12.1.

The sanlock and sanlock-enabled libvirt packages can be retrieved from a Factory repository or a repository from the OBS Virtualization project.  (As a side note, for those that didn’t know, Virtualization is the development project for virtualization-related packages in Factory.  Packages are built, tested, and staged in this project before submitting to Factory.)

After configuring the appropriate repository for the target host, update libvirt and install sanlock and libvirt-lock-sanlock.
# zypper up libvirt libvirt-client libvirt-python
# zypper in sanlock libsanlock1 libvirt-lock-sanlock

Enable watchdog daemon and sanlock daemons.
# insserv wdmd
# insserv sanlock

Specify the sanlock lock manager in /etc/libvirt/qemu.conf.
lock_manager = “sanlock”

The suggested libvirt sanlock configuration uses NFS for shared lock space storage.  Mount a share at the default mount point.
# mount -t nfs nfs-server:/export/path /var/lib/libvirt/sanlock

These installation steps need to be performed on each host participating in the sanlock-protected environment.

libvirt provides two modes for configuring sanlock.  The default mode requires a user or management application to manually define the sanlock lockspace and resource leases, and then describe those leases with a lease element in the virtual machine XML configuration.  libvirt also supports an auto disk lease mode, where libvirt will automatically create a lockspace and lease for each fully qualified disk path in the virtual machine XML configuration.  The latter mode removes the administrator burden of configuring lockspaces and leases, but only works if the administrator can ensure stable and unique disk paths across all participating hosts.  I’ll describe both modes here, starting with the manual configuration.

Manual Configuration:
First we need to reserve and initialize host_id leases.  Each host that wants to participate in the sanlock-enabled environment must first acquire a lease on its host_id number within the lockspace.  The lockspace requirement for 2000 leases (2000 possible host_ids) is 1 MB (8 MB for 4k sectors).  On one host, create a 1M lockspace file in the default lease directory (/var/lib/libvirt/sanlock/).
# truncate -s 1M /var/lib/libvirt/sanlock/TEST_LS

And then initialize the lockspace for storing host_id leases.
# sanlock direct init -s TEST_LS:0:/var/lib/libvirt/sanlock/TEST_LS:0

On each participating host, start the watchdog and sanlock daemons and restart libvirtd.
# rcwdmd start; rcsanlock start; rclibvirtd restart

On each participating host, we’ll need to tell the sanlock daemon to acquire its host_id in the lockspace, which will subsequently allow resources to be acquired in the lockspace.
host1:
# sanlock client add_lockspace -s TEST_LS:1:/var/lib/libvirt/sanlock/TEST_LS:0
host2:
# sanlock client add_lockspace -s TEST_LS:2:/var/lib/libvirt/sanlock/TEST_LS:0
hostN:
# sanlock client add_lockspace -s TEST_LS:<hostidN>:/var/lib/libvirt/sanlock/TEST_LS:0

To see the state of host_id leases read during the last renewal:
# sanlock client host_status -s TEST_LS
1 timestamp 50766
2 timestamp 327323

Now that we have the hosts configured, time to move on to configuring a virtual machine resource lease and defining it in the virtual machine XML configuration.  First we need to reserve and initialize a resource lease for the virtual machine disk image.
# truncate -s 1M /var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock
# sanlock direct init -r TEST_LS:sles11sp2-disk-resource-lock:/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock:0

Then add the lease information to the virtual machine XML configuration
# virsh edit sles11sp2

<lease>
<lockspace>TEST_LS</lockspace>
<key>sles11sp2-disk-resource-lock</key>
<target path='/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock'/>
</lease>

Finally, start the virtual machine!
# virsh start sles11sp2
Domain sles11sp2 started

Trying to start the same virtual machine on a different host will fail, since the resource lock is already leased to another host:
other-host:~ # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Automatic disk lease configuration:
As can be seen even with the trivial example above, manual disk lease configuration puts quite a burden on the user, particularly in an ad hoc environment with only a few hosts and no central management service to coordinate all of the lockspace and resource configuration.  To ease this burden, Daniel Berrange added support in libvirt for automatically creating sanlock disk leases.  Once the environment is configured for automatic disk leases, libvirt will handle the details of creating lockspace and resource leases.

On each participating host, edit /etc/libvirt/qemu-sanlock.conf, setting auto_disk_leases to 1 and assigning a unique host_id.
auto_disk_leases = 1
host_id = 1

Then restart libvirtd
# rclibvirtd restart

Now libvirtd+sanlock is configured to automatically acquire a resource lease for each virtual machine disk.  No lease configuration is required in the virtual machine XML configuration.  We can simply start the virtual machine and libvirt will handle all the details for us.

host1 # virsh start sles11sp2
Domain sles11sp2 started

libvirt creates a host lease lockspace named __LIBVIRT__DISKS__.  Disk resource leases are named using the MD5 checksum of the fully qualified disk path.  After starting the above virtual machine, the lease directory contained:
host1 # ls -l /var/lib/libvirt/sanlock/
total 2064
-rw-------  1 root root 1048576 Mar 13 01:35 3ab0d33a35403d03e3ad10b485c7b593
-rw-------  1 root root 1048576 Mar 13 01:35 __LIBVIRT__DISKS__
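
The hex name above is simply an MD5 digest. Assuming the digest is taken over the raw path string (an assumption on my part; the exact input libvirt hashes may differ), you can sketch the naming scheme with md5sum. The disk path below is hypothetical:

```shell
# Compute the MD5 of a (hypothetical) fully qualified disk path;
# libvirt names the auto disk lease file after a digest like this.
printf '%s' /var/lib/libvirt/images/sles11sp2.img | md5sum | cut -d' ' -f1
```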

Finally, try to start the virtual machine on another participating host
host2 # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Feel free to try the sanlock and sanlock-enabled libvirt packages from openSUSE Factory or our OBS Virtualization project. One thing to keep in mind is that the sanlock daemon protects resources for some process, in this case qemu/kvm.  If the sanlock daemon is terminated, it can no longer protect those resources and kills the processes for which it holds leases.  In other words, restarting the sanlock daemon will terminate your virtual machines!  If the sanlock daemon is SIGKILL’ed, then the watchdog daemon intervenes by resetting the entire host.  With this in mind, it would be wise to consider an appropriate disk cache mode such as ‘none’ or ‘writethrough’ to improve the integrity of your disk images in the event of a mass virtual machine kill off.


Google Summer of Code at LibreOffice: become famous and maybe rich!

So, just before the weekend, we received the great news that Google chose LibreOffice as a mentoring organisation for Google Summer of Code again this year. Some of you might remember that last year we had several extremely successful Google Summer of Code projects and that two of our successful students are currently employed working on free and open-source software as a direct consequence of their participation in the program. I had the privilege to mentor Eilidh McAdam, and we implemented a Visio import filter that is one of the flagship features of LibreOffice 3.5. Eilidh is now employed by Lanedo.

This year, Valek Filippov and I proposed two projects related to reverse-engineered file formats. The first is the implementation of an MS Publisher import filter for LibreOffice, and the second is to help improve and extend the Corel Draw import filter that will be part of the LibreOffice 3.6 release. Both projects require a working knowledge of C++ and a lot of good will. Each of the import filters consists of a standalone library and glue that plugs the library into LibreOffice. These libraries can be built as system libraries, and LibreOffice can use them from the system. The advantage of this approach for a participating student is that there is only minimal need to recompile LibreOffice if some substantial part of the glue (which is rather small) changes. Therefore, I encourage all of you who are considering applying with LibreOffice for this year's Google Summer of Code to have a close look at those two projects. As a bonus, if you are successful, you become famous and eventually rich.

You can have a look at libcdr, the horsepower behind the Corel Draw import filter, and at the skeleton of libmspub, which will be the basis of the Publisher import filter. And don't hesitate to become rich and famous with Google Summer of Code at LibreOffice!

Jeffrey Stedfast

Introducing MonoTouch.SQLite

I've been working on a personal side project: an app for the iPad that makes use of SQLite-Net and displays its data in a UITableView.

Up until this past week, I had been using Miguel de Icaza's wonderful MonoTouch.Dialog library for displaying my data. Unfortunately, I wanted search filtering to be persistent, which means that I really needed to use Apple's UISearchDisplayController, but I couldn't find an easy way to retrofit that onto MonoTouch.Dialog's DialogViewController to replace the simpler UISearchBar API it currently uses. Since I had to create an alternate solution anyway, I figured I might as well solve the other potential problem I had with MonoTouch.Dialog: my app really needed to be able to handle tables with a massive number of items. This brings us to...

MonoTouch.SQLite

MonoTouch.Dialog really made using UITableViews in iPhone and iPad apps trivial and I wanted to try and repeat at least some of that with MonoTouch.SQLite.

The first thing I had to do was figure out a way of modeling the data that would allow a generic class to do most of the work for the developer. After a few sleepless nights of hacking last weekend, I came up with a fairly simple approach that seems to work pretty well. I started off thinking that I wouldn't be able to get around having a subclassable model, so I made nearly everything virtual. This is the public API that I came up with:

public class SQLiteTableModel<T> : IDisposable where T : new ()
{
    public SQLiteTableModel (SQLiteConnection sqlitedb, int pageSize, SQLiteOrderBy orderBy, string sectionExpr);

    // 2 ways of setting the search criteria
    public SQLiteWhereExpression SearchExpression { get; set; }
    public string SearchText { get; set; }

    // Gets the total number of table rows
    public int Count { get; }

    // Gets the number of table sections
    public int SectionCount { get; }

    // Gets the section titles
    public string[] SectionTitles { get; }

    // Gets the row count for a particular section
    public int GetRowCount (int section);

    // Get the index of an item
    public int IndexOf (T item, IComparer<T> comparer);

    // Convert item index into a section and row
    public bool IndexToSectionAndRow (int index, out int section, out int row);

    // Convert section and row into an item index
    public int SectionAndRowToIndex (int section, int row);

    // 2 ways of getting an item
    public T GetItem (int section, int row);
    public T GetItem (int index);

    // Reset the state of the model
    public void ReloadData ();
}

You can see the full class implementation here.

It turns out, though, that it really isn't necessary to subclass my model unless you want finer control over the specific SQL queries that it makes (all of those methods are virtual).
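To give a feel for how the model plugs together, here's a rough sketch. The Contact type, the database filename, and the exact SQLiteOrderBy constructor and section expression below are my illustrative assumptions, not necessarily the library's literal API:

```csharp
// Hypothetical record type; any SQLite-Net-mapped class with a
// parameterless constructor satisfies the `where T : new ()` constraint.
public class Contact {
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Open the database and create the model: 20 rows per page,
// ordered by LastName, sectioned by the first letter of LastName.
var sqlitedb = new SQLiteConnection ("contacts.db");
var model = new SQLiteTableModel<Contact> (sqlitedb, 20,
    new SQLiteOrderBy ("LastName"), "substr (LastName, 1, 1)");

// Narrow the results, then page items in on demand.
model.SearchText = "Jane";
Console.WriteLine ("{0} matches in {1} sections", model.Count, model.SectionCount);
var first = model.GetItem (0);
```

The point of the pageSize parameter is that the model only ever needs to query a window of rows at a time, which is what makes very large tables tractable.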

SQLiteTableViewController<T>

As I started porting my iPad app to use my SQLiteTableModel class, I started to realize that I could abstract a lot of my usage of the model into a reusable base class. What I came up with will blow your mind.

Are you ready?

In order to populate a UITableView with the contents of an SQLite table, all you have to do is subclass SQLiteTableViewController<T> and implement 1 method:

protected UITableViewCell GetCell (UITableView tableView, NSIndexPath path, T item)

That's it.
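A minimal controller might look something like the sketch below. I'm assuming the controller's constructor mirrors the model's, and the Contact type and cell layout are just placeholders:

```csharp
public class ContactsViewController : SQLiteTableViewController<Contact>
{
    static readonly NSString CellId = new NSString ("ContactCell");

    // Assumed constructor signature, mirroring SQLiteTableModel<T>'s.
    public ContactsViewController (SQLiteConnection sqlitedb)
        : base (sqlitedb, 20, new SQLiteOrderBy ("LastName"), "substr (LastName, 1, 1)")
    {
    }

    // The single method you must implement: bind one item to one cell.
    protected override UITableViewCell GetCell (UITableView tableView, NSIndexPath path, Contact item)
    {
        var cell = tableView.DequeueReusableCell (CellId)
            ?? new UITableViewCell (UITableViewCellStyle.Subtitle, CellId);

        cell.TextLabel.Text = string.Format ("{0} {1}", item.FirstName, item.LastName);
        cell.DetailTextLabel.Text = item.PhoneNumber;

        return cell;
    }
}
```

Everything else, from section headers to paging rows in and out of memory, is handled by the base class.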

I've written up a sample iPhone app that illustrates just how easy this is.

But wait! There's more!

If you order in the next 30 minutes, you can also get this free complimentary Sham-Wow!

Okay, just kidding about that Sham-Wow! bit, but I wasn't kidding about there being more:

Remember when I said one of the problems I wanted to solve was persistent search filtering? Yeah, well, I did it. SQLiteTableViewController handles all of that for you as well. In fact, give searching a try in that sample above.

At this point I bet you're thinking, "wow, how could this get any better?"

I'll tell you. Remember how SQLiteTableModel had 2 ways of setting the search criteria? Well, the one that takes a string parses it into a SQLiteWhereExpression, allowing the user to match against specific fields. For example, if you had the following data item:

public class Contact {
    public string FirstName;
    public string LastName;
    public string PhoneNumber;
    public string Address;
    public string Comments;
}

...the user could type:

address:"Newton, MA"

and SQLiteTableModel would construct a query to match "Newton, MA" against only the Address field.

If the user, instead, types:

address:"Newton, MA" firstname:Jane

then the matches displayed in the list would be limited to contacts with a first name of "Jane" who also live in "Newton, MA".
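In SQL terms, a query like that presumably reduces to something along these lines. This is my sketch of the conceptual translation, not the library's literal output:

```csharp
// Setting the plain-text search criteria...
model.SearchText = "address:\"Newton, MA\" firstname:Jane";

// ...would conceptually produce a WHERE clause like:
//
//   SELECT * FROM Contact
//   WHERE Address LIKE '%Newton, MA%' AND FirstName LIKE '%Jane%'
//
// whereas an unqualified term, e.g. model.SearchText = "Jane",
// would be matched against every searchable field instead.
```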

I've also taken the liberty of implementing a SQLiteSearchAliasAttribute that allows you to specify aliases for your fields (or even the same alias to multiple fields!). For example, you could do this:

public class Contact {
    [SQLiteSearchAlias ("first")][SQLiteSearchAlias ("name")]
    public string FirstName;
    [SQLiteSearchAlias ("last")][SQLiteSearchAlias ("name")]
    public string LastName;
    [SQLiteSearchAlias ("phone")]
    public string PhoneNumber;
    public string Address;
    public string Comments;
}

This would allow your users to use "name" to match against either FirstName or LastName!

It also means they can type "first" instead of "firstname" to match against only the first name, as in the above example.

Where Can I Find This Awesome Library?

Glad you asked! You can find it on my GitHub page: MonoTouch.SQLite.

Well? What are you waiting for? Get hacking!