

Mass Management Configuration Tool Puppet on openSUSE 12.1

What is "Puppet"?

Puppet is one of the open-source solutions for mass-management configuration with centralized storage of recipes. Recipes describe the resulting state of a specific system including how to get to that state, e.g., by installing packages, running scripts, enabling services, uploading whole files with configuration, etc.

This Example

I'll describe a simple example of how to adjust an NTP client to use pool.ntp.org for time synchronization, including the Puppet server configuration.

Ingredients

  • One system acting as a server with openSUSE 12.1 (can be virtual), e.g., server.example.com
  • One or more systems acting as clients with openSUSE 12.1 (can be virtual), e.g., client.example.com
  • Text-file editor
  • Package zypper installed on server and client(s) - it's usually installed by default anyway
  • Patience :) The actual amount depends on the current level of sunspots visible

Preparation time: depends on how much you want to play with a recipe.
Cook time: several seconds
Important: before configuring both server and client, make sure that they have approximately the same date and time set (commands: date, date -s YYYY-MM-DD; date -s HH:MM:SS); otherwise you might hit certificate issues.

Configuring a Puppet Server

In this section, we'll install the needed server software, configure the firewall, and configure and start the Puppet server.

Log into the server and make these changes:
  • Run:  zypper in puppet-server yast2-firewall  # to install all required packages
  • Run:  yast2 firewall services add service=service:puppetmasterd zone=EXT  # to open needed port in firewall
  • Run:  mkdir -pv /var/lock/subsys/  # to work around a bug in the Puppet package
  • Add client.example.com (or even *.example.com) to /etc/puppet/autosign.conf  # to work around a bug in Puppet
  • Start Puppet server:  rcpuppetmasterd start 
  • Enable Puppet server during boot process:  insserv puppetmasterd 
The issues mentioned above might not actually be real bugs, but these steps made my life easier. I haven't reported them yet.

Client Configuration

Now, we'll configure a client machine to have all the required packages installed and to use the correct Puppet server.
  • Run:  zypper patch  # to install the latest updates - especially patch for systemd package
  • Run:  zypper in puppet rubygems  # to install all required packages
  • Add server=server.example.com to the [main] section in /etc/puppet/puppet.conf. If your server's name were puppet.example.com, it would work even without this line.
  • Run:  mkdir -pv /var/lock/subsys/  # to work around a bug in Puppet
  • Start Puppet client service:  rcpuppet start 
  • Enable Puppet client during boot process:  insserv puppet 
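For reference, the resulting client configuration from the list above would look roughly like this (a minimal sketch; only the server setting is required, everything else keeps its defaults):

```ini
# /etc/puppet/puppet.conf on the client
[main]
    server = server.example.com
```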
Let's check the connection to the server. Run  puppetd --test , which will contact the Puppet server and try to fetch the recipes for your client.

Creating a Recipe on the Server

Create a recipe on the server for your client.example.com in /etc/puppet/manifests/site.pp containing:

  class ntp {
    package { 'ntp': ensure => installed }
    package { 'grep': ensure => installed }
    package { 'coreutils': ensure => installed }

    exec { 'add_ntp_server':
      subscribe  => Package['ntp'],
      command    => 'echo "server pool.ntp.org" >> /etc/ntp.conf',
      path       => '/sbin:/usr/bin:/bin:/usr/sbin',
      logoutput  => 'on_failure',
      unless     => 'grep --quiet "^server pool.ntp.org$" /etc/ntp.conf',
    }

    service { 'ntp':
      subscribe  => Exec['add_ntp_server'],
      ensure     => running,
      enable     => true,
      hasrestart => true,
    }
  }

  node 'client.example.com' {
    include ntp
  }

All the sections are quite self-descriptive, anyway...
  • ntp class is a simple service definition that describes required packages, service configuration and service handling
  • package type makes sure the required package gets installed - some packages might be required by exec later
  • exec runs a script under some specific conditions (onlyif, unless), here it would maybe make sense to use augeas instead
  • node defines which classes should be applied to a particular system
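As mentioned above, the exec could perhaps be replaced by an augeas resource. A rough, untested sketch of that idea (it assumes the Augeas ntp lens is available and that appending a server entry this way matches the lens' tree layout):

```puppet
# Hypothetical alternative to the exec resource; assumes the ntp lens.
augeas { 'add_ntp_server':
  context => '/files/etc/ntp.conf',
  changes => 'set server[last()+1] pool.ntp.org',
  notify  => Service['ntp'],
}
```

The advantage over exec would be that Augeas edits the file idempotently without needing a separate unless guard.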

Applying a Recipe on the Client

Simply run  puppetd --test  to apply the configuration. You should get something similar to this:

info: Caching catalog for client.example.com
info: Applying configuration version '1333044101'
notice: /Stage[main]/Ntp/Package[ntp]/ensure: created
info: /Stage[main]/Ntp/Package[ntp]: Scheduling refresh of Exec[add_ntp_server]
notice: /Stage[main]/Ntp/Exec[add_ntp_server]/returns: executed successfully
info: /Stage[main]/Ntp/Exec[add_ntp_server]: Scheduling refresh of Service[ntp]
notice: /Stage[main]/Ntp/Exec[add_ntp_server]: Triggered 'refresh' from 1 events
info: /Stage[main]/Ntp/Exec[add_ntp_server]: Scheduling refresh of Service[ntp]
notice: /Stage[main]/Ntp/Service[ntp]/ensure: ensure changed 'stopped' to 'running'
notice: /Stage[main]/Ntp/Service[ntp]: Triggered 'refresh' from 2 events
notice: Finished catalog run in 8.43 seconds

Otherwise, Puppet should give you enough information about what went wrong. Checking /var/log/messages and /var/log/puppet/puppet.log on the client sounds like a good idea.

Conclusion

Puppet is a powerful tool for managing large computer networks, including clouds, but there are still some glitches (at least in openSUSE packages) that need to be fixed.


How-to Install Eclipse 3.7 (Indigo SR1) in openSUSE 12.1 (GNOME 3) only within 9 Steps

This is my blog post for "How to Install Eclipse 3.7 (Indigo SR1) in openSUSE 12.1 (GNOME 3) in only 9 Steps".

In my case, I installed Eclipse 3.7 (Indigo SR1) because I have to use the Eclipse Marketplace, which offers thousands of plug-ins and extra features for the Eclipse platform. Truth be told, I searched a lot for a package or a "one-click install" file, but the result was always an installation of version 3.6.2. As we all know, in FLOSS there is always a way to overcome and fix problems. Furthermore, as a developer I have to use the "Eclipse IDE for Java EE Developers". Here are the instructions on how to install this edition of Eclipse via the terminal. (Some images are in Greek, since I use Greek as my system language.)

1st Step: You have to download the version you wish to install from the official Eclipse website. In our case we choose Linux 32-bit.

2nd Step: We download the .tar.gz file; I suggest saving it in /home/your-user-name/Downloads.

3rd Step: We open the terminal and type

cd ~/Downloads/

tar -xvf eclipse-jee-indigo-SR1-linux-gtk.tar.gz

in order to uncompress the downloaded file.

4th Step: We search for the "Alacarte" application.

5th Step: We click on it to open the application.

6th Step: After the 5th step, we click on the button at the right to add a new "Application Launcher".

7th Step: We fill in the fields and add the Eclipse icon (browse to ~/Downloads/eclipse/icon.xpm using the "Browse" button). The result of this process is shown here.

8th Step: Then we click on "Activities -> Applications", where we see "Eclipse" and can launch it.

9th Step: Just enjoy Eclipse!
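For reference, the launcher that Alacarte creates in steps 4 to 7 boils down to a small .desktop file; you could create one by hand instead. A sketch (the Exec and Icon paths are assumptions based on the extraction location in step 2):

```shell
# Create an application launcher by hand instead of using Alacarte.
# Paths assume Eclipse was extracted under ~/Downloads as in step 2.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/eclipse.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse IDE for Java EE Developers
Exec=/home/your-user-name/Downloads/eclipse/eclipse
Icon=/home/your-user-name/Downloads/eclipse/icon.xpm
Terminal=false
Categories=Development;IDE;
EOF
```

GNOME picks launchers up from ~/.local/share/applications, so the entry should appear under Activities just like the Alacarte-created one.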

This how-to consists of 9 steps, 1 less than the KDE one :).


Gloves (Systems Management Library AKA YaST++) Running on Ubuntu

About

You might have seen Jiri's blog post YaST++: next step in system management already. This article describes a new systems-management library that could be used by YaST and WebYaST in the future, or by any other application that can connect to its Ruby API.

Although this project is still in a research phase, we'd like to support other Linux distributions as well. This post describes how to get started with Gloves on Ubuntu.

Installing the System

If you already have Ubuntu installed, you can skip this part.

  • Download Ubuntu Desktop 11.10 for instance from here
  • Install it
  • Log into the system as user
  • Install additional updates if available

Preparing the System for Gloves


Gloves is a library written in Ruby that uses Augeas for parsing configuration files and D-Bus for authorization when called by a non-root user. That's why Gloves needs some additional libraries to be installed.

  • Open xterm or any other shell and continue there...
  • Run: sudo apt-get install git rake libaugeas0 libaugeas-dev rubygems libopen4-ruby libaugeas-ruby
  • Download the latest .gem file from package rubygem-packaging_rake_tasks in openSUSE build service
  • Rename the downloaded gem: mv packaging_rake_tasks*.gem* packaging_rake_tasks.gem
  • Install the downloaded .gem using: sudo gem install packaging_rake_tasks.gem

Gloves from the Sources



  • Download the sources
    Read-only: git clone https://github.com/yast/yast--.git
    Or read/write: git clone git@github.com:yast/yast--.git
  • Install the sources
    cd yast--
    sudo rake install

Running an Example


I've tried root access only, omitting D-Bus for now. I'd be glad if somebody tried that and wrote another blog post describing the steps required to make it work.

  • sudo su
  • cd yast--
  • cd yast++lib-users/examples
  • ./users_read
    This should print a Ruby map containing all system users, depending on your configuration; for instance, something similar to this:

    {...,
      "root"=>{"name"=>"root", "gid"=>"0", "uid"=>"0",
        "shell"=>"/bin/bash", "home"=>"/root", "password"=>"x"},
      ...,
      "george"=>{"name"=>"George,,,", "gid"=>"1000",
        "uid"=>"1000", "shell"=>"/bin/bash",
        "home"=>"/home/george", "password"=>"x"},
    ...}
Congratulations! :) The basic Gloves now work on your Ubuntu.
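The map above essentially mirrors /etc/passwd. This is not the Gloves API, just a minimal Ruby sketch that builds a similar structure directly from /etc/passwd, to illustrate the shape of the data:

```ruby
# Build a users map keyed by login name, similar to the example output above.
users = {}
File.readlines('/etc/passwd').each do |line|
  next if line.start_with?('#') || line.strip.empty?
  fields = line.chomp.split(':')
  users[fields[0]] = {
    'name'     => fields[0],
    'password' => fields[1],
    'uid'      => fields[2],
    'gid'      => fields[3],
    'home'     => fields[5],
    'shell'    => fields[6],
  }
end
puts users['root'].inspect
```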

Where to go next? Explore the Gloves source code. Check out the documentation. Read more about the library architecture. Have more fun with the openSUSE tutorial. Give us your feedback at yast-devel@opensuse.org mailing-list.


Heads up: OBS SLES 11 PHP stuff migrates to SP2! – server:php:applications/SLE11 repository

Companies maintaining production servers often want stable, long-lived operating system and software distributions. The idea is that, once set up, things should not break, and major upgrades should only be needed at long intervals and at points in time which best suit business operation. In the SUSE family of operating systems, openSUSE is the young and adventurous (fast-paced, community-driven) branch, while the SUSE Linux Enterprise (SLE Server and SLE Desktop) side is a stable, paid-for building block you can depend your mission-critical data on. Enterprise platforms have a certain drawback: to maintain stability, they don't ship the latest and greatest software packages, and they try to stick to a certain set of core options and extensions.

For PHP software, the Open Build Service server:php:applications repository is one of the primary sources of up-to-date versions of software like PHPUnit, phpMyAdmin, Horde Groupware or WordPress. While it is acceptable and desirable to have a stable and well-maintained (though old) version of Apache or MySQL, you don't want to sit on an aged version of web applications, or of libraries and frameworks for developers.

Today, this repository switched to building against the new SLE11 SP2 code base. SLE11 comes with two PHP options: the php5-* packages, which ship PHP 5.2, and the php53-* packages, which ship PHP 5.3.
Currently, some packages will not build, as they require "php-{something}", which is provided by multiple packages. We are working on resolving this.

The switch was needed because more and more packages rely on a more recent version of PEAR or PHP 5.3 features which simply cannot be provided in Service Pack 1 installations without adding extra repositories like server:php and server:php:extensions (unsupported, recent PHP 5.3 built against SLE11).

Please keep in mind that software from OBS is generally not supported by your SLE license.


Change Tracking for the iPad.


Change Tracking (also known as "tracked changes" or "red-lining") is a feature which is very important for business users. For a lot of people, reviewing change-tracked documents is their daily business. Losing the change-tracking information on the iPad makes the iPad useless (or of limited use) for business users.

Therefore, we decided that the native OOXML layout engine should have change-tracking support. That pretty much wrecked our schedules and release dates, but we felt that change tracking is so essential for business users that it's worth doing. Six months later, we are happy to announce change-tracking support for text insertions, text deletions and comments. Here is what it looks like on the iPad:



You can quickly review all changes made to the document in the side bar. A simple tap on a side bar item brings you to the corresponding change and vice versa.
There is even an additional feature badly missing in Word. You can assign a color to an author by simply tapping on the colored square:



The assignment is permanent across documents, which makes it much quicker to review a document.

We will start a public beta test soon.


Projects at Copel

I've finished another version of the script generator for Copel, now with plugins. I created some Groovy scripts for FreeMind that generate templates, to make life easier for anyone starting to use the tool. I also developed a Django web application to manage the developers' activities; something like that already exists there, but it isn't very nice, since you're forced to do the duration calculations by hand.


Facebook, passwords and… interviews

Reacting to recent reports that some organizations require job applicants to hand over their Facebook passwords, the social network on Friday criticized the practice for undermining its members' expectations of privacy and security. It also stressed that such a move could expose employers to risk, since Facebook requires that passwords be kept secure and is committed to taking measures to protect every member's privacy and security. Still, quite a few people have reservations about Facebook's motives and "moves".

For example: "Facebook, in the past, has not exactly been the best guardian of its users' information and personal data, and quite a lot has been heard about certain things it has done." Paul Stephens, director of policy and advocacy at the Privacy Rights Clearinghouse, told TechNewsWorld: "I believe this gives them an opportunity to recast themselves as saviors of privacy."

However, John Simpson, consumer advocate at Consumer Watchdog, applauded Facebook's move. "I very often criticize Facebook's policies on various issues, particularly its high-handed approach to protecting users' personal data," he told TechNewsWorld. "They are absolutely right on this issue, and I support their position."

What this is all about

Maryland's Department of Corrections asked job applicants to log into Facebook and browse through it, so that the interviewers could see exactly what they had posted on their profiles, on which topics, and whom they had as friends.

Many schools also require student athletes to give their coaches or compliance officers a degree of access to their Facebook pages. This is accomplished through "friend requests" or through software bought from social-media monitoring companies such as Varsity Monitor, so that they can be tracked.

The strong outcry against these practices led Richard Blumenthal, Senator for Connecticut (a northeastern US state), to start drafting a bill that would prohibit employers from asking job applicants for access to their personal Facebook profiles.

What Facebook said

Facebook announced that users should never share their passwords, let anyone access their accounts, or do anything that could endanger the security of their accounts or the privacy of their friends.

Facebook also warns that employers may not have the appropriate policies and training to let reviewers handle private information. And even supposing they do, they may have to assume responsibility for protecting the information the reviewers have already seen.

Because of these scenarios, then, Facebook decided to take measures to protect its users' privacy, whether by engaging policymakers or by initiating legal action.

Distrust prevails!

"'Ironic' would be an understatement," Charles King, principal analyst at Pund-IT, told TechNewsWorld about Facebook's announcement on the violation of users' privacy. "It looks like a classic example of 'do as I say, not as I do', especially if we dig into a debate that took place earlier in the week about the change in Facebook's official language around what used to be its privacy policy."

Last week, Facebook published proposed changes to its Statement of Rights and Responsibilities, replacing the term "data use" with "privacy policy", aiming to win over data-protection advocates in the United States and the European Union.

Charles King went on to say: "I have noticed … a concerted effort to recast Mark Zuckerberg and his company as mature, engaged leaders, rather than the inexperienced misfits [depicted in the movie] 'The Social Network'."

The privacy practices of the big social-networking companies are also under investigation by the US House Energy and Commerce Committee.

In conclusion, Facebook and Twitter made quite a splash when they appeared, but by now there are doubts about what they hold. We should all look after our personal data, because it can be quite vulnerable on the Internet. Since I did an assignment on online privacy last week, I noticed that websites of this kind, and doubts of this kind in general, are surfacing more and more often, whether for a good purpose (Anonymous) or for a "bad" one. Protecting our data is very important, and we should be careful about what we put out there, whether it is a photo, a video and so on. One company, as I also mentioned in my assignment, had created a smartphone application that could identify certain people from a single photo. So be careful with the "awesome" Facebook too. Keep posting freely, though; we need not reach the point of being afraid to express ourselves (that is what the Big Ones want).

(Yes, this article will even get a postscript…)

P.S.: Once, quite a while ago, poor me tried to delete my Facebook profile. After a lot of searching… I managed to "deactivate" it. Yes, strange as that may sound: deactivation! I didn't pay it much attention, since searching no longer found my profile, so I assumed it had been deleted. Being indecisive, after a few days I wanted to create a profile again. This time, though, I didn't have to start from scratch. With one click on "activate", most of my photos reappeared and my "deleted" profile was generally restored. So in the end nothing was deleted… Draw your own conclusions!

The information was taken from an article on the site www.technewsworld.com.


the avatar of Klaas Freitag

CSync and Mirall Development Setup

[Image: a KDE excavator, captioned "Build it!"]

People were asking how to set up a development environment for the syncing client we are currently working on (working title: mirall), which syncs local files to ownCloud and vice versa. While a website about it is not yet finished, I'll try to summarize the setup here. There are some hacks here and there, but that's how I do it today. It will improve over time. Note that this is about a real development setup, not a production build.

Linux and Windows development should go in parallel as easily as possible.

Edit: Please also refer to the official build documentation.

Building CSync

To build mirall, csync must be built first. CSync is hosted in its upstream git repo, and there is also my development branch, which holds the latest changes.

[Image: overview of the build directory setup]

Clone the csync branch into a directory. In parallel to the cloned csync dir, create a new directory buildcsync as a CMake build dir. Change into buildcsync and call cmake like this:

cmake -DCMAKE_BUILD_TYPE="Debug" ../csync

and watch its output. You probably have to satisfy some dependencies; make sure to install all the needed devel packages. You will need log4c, iniparser and sqlite3, plus libssh, libsmbclient and neon for the modules (neon is needed for the ownCloud module). Once cmake succeeds, call make to build it. So far, so easy on Linux.

To build csync for Windows, there are a couple of possibilities. The one I chose was to cross-compile with MinGW under openSUSE. That way I can build everything on one development machine under my preferred system.

For that, I installed the cross-compile and mingw32 packages from the openSUSE Build Service, which really demonstrates its power here. I used the mingw repository. Kudos at this point to Dominik Schmidt, a Tomahawk developer, who helped me a lot to set everything up, and to all the people who maintain the mingw repo in OBS.

Basically, the cross compiler and libs (e.g. the packages mingw32-cross-gcc, mingw32-gcc-c++ and mingw32-cross-cpp) and the dependencies of the software to build have to be installed from the mingw repo. Working out exactly which ones is still an open action item.

After installation you should have some mingw32 tools, such as mingw32-cmake, which should be used to build for Windows.

Now create a directory win, and within it again a buildcsync. In there, start cmake with

mingw32-cmake -DCMAKE_BUILD_TYPE="Debug" -DWITH_LOG4C=OFF ../../csync

That should do it. I did not find log4c for Win32, so I disabled it in the cmake call. Now build with mingw32-make and see if it creates a DLL in the src subdir and csync.exe in the client dir.

Building mirall

For mirall, it works similarly. Mirall uses Qt and is written in C++, so again there are a lot of packages to install. Again, make sure to have the mingw32 Qt packages, for example mingw32-libqt4-devel and more.

However, there are two caveats with mirall:

  • the current development state of mirall needs the latest devel version of csync, which we just built. I tweaked the CMakefile so that if the mirall, csync and build* folders are in the same directory, csync is found by mirall's cmake in the parallel dir. That way I do not have to install the devel version of csync on my system.

  • to build mirall for Windows, it must be ensured that cmake finds the mingw32 Qt tools such as moc. Since the Linux moc is also present on the system, this can cause confusion. Domme pointed me to a script that sets some variables to the correct values to prevent mixing them up:

    cat ../docmake.sh
    # %_mingw32_qt4_platform win32-g++-cross
    export QT_BINDIR=/usr/bin
    export BIN_PRE=i686-w64-mingw32
    /usr/bin/mingw32-cmake \
        -DCMAKE_BUILD_TYPE="Debug" \
        -DQMAKESPEC=win32-g++-cross \
        -DQT_MKSPECS_DIR:PATH=/usr/i686-w64-mingw32/sys-root/mingw/share/qt4/mkspecs \
        -DQT_QT_INCLUDE_DIR=/usr/i686-w64-mingw32/sys-root/mingw/include \
        -DQT_PLUGINS_DIR=/usr/i686-w64-mingw32/sys-root/mingw/lib/qt4/plugins \
        -DQT_QMAKE_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-qmake \
        -DQT_MOC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-moc \
        -DQT_RCC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-rcc \
        -DQT_UIC_EXECUTABLE=${QT_BINDIR}/${BIN_PRE}-uic \
        -DQT_DBUSXML2CPP_EXECUTABLE=${QT_BINDIR}/qdbusxml2cpp \
        -DQT_DBUSCPP2XML_EXECUTABLE=${QT_BINDIR}/qdbuscpp2xml \
        ../../mirall

With that setup I can build both the Linux and the Windows version quite easily. There is still a lot to be solved, such as automated packaging. CMake, as usual, is a great help.

the avatar of Jim Fehlig

libvirt sanlock integration in openSUSE Factory

A few weeks back I found some time to package sanlock for openSUSE Factory, which subsequently allowed enabling the libvirt sanlock driver.  And how might this be useful?  When running qemu/kvm virtual machines on a pool of hosts that are not cluster-aware, it may be possible to start a virtual machine on more than one host, potentially corrupting the guest filesystem.  To prevent such an unpleasant scenario, libvirt+sanlock can be used to protect the virtual machine’s disk images, ensuring we never have two qemu/kvm processes writing to an image concurrently.  libvirt+sanlock provides protection against starting the same virtual machine on different hosts, or adding the same disk to different virtual machines.

In this blog post I’ll describe how to install and configure sanlock and the libvirt sanlock plugin.  I’ll briefly cover lockspace and resource creation, and show some examples of specifying disk leases in libvirt, but users should become familiar with the wdmd (watchdog multiplexing daemon) and sanlock man pages, as well as the lease element specification in libvirt domainXML.  I’ve used SLES11 SP2 hosts and guests for this example, but have also tested a similar configuration on openSUSE 12.1.

The sanlock and sanlock-enabled libvirt packages can be retrieved from a Factory repository or a repository from the OBS Virtualization project.  (As a side note, for those that didn’t know, Virtualization is the development project for virtualization-related packages in Factory.  Packages are built, tested, and staged in this project before submitting to Factory.)

After configuring the appropriate repository for the target host, update libvirt and install sanlock and libvirt-lock-sanlock.
# zypper up libvirt libvirt-client libvirt-python
# zypper in sanlock libsanlock1 libvirt-lock-sanlock

Enable watchdog daemon and sanlock daemons.
# insserv wdmd
# insserv sanlock

Specify the sanlock lock manager in /etc/libvirt/qemu.conf.
lock_manager = "sanlock"

The suggested libvirt sanlock configuration uses NFS for shared lock space storage.  Mount a share at the default mount point.
# mount -t nfs nfs-server:/export/path /var/lib/libvirt/sanlock

These installation steps need to be performed on each host participating in the sanlock-protected environment.

libvirt provides two modes for configuring sanlock.  The default mode requires a user or management application to manually define the sanlock lockspace and resource leases, and then describe those leases with a lease element in the virtual machine XML configuration.  libvirt also supports an auto disk lease mode, where libvirt will automatically create a lockspace and lease for each fully qualified disk path in the virtual machine XML configuration.  The latter mode removes the administrator burden of configuring lockspaces and leases, but only works if the administrator can ensure stable and unique disk paths across all participating hosts.  I’ll describe both modes here, starting with the manual configuration.

Manual Configuration:
First we need to reserve and initialize host_id leases.  Each host that wants to participate in the sanlock-enabled environment must first acquire a lease on its host_id number within the lockspace.  The lockspace requirement for 2000 leases (2000 possible host_id's) is 1MB (8MB for 4k sectors).  On one host, create a 1MB lockspace file in the default lease directory (/var/lib/libvirt/sanlock/).
# truncate -s 1M /var/lib/libvirt/sanlock/TEST_LS
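The sizing quoted above checks out if each host_id lease occupies one sector, which is an assumption consistent with the numbers given:

```python
# 2000 host_id leases, one sector each (assumed layout).
leases = 2000
print(leases * 512)   # 1024000 bytes, roughly 1 MB for 512-byte sectors
print(leases * 4096)  # 8192000 bytes, roughly 8 MB for 4k sectors
```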

And then initialize the lockspace for storing host_id leases.
# sanlock direct init -s TEST_LS:0:/var/lib/libvirt/sanlock/TEST_LS:0

On each participating host, start the watchdog and sanlock daemons and restart libvirtd.
# rcwdmd start; rcsanlock start; rclibvirtd restart

On each participating host, we’ll need to tell the sanlock daemon to acquire its host_id in the lockspace, which will subsequently allow resources to be acquired in the lockspace.
host1:
# sanlock client add_lockspace -s TEST_LS:1:/var/lib/libvirt/sanlock/TEST_LS:0
host2:
# sanlock client add_lockspace -s TEST_LS:2:/var/lib/libvirt/sanlock/TEST_LS:0
hostN:
# sanlock client add_lockspace -s TEST_LS:<hostidN>:/var/lib/libvirt/sanlock/TEST_LS:0

To see the state of host_id leases read during the last renewal:
# sanlock client host_status -s TEST_LS
1 timestamp 50766
2 timestamp 327323

Now that we have the hosts configured, time to move on to configuring a virtual machine resource lease and defining it in the virtual machine XML configuration.  First we need to reserve and initialize a resource lease for the virtual machine disk image.
# truncate -s 1M /var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock
# sanlock direct init -r TEST_LS:sles11sp2-disk-resource-lock:/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock:0

Then add the lease information to the virtual machine XML configuration
# virsh edit sles11sp2

<lease>
<lockspace>TEST_LS</lockspace>
<key>sles11sp2-disk-resource-lock</key>
<target path='/var/lib/libvirt/sanlock/sles11sp2-disk-resource-lock'/>
</lease>

Finally, start the virtual machine!
# virsh start sles11sp2
Domain sles11sp2 started

Trying to start the same virtual machine on a different host will fail, since the resource lock is already leased to another host:
other-host:~ # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Automatic disk lease configuration:
As can be seen even in the trivial example above, manual disk lease configuration puts quite a burden on the user, particularly in an ad hoc environment with only a few hosts and no central management service to coordinate all of the lockspace and resource configuration.  To ease this burden, Daniel Berrange added support in libvirt for automatically creating sanlock disk leases.  Once the environment is configured for automatic disk leases, libvirt will handle the details of creating lockspace and resource leases.

On each participating host, edit /etc/libvirt/qemu-sanlock.conf, setting auto_disk_leases to 1 and assigning a unique host_id.
auto_disk_leases = 1
host_id = 1

Then restart libvirtd
# rclibvirtd restart

Now libvirtd+sanlock is configured to automatically acquire a resource lease for each virtual machine disk.  No lease configuration is required in the virtual machine XML configuration.  We can simply start the virtual machine and libvirt will handle all the details for us.

host1 # virsh start sles11sp2
Domain sles11sp2 started

libvirt creates a host lease lockspace named __LIBVIRT__DISKS__.  Disk resource leases are named using the MD5 checksum of the fully qualified disk path.  After starting the above virtual machine, the lease directory contained:
host1 # ls -l /var/lib/libvirt/sanlock/
total 2064
-rw-------  1 root root 1048576 Mar 13 01:35 3ab0d33a35403d03e3ad10b485c7b593
-rw-------  1 root root 1048576 Mar 13 01:35 __LIBVIRT__DISKS__
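Since each lease file is named after the MD5 of the fully qualified disk path, a few lines of Python can tell you which lease belongs to which disk (the disk path here is hypothetical):

```python
import hashlib

disk_path = '/var/lib/libvirt/images/sles11sp2.img'  # hypothetical disk path
lease_name = hashlib.md5(disk_path.encode()).hexdigest()
# Compare the printed name against the files in /var/lib/libvirt/sanlock/
print(lease_name)
```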

Finally, try to start the virtual machine on another participating host:
host2 # virsh start sles11sp2
error: Failed to start domain sles11sp2
error: internal error Failed to acquire lock: error -243

Feel free to try the sanlock and sanlock-enabled libvirt packages from openSUSE Factory or our OBS Virtualization project.  One thing to keep in mind is that the sanlock daemon protects resources on behalf of some process, in this case qemu/kvm.  If the sanlock daemon is terminated, it can no longer protect those resources, and it kills the processes for which it holds leases.  In other words, restarting the sanlock daemon will terminate your virtual machines!  If the sanlock daemon is SIGKILL'ed, the watchdog daemon intervenes by resetting the entire host.  With this in mind, it would be wise to consider an appropriate disk cache mode such as 'none' or 'writethrough' to improve the integrity of your disk images in the event of a mass virtual machine kill-off.


Google Summer of Code at LibreOffice: become famous and maybe rich!

So, just before the weekend, we received the great news that Google chose LibreOffice as a mentoring organisation for Google Summer of Code again this year. Some of you might remember that last year we had several extremely successful Google Summer of Code projects, and that two of our successful students are currently employed to work on free and open-source software as a direct consequence of their participation in the program. I had the privilege to mentor Eilidh McAdam, and we implemented a Visio import filter that is one of the flagship features of LibreOffice 3.5. Eilidh is now employed by Lanedo.

This year, Valek Filippov and I proposed two projects related to reverse-engineered file formats. The first is the implementation of an MS Publisher import filter for LibreOffice, and the second is to help improve and extend the Corel Draw import filter that will be part of the LibreOffice 3.6 release. Both projects require a working knowledge of C++ and a lot of good will. Each of the import filters consists of a standalone library and the glue that plugs the library into LibreOffice. These libraries can be built as system libraries, and LibreOffice can use them from the system. The advantage of this approach for a student participating in the development is that there is only a minimal need to recompile LibreOffice if some substantial part of the glue (which is rather small) changes. Therefore, I encourage all of you who are considering applying with LibreOffice for this year's Google Summer of Code to have a close look at those two projects. As a bonus, if you are successful, you become famous and eventually rich.

You can have a look at libcdr, the horsepower behind the Corel Draw import filter, and at the skeleton of libmspub, which will be the basis of the Publisher import filter. And don't hesitate to become rich and famous with Google Summer of Code at LibreOffice!