

openSUSE with sudo – but convenient!

If, like me coming from the Debian world, you are used to handling administrative tasks (“root” work) via sudo, you will have some difficulties with openSUSE in the beginning. With just one regular user it still works, because you can set the same password for that user and for root. But as soon as there are more user accounts, that approach is over, unless you give the root password to everyone. Both solutions are certainly somehow practicable, but neither is very nice – especially since sudo is actually installed, just only half set up.

So, on my current openSUSE Tumbleweed, I set out to teach the system a reasonable sudo concept and then apply it to YaST as well. It was a bit fiddly to figure out, but in the end it worked well.

Let’s go!

visudo

By default sudo asks for the root password. This is pretty nonsensical, so let’s change it!

  1. In the first part we still work as a normal user. The exact line numbers may vary depending on the age of the file, the system version, and previous changes to it.
    sudo visudo
  2. Extend the line around line 43 that starts with env_keep = “LANG… by appending the following at the end, inside the quotation marks (a consolidated excerpt of the result follows after this list):
    DISPLAY XAUTHORITY
  3. Comment out lines 68 and 69 completely, so that the password of the “target user” is no longer requested:
    #Defaults       targetpw
    #ALL    ALL = (ALL) ALL
  4. Additionally, uncomment line 81, i.e. delete the leading comment character #:
    %wheel ALL=(ALL) ALL
  5. Save, close and then add your user(s) to the group “wheel” either via YaST or directly in the terminal:
    gpasswd -a <your-username> wheel

    After logging out and back in, the change takes effect, and from now on sudo asks for your own user password in the terminal.
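
For reference, after these edits the relevant parts of /etc/sudoers should look roughly like the excerpt below. The exact variable list inside env_keep and the positions of the lines differ between openSUSE versions; the ellipsis simply stands for whatever your file already contains.

Defaults env_keep = "LANG ... DISPLAY XAUTHORITY"

#Defaults targetpw
#ALL    ALL = (ALL) ALL

%wheel ALL=(ALL) ALL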

YaST

The graphical version of YaST uses PolicyKit for authentication, so a little more work is needed here. From here on you work as root, so switch accounts with su -.

  1. Create a PolicyKit Action for YaST
    vim /usr/share/polkit-1/actions/org.opensuse.pkexec.yast2.policy
  2. Insert the following XML block into the file. Please pay attention to line breaks when copying/pasting.
    <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN" "http://www.freedesktop.org/software/polkit/policyconfig-1.dtd">
    <policyconfig>
    
      <action id="org.opensuse.pkexec.yast2">
        <message>Authentication is required to run YaST2</message>
        <icon_name>yast2</icon_name>
        <defaults>
          <allow_any>auth_self</allow_any>
          <allow_inactive>auth_self</allow_inactive>
          <allow_active>auth_self</allow_active>
        </defaults>
        <annotate key="org.freedesktop.policykit.exec.path">/usr/sbin/yast2</annotate>
        <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
      </action>
    
    </policyconfig>

    Save and close – success can be checked as a regular user with pkexec /usr/sbin/yast2.

  3. Back up the existing local rights configuration and replace it with a copy of the standard system configuration. Our .local file will not be overwritten during an upgrade.
    mv /etc/polkit-default-privs.local /etc/polkit-default-privs.local.bkup
    cp /etc/polkit-default-privs.standard /etc/polkit-default-privs.local

    The necessary adjustment is to replace auth_admin with auth_self everywhere. You can also do this by hand, but with sed it is more convenient and faster:

    sed -i 's/auth_admin/auth_self/g' /etc/polkit-default-privs.local
  4. To make the authentication via PolicyKit work, create a short shell script that will be called from the menu in the future:
    vim /usr/local/sbin/yast2_polkit
  5. The script looks like this, just add it to the yast2_polkit file:
    #!/bin/bash

    # Launch YaST2 through pkexec if PolicyKit's pkexec is available,
    # otherwise fall back to starting YaST2 directly.
    if command -v pkexec > /dev/null 2>&1; then
            pkexec --disable-internal-agent "/usr/sbin/yast2" "$@"
    else
            /usr/sbin/yast2 "$@"
    fi
  6. Save and close. Then make the script executable:
    chmod +x /usr/local/sbin/yast2_polkit
  7. Finally, you create a .desktop file. This will make the modified YaST starter appear directly in the main menu, system-wide for all users. For example, in Xfce it is listed under “Settings”. I have not tested other desktops, but I assume that the starter will end up in a useful place, since it is only a customized copy of the original.
    Of course you could also edit YaST's original file (YaST.desktop), but it would be overwritten during an upgrade. And a copy placed in /usr/local/share/applications is ignored by both the application menu and the Whisker menu.
    So:
    vim /usr/share/applications/YaST2.desktop
  8. Insert and save:
    [Desktop Entry]
    X-SuSE-translate=true
    Type=Application
    Categories=Settings;System;X-SuSE-Core-System;X-SuSE-ControlCenter-System;X-GNOME-SystemSettings;
    Name=YaST2
    Icon=yast
    GenericName=Administrator Settings
    Exec=/usr/local/sbin/yast2_polkit
    Encoding=UTF-8
    Comment=Manage system-wide settings
    Comment[de]=Systemweite administrative Einstellungen
    NoDisplay=false

That's all. With this, logging in as root is no longer necessary, or it can be done comfortably via sudo su - with your own user password. Whether openSUSE's concept is worse or better, I don't want to decide. That is a matter of taste, I think.
What I liked in any case is the clear adherence to standards; this makes finding solutions much easier and faster. Thanks to good documentation and helpful forum posts I was able to finish everything within about an hour – and picked up a good deal of PolicyKit knowledge along the way!


FlightGear fun

How to die in a Boeing 707, quick and easy. Take off, realize that you should set up fuel heating, open the Help menu aiming for the checklists... and hit auto startup/shutdown instead. Instantly lose all the engines. Fortunately, you are at 6000', so you start looking for an airport. Then you
realize "hmm, perhaps I can do the startup thing now", and hit the menu item once again. But instead of running engines, you get fire warnings on all the engines. That does not look good. Confirm the fire, extinguish all four engines, and resume looking for an airport in range. Trim for best glide. Then number 3 comes up. Then number 4. Number one, and you know it will be easy. Number two as you fly over the runway... go around and do a normal approach.

the avatar of Sankar P

Containers 101

The term "containers" became popular recently, thanks to Docker. However, the idea of containers has been around for a long time, through things like Solaris Zones, Linux Containers, etc. (even though the underlying implementations are different). In this post, I try to give a small overview of the containers ecosystem (as it stands in 2017), from my perspective.

This post is written in response to a question from hacker extraordinaire Varun about what one should know about containers as of today. Though the document is mostly generic, some lines of it are India-specific, which I have highlighted clearly. Please mention in the comments if there is anything else that should have been covered, if I have made any mistakes, or if you have any opinions.

So, what exactly are containers?

Containers are a unit of packaging and deployment that guarantees repeatability and isolation. Let us see what each part of that sentence means.

Containers are a packaging tool like RPMs or EARs in the sense that they offer you a way to bundle up your binaries (or sources, in the case of interpreted languages). But instead of merely archiving your sources, containers also provide a way to deploy your archive, repeatably.

Anyone who has done packaging knows how much pain dependency hell can cause. For example, an application A needs a library L of version 0.1, whereas another application B needs the same library L but of version 0.3. Just to screw up the life of packagers, the versions 0.1 and 0.3 may conflict with each other and may not co-exist in a system, even in different installation paths. Containerising your applications puts each of A and B into its own bundle, with its own library dependencies. However, the real power of containerising is that each application, A and B, gets an isolated view, as if it were running in a private environment, so L 0.1 and L 0.3 never share any runtime data.


One may be reminded of Virtual Machines (VMs) while reading the above text. VMs do solve the same isolation problem, but they are very heavy. The fundamental difference between a VM and a container is: a VM virtualizes/abstracts the hardware/operating-system and gives you a machine abstraction, while a container virtualizes/abstracts an application of your choice. Containers are thus very lightweight and far more approachable.

The Ecosystem

Docker is the most used container technology today. There are other container runtimes, such as rkt, too. There is an Open Container Initiative to create standards for container runtimes. All these container runtimes make use of Linux kernel features, especially cgroups, to provide process isolation. Microsoft has been making a lot of effort for quite some time now to support containers natively in the Windows kernel, as part of its Azure cloud offering.

Container orchestration is a way of deploying different containers across a bunch of machines. While Docker is arguably the champion of container runtimes, Kubernetes is unarguably the King/Queen of container orchestration. Google has been using containers in production since long before they became fashionable; in fact, the first patch adding cgroups support to the Linux kernel was submitted to LKML by Google as far back as 2006. Google had (and still has) a large-scale cluster management system named Borg, which deploys containers (not Docker containers) across the humongous Google server farms. Kubernetes is an open source evolution of Borg, supporting Docker containers natively. Docker Swarm is an attempt by Docker (the company behind the Docker project) to achieve container orchestration across machines, but in my limited experience it simply is no competition for Kubernetes in terms of quality, documentation, or feature coverage.

In addition to these, there are some poorly implemented, company-specific tools that try to emulate Kubernetes, but these are mostly technical debt, and it is wise (imho) for companies to ditch such efforts and move to open projects backed by companies like Google, Red Hat and Microsoft. A distinguished engineer once told me, "There is no compression algorithm for experience", and there is no need for us to repeat the mistakes these companies made decades ago. If you are a startup focused on solving a user problem, you should focus on your business problem; container orchestration software should be the last thing you need to implement.

Kubernetes, though initially a Google project, has now attracted a lot of contributors from a variety of companies such as Red Hat, Microsoft, etc. Red Hat has built OpenShift, a platform on top of Kubernetes that provides a lot of useful features such as pipelines, blue-green deployments, etc. They even offer a hosted version. Tectonic (also on top of Kubernetes), by CoreOS, is another big player in this ecosystem, at least in terms of developer mindshare.

SUSE has recently come up with the Kubic project for containers (though I have not played with it myself).

Microsoft has hired some high-profile names from the container ecosystem (including people like Brendan Burns, Jess Frazelle, etc.) to work on Kubernetes and the Azure cloud. Azure is definitely way ahead of Google in India when it comes to the cloud business: their pricing page is localised for India, while Google does not even support Indian currency yet and charges in USD (leading to jokes about the oil/dollar conspiracy among the Indian startup ecosystem ;) ). AWS and Azure definitely have a bigger developer mindshare in India than Google Cloud Platform (as of 2017).

The founding team of Kubernetes (Xooglers) has started a company named Heptio. While I have no doubts about their engineering prowess, I suspect that relying on such companies may be risky for startups in India (lack of same-timezone support, etc.). If you are in the west, these options (and others such as Rancher) may be interesting.

Kubernetes Basics

In Kubernetes, the unit of deployment is a Pod. A Pod is merely a collection of containers that are always deployed together. For example, if your application is an API server that consults a Redis cache before hitting the database for each request, you create a Pod with two containers – an API server container and a Redis container – and deploy them together.
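
As a rough sketch of what such a Pod could look like (the pod name and the API server image below are made up for this example; only the Redis image is a real public one), a minimal manifest might be:

apiVersion: v1
kind: Pod
metadata:
  name: api-with-cache              # hypothetical name for this example
spec:
  containers:
  - name: api-server
    image: example/api-server:1.0   # placeholder image for your API server
    ports:
    - containerPort: 8080
  - name: redis
    image: redis:3.2                # the Redis cache container
    ports:
    - containerPort: 6379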

Kubernetes refers to an umbrella of projects that run on a cloud, to manage that cloud. It has various components, such as an API server to interact with the Kubernetes system, an agent named kubelet that runs on each machine in the cluster, a fluentd-type daemon to accumulate logs from the various containers and provide a single point of access, a web dashboard, a CLI tool named kubectl to perform various operations, etc. In addition to these Kubernetes-specific components, there are other services, such as the distributed key-value store etcd (originally from CoreOS), that you need in order to set up a basic Kubernetes cluster. However, if you are a small company, it is wise to use GKE, Azure hosting or OpenShift hosting instead of deploying your own Kubernetes system managed by your own admins. It is not worth the hassle.

If you want to play with Kubernetes on your development laptop (unless you can afford to treat production as your test box), there is a tool named minikube to help you with that. If you are an application developer considering dockerizing and deploying your application, then minikube is definitely the best place to start.
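
As a sketch of what getting started could look like (exact commands and flags depend on your minikube version and hypervisor; pod.yaml stands for a manifest like the one sketched above):

# Start a single-node local cluster
minikube start

# Point kubectl at it and look around
kubectl get nodes
kubectl get pods --all-namespaces

# Deploy something, e.g. a Pod manifest
kubectl apply -f pod.yaml

# Tear the cluster down when you are done
minikube delete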

There are quite a few Kubernetes meetups happening all around the world; visiting some of these may be enlightening. The webinar series by Janakiram was good, but it is a little too long for my taste and I lost interest halfway. The persistent ones among you may find it very useful.

Docker Compose

One of the tools from the Docker project that I love a lot is the handy Docker Compose. It is a tool for working with multiple containers; in a sense it is somewhat like Kubernetes Pods, but without having to install and manage the heavyweight Kubernetes ecosystem. I use Docker Compose extensively in CI, where it is the perfect fit for end-to-end testing of a web stack if your sources are in a monolithic repository. In your CI system, you can bring up all your components (say, an API server, a database, a front-end Node server) and perform end-to-end testing (say, via Selenium). In fact, I cannot fathom how I did CI before docker-compose, just like I cannot fathom how I used CVS before Git.
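
As an illustration, a compose file for such a CI setup might look roughly like this (the service names, build paths and the Postgres image are placeholders for this sketch, not taken from any particular project):

version: "2"
services:
  api:
    build: ./api              # your API server sources
    depends_on:
      - db
  db:
    image: postgres:9.6       # or whatever database you use
  frontend:
    build: ./frontend         # the front-end Node server
    ports:
      - "8080:8080"
    depends_on:
      - api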

AWS

No blog post on cloud technologies would be complete without mentioning the 800-pound gorilla, Amazon Web Services. Amazon supports containers natively: you can deploy either single-container or multi-container images via Amazon Beanstalk. It is very similar to Google App Engine (if you have used it). Beanstalk is a PaaS offering: it takes a container image and scales it automagically depending on various factors (such as CPU usage, HTTP traffic, etc.). I've run Beanstalk and am very satisfied with it (perhaps not as much as with App Engine, though). It is very reliable, performant and scales well (tested with a few hundred users in my limited experience).

For larger workloads and those who want more control, Amazon offers the Elastic Container Service (ECS). You can create a bunch of EC2 instances and a bunch of containers, and ask ECS to run those containers on those VMs in whatever way you prefer. This, however, locks you into the AWS platform (unlike Kubernetes).

Neither Beanstalk nor ECS costs anything extra beyond the price of the VMs, which you already pay for.

I do wish, however, that Amazon would start supporting Kubernetes natively. There are other ways to make use of Kubernetes on AWS: the most enterprisey is probably Tectonic by CoreOS, but there are also projects like kube-aws and kops.

Conclusion:

If you have actually read until this point, thanks a lot :-) I could have written in a bit more detail about the nuts and bolts of container technology, but I believe that this post, as it is, makes good material for a 101-style introduction. Also, there are people with far more working knowledge than me who are better equipped to write about the details, so I leave it as an exercise for the reader to find such talks, blogs and books :)

the avatar of Federico Mena-Quintero

How glib-rs works, part 3: Boxed types

(First part of the series, with index to all the articles)

Now let's get on and see how glib-rs handles boxed types.

Boxed types?

Let's say you are given a sealed cardboard box with something, but you can't know what's inside. You can just pass it on to someone else, or burn it. And since computers are magic duplication machines, you may want to copy the box and its contents... and maybe some day you will get around to opening it.

That's a boxed type. You get a pointer to something, who knows what's inside. You can just pass it on to someone else, burn it — I mean, free it — or since computers are magic, copy the pointer and whatever it points to.

That's exactly the API for boxed types.

typedef gpointer (*GBoxedCopyFunc) (gpointer boxed);
typedef void (*GBoxedFreeFunc) (gpointer boxed);

GType g_boxed_type_register_static (const gchar   *name,
                                    GBoxedCopyFunc boxed_copy,
                                    GBoxedFreeFunc boxed_free);

Simple copying, simple freeing

Imagine you have a color...

typedef struct {
    guchar r;
    guchar g;
    guchar b;
} Color;

If you had a pointer to a Color, how would you copy it? Easy:

Color *copy_color (Color *a)
{
    Color *b = g_new (Color, 1);
    *b = *a;
    return b;
}

That is, allocate a new Color, and essentially memcpy() the contents.

And to free it? A simple g_free() works — there are no internal things that need to be freed individually.
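
For completeness, the free function and the boxed-type registration for this hypothetical Color could be sketched like this (color_get_type() and the "Color" type name are invented for the example; the registration uses the g_boxed_type_register_static() API shown above):

void
free_color (Color *a)
{
    g_free (a);
}

GType
color_get_type (void)
{
    static GType type = 0;

    if (type == 0)
        type = g_boxed_type_register_static ("Color",
                                             (GBoxedCopyFunc) copy_color,
                                             (GBoxedFreeFunc) free_color);

    return type;
}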

Complex copying, complex freeing

And if we had a color with a name?

typedef struct {
    guchar r;
    guchar g;
    guchar b;
    char *name;
} ColorWithName;

We can't just do *b = *a here, as we actually need to copy the string name. Okay:

ColorWithName *copy_color_with_name (ColorWithName *a)
{
    ColorWithName *b = g_new (ColorWithName, 1);
    b->r = a->r;
    b->g = a->g;
    b->b = a->b;
    b->name = g_strdup (a->name);
    return b;
}

The corresponding free_color_with_name() would g_free(b->name) and then g_free(b), of course.
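
Spelled out, that free function would look something like this:

void
free_color_with_name (ColorWithName *a)
{
    g_free (a->name);   /* free the owned string first */
    g_free (a);         /* then the struct itself */
}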

Glib-rs and boxed types

Let's look at this by parts. First, a BoxedMemoryManager trait to define the basic API to manage the memory of boxed types. This is what defines the copy and free functions, like above.

pub trait BoxedMemoryManager<T>: 'static {
    unsafe fn copy(ptr: *const T) -> *mut T;
    unsafe fn free(ptr: *mut T);
}

Second, the actual representation of a Boxed type:

pub struct Boxed<T: 'static, MM: BoxedMemoryManager<T>> {
    inner: AnyBox<T>,
    _dummy: PhantomData<MM>,
}

This struct is generic over T, the actual type that we will be wrapping, and MM, something which must implement the BoxedMemoryManager trait.

Inside, it stores inner, an AnyBox, which we will see shortly. The _dummy: PhantomData<MM> is a Rust-ism to indicate that although this struct doesn't actually store a memory manager, it acts as if it does — it does not concern us here.

The actual representation of boxed data

Let's look at that AnyBox that is stored inside a Boxed:

enum AnyBox<T> {
    Native(Box<T>),
    ForeignOwned(*mut T),
    ForeignBorrowed(*mut T),
}

We have three cases:

  • Native(Box<T>) - this boxed value T comes from Rust itself, so we know everything about it!

  • ForeignOwned(*mut T) - this boxed value T came from the outside, but we own it now. We will have to free it when we are done with it.

  • ForeignBorrowed(*mut T) - this boxed value T came from the outside, but we are just borrowing it temporarily: we don't want to free it when we are done with it.

For example, if we look at the implementation of the Drop trait for the Boxed struct, we will indeed see that it calls the BoxedMemoryManager::free() only if we have a ForeignOwned value:

impl<T: 'static, MM: BoxedMemoryManager<T>> Drop for Boxed<T, MM> {
    fn drop(&mut self) {
        unsafe {
            if let AnyBox::ForeignOwned(ptr) = self.inner {
                MM::free(ptr);
            }
        }
    }
}

If we had a Native(Box<T>) value, it means it came from Rust itself, and Rust knows how to Drop its own Box<T> (i.e. a chunk of memory allocated in the heap).

But for external resources, we must tell Rust how to manage them. Again: in the case where the Rust side owns the reference to the external boxed data, we have a ForeignOwned and Drop it by free()ing it; in the case where the Rust side is just borrowing the data temporarily, we have a ForeignBorrowed and don't touch it when we are done.

Copying

When do we have to copy a boxed value? For example, when we transfer from Rust to Glib with full transfer of ownership, i.e. the to_glib_full() pattern that we saw before. This is how that trait method is implemented for Boxed:

impl<'a, T: 'static, MM: BoxedMemoryManager<T>> ToGlibPtr<'a, *const T> for Boxed<T, MM> {
    fn to_glib_full(&self) -> *const T {
        use self::AnyBox::*;
        let ptr = match self.inner {
            Native(ref b) => &**b as *const T,
            ForeignOwned(p) | ForeignBorrowed(p) => p as *const T,
        };
        unsafe { MM::copy(ptr) }
    }
}

See the MM::copy(ptr) in the last line? That's where the copy happens. The lines above just get the appropriate pointer to the data from the AnyBox and cast it.

There is extra boilerplate in boxed.rs which you can look at; it's mostly a bunch of trait implementations to copy the boxed data at the appropriate times (e.g. the FromGlibPtrNone trait), also an implementation of the Deref trait to get to the contents of a Boxed / AnyBox easily, etc. The trait implementations are there just to make it as convenient as possible to handle Boxed types.

Who implements BoxedMemoryManager?

Up to now, we have seen things like the implementation of Drop for Boxed, which uses BoxedMemoryManager::free(), and the implementation of ToGlibPtr which uses ::copy().

But those are just the trait's "abstract" methods, so to speak. What actually implements them?

Glib-rs has a general-purpose macro to wrap Glib types. It can wrap boxed types, shared pointer types, and GObjects. For now we will just look at boxed types.

Glib-rs comes with a macro, glib_wrapper!(), that can be used in different ways. You can use it to automatically write the boilerplate for a boxed type like this:

glib_wrapper! {
    pub struct Color(Boxed<ffi::Color>);

    match fn {
        copy => |ptr| ffi::color_copy(mut_override(ptr)),
        free => |ptr| ffi::color_free(ptr),
        get_type => || ffi::color_get_type(),
    }
}

This expands to an internal glib_boxed_wrapper!() macro that does a few things. We will only look at particularly interesting bits.

First, the macro creates a newtype around a tuple with 1) the actual data type you want to box, and 2) a memory manager. In the example above, the newtype would be called Color, and it would wrap an ffi::Color (say, a C struct).

        pub struct $name(Boxed<$ffi_name, MemoryManager>);

Aha! And that MemoryManager? The macro defines it as a zero-sized type:

        pub struct MemoryManager;

Then it implements the BoxedMemoryManager trait for that MemoryManager struct:

        impl BoxedMemoryManager<$ffi_name> for MemoryManager {
            #[inline]
            unsafe fn copy($copy_arg: *const $ffi_name) -> *mut $ffi_name {
                $copy_expr
            }

            #[inline]
            unsafe fn free($free_arg: *mut $ffi_name) {
                $free_expr
            }
        }

There! This is where the copy/free methods are implemented, based on the bits of code with which you invoked the macro. In the call to glib_wrapper!() we had this:

        copy => |ptr| ffi::color_copy(mut_override(ptr)),
        free => |ptr| ffi::color_free(ptr),

In the impl above, the $copy_expr will expand to ffi::color_copy(mut_override(ptr)) and $free_expr will expand to ffi::color_free(ptr), which defines our implementation of a memory manager for our Color boxed type.

Zero-sized what?

Within the macro's definition, let's look again at the definitions of our boxed type and the memory manager object that actually implements the BoxedMemoryManager trait. Here is what the macro would expand to with our Color example:

        pub struct Color(Boxed<ffi::Color, MemoryManager>);

        pub struct MemoryManager;

        impl BoxedMemoryManager<ffi::Color> for MemoryManager {
            unsafe fn copy(...) -> *mut ffi::Color { ... }
            unsafe fn free(...) { ... }
        }

Here, MemoryManager is a zero-sized type. This means it doesn't take up any space in the Color tuple! When a Color is allocated in the heap, it is really as if it contained an ffi::Color (the C struct we are wrapping) and nothing else.

All the knowledge about how to copy/free ffi::Color lives only in the compiler thanks to the trait implementation. When the compiler expands all the macros and monomorphizes all the generic functions, the calls to ffi::color_copy() and ffi::color_free() will be inlined at the appropriate spots. There is no need to have auxiliary structures taking up space in the heap, just to store function pointers to the copy/free functions, or anything like that.
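
If you want to convince yourself that the memory manager really is free of cost, a tiny standalone Rust snippet (independent of glib-rs) shows that such a unit struct occupies zero bytes:

use std::mem;

// A unit struct, analogous to the MemoryManager generated by the macro.
struct MemoryManager;

fn main() {
    // Zero-sized types take up no space, so carrying one around in a
    // tuple struct adds nothing to its size.
    assert_eq!(mem::size_of::<MemoryManager>(), 0);
}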

Next up

You may have seen that our example call to glib_wrapper!() also passed in a ffi::color_get_type() function. We haven't talked about how glib-rs wraps Glib's GType, GValue, and all of that. We are getting closer and closer to being able to wrap GObject.

Stay tuned!


Ceph Meetup Berlin

I will give a presentation at the next Ceph Meetup in Berlin on the 18th of September. It will be about an exciting project we have been working on at Deutsche Telekom for a while now. The goal of this open source project, called librmb, is to store emails directly in Ceph RADOS instead of using e.g. a NAS system.

For a short abstract, check out the Meetup description. The Meetup is already full, but there is a waiting list you can subscribe to. If you are not able to join, the slides from the talk will be available afterwards. Otherwise, see you in Berlin!

the avatar of Federico Mena-Quintero

The Magic of GObject Introspection

Before continuing with the glib-rs architecture, let's take a detour and look at GObject Introspection. Although it can seem like an obscure part of the GNOME platform, it is an absolutely vital part of it: it is what lets people write GNOME applications in any language.

Let's start with a bit of history.

Brief history of language bindings in GNOME

When we started GNOME in 1997, we didn't want to write all of it in C. We had some inspiration from elsewhere.

Prehistory: GIMP and the Procedural Database

There was already good precedent for software written in a combination of programming languages. Emacs, the flagship text editor of the GNU project, was written with a relatively small core in C, and the majority of the program in Emacs Lisp.

In similar fashion, we were very influenced by the design of the GIMP, which was very innovative at that time. The GIMP has a large core written in C. However, it supports plug-ins or scripts written in a variety of languages. Initially the only scripting language available for the GIMP was Scheme.

The GIMP's plug-ins and scripts run as separate processes, so they don't have immediate access to the data of the image being edited, or to the core functions of the program like "paint with a brush at this location". To let plug-ins and scripts access these data and these functions, the GIMP has what it calls a Procedural Database (PDB). This is a list of functions that the core program or plug-ins wish to export. For example, there are functions like gimp-scale-image and gimp-move-layer. Once these functions are registered in the PDB, any part of the program or plug-ins can call them. Scripts are often written to automate common tasks — for example, when one wants to adjust the contrast of photos and scale them in bulk. Scripts can call functions in the PDB easily, irrespective of the programming language they are written in.

We wanted to write GNOME's core libraries in C, and write a similar Procedural Database to allow those libraries to be called from any programming language. Eventually it turned out that a PDB was not necessary, and there were better ways to go about enabling different programming languages.

Enabling sane memory management

GTK+ started out with a very simple scheme for memory management: a container owned its child widgets, and so on recursively. When you freed a container, it would be responsible for freeing its children.

However, consider what happens when a widget needs to hold a reference to another widget that is not one of its children. For example, a GtkLabel with an underlined mnemonic ("_N_ame:") needs to have a reference to the GtkEntry that should be focused when you press Alt-N. In the very earliest versions of GTK+, how to do this was undefined: C programmers were already used to having shared pointers everywhere, and they were used to being responsible for managing their memory.

Of course, this was prone to bugs. If you have something like

typedef struct {
    GtkWidget parent;

    char *label_string;
    GtkWidget *widget_to_focus;
} GtkLabel;

then if you are writing the destructor, you may simply want to

static void
gtk_label_free (GtkLabel *label)
{
    g_free (label->label_string);
    gtk_widget_free (label->widget_to_focus);   /* oops, we don't own this */

    free_parent_instance (&label->parent);
}

Say you have a GtkBox with the label and its associated GtkEntry. Then, freeing the GtkBox would recursively free the label with that gtk_label_free(), and then the entry with its own function. But by the time the entry gets freed, the line gtk_widget_free (widget_to_focus) has already freed the entry, and we get a double-free bug!

Madness!

That is, we had no idea what we were doing. Or rather, our understanding of widgets had not evolved to the point of acknowledging that a widget tree is not simply a tree, but rather a directed graph of container-child relationships, plus random-widget-to-random-widget relationships. And of course, other parts of the program which are not even widget implementations may need to keep references to widgets and free them or not as appropriate.

I think Marius Vollmer was the first person to start formalizing this. He came from the world of GNU Guile, a Scheme interpreter, and so he already knew how garbage collection and seas of shared references ought to work.

Marius implemented reference-counting for GTK+ — that's where gtk_object_ref() and gtk_object_unref() come from; they eventually got moved to the base GObject class, so we now have g_object_ref() and g_object_unref() and a host of functions to have weak references, notification of destruction, and all the things required to keep garbage collectors happy.

The first language bindings

The very first language bindings were written by hand. The GTK+ API was small, and it seemed feasible to take

void gtk_widget_show (GtkWidget *widget);
void gtk_widget_hide (GtkWidget *widget);

void gtk_container_add (GtkContainer *container, GtkWidget *child);
void gtk_container_remove (GtkContainer *container, GtkWidget *child);

and just wrap those functions in various languages, by hand, on an as-needed basis.

Of course, there is a lot of duplication when doing things that way. As the C API grows, one needs to do more and more manual work to keep up with it.

Also, C structs with public fields are problematic. If we had

typedef struct {
    guchar r;
    guchar g;
    guchar b;
} GdkColor;

and we expect program code to fill in a GdkColor by hand and pass it to a drawing function like

void gdk_set_foreground_color (GdkDrawingContext *gc, GdkColor *color);

then it is no problem to do that in C:

GdkColor magenta = { 255, 0, 255 };

gdk_set_foreground_color (gc, &magenta);

But to do that in a high level language? You don't have access to C struct fields! And back then, libffi wasn't generally available.

Authors of language bindings had to write some glue code, in C, by hand, to let people access a C struct and then pass it on to GTK+. For example, for Python, they would need to write something like

PyObject *
make_wrapped_gdk_color (PyObject *args, PyObject *kwargs)
{
    GdkColor *g_color;
    PyObject *py_color;

    g_color = g_new (GdkColor, 1);
    /* ... fill in g_color->r, g, b from the Python args */

    py_color = wrap_g_color (g_color);
    return py_color;
}

Writing that by hand is an incredible amount of drudgery.

What language bindings needed was a description of the API in a machine-readable format, so that the glue code could be written by a code generator.

The first API descriptions

I don't remember if it was the GNU Guile people, or the PyGTK people, who started to write descriptions of the GNOME API by hand. For ease of parsing, it was done in a Scheme-like dialect. A description may look like

(class GtkWidget
       ;;; void gtk_widget_show (GtkWidget *widget);
       (method show
               (args nil)
               (retval nil))

       ;;; void gtk_widget_hide (GtkWidget *widget);
       (method hide
               (args nil)
               (retval nil)))

(class GtkContainer
       ;;; void gtk_container_add (GtkContainer *container, GtkWidget *child);
       (method add
               (args GtkWidget)
               (retval nil)))

(struct GdkColor
        (field r (type 'guchar))
        (field g (type 'guchar))
        (field b (type 'guchar))) 

Again, writing those descriptions by hand (and keeping up with the C API) was a lot of work, but the glue code to implement the binding could be done mostly automatically. The generated code may need subsequent tweaks by hand to deal with details that the Scheme-like descriptions didn't contemplate, but it was better than writing everything by hand.

Glib gets a real type system

Tim Janik took over the parts of Glib that implement objects/signals/types, and added a lot of things to create a good type system for C. This is where things like GType, GValue, GParamSpec, and fundamental types come from.

For example, a GType is an identifier for a type, and a GValue is a type plus, well, a value of that type. You can ask a GValue, "are you an int? are you a GObject?".

You can register new types: for example, there would be code in Gdk that registers a new GType for GdkColor, so you can ask a value, "are you a color?".

Registering a type involves telling the GObject system things like how to copy values of that type, and how to free them. For GdkColor this may be just g_new() / g_free(); for reference-counted objects it may be g_object_ref() / g_object_unref().

Objects can be queried about some of their properties

A widget can tell you when you press a mouse button on it: it will emit the button-press-event signal. When GtkWidget's implementation registers this signal, it calls something like

    g_signal_new ("button-press-event",
        gtk_widget_get_type(), /* type of object for which this signal is being created */
        ...
        G_TYPE_BOOLEAN,  /* type of return value */
        1,               /* number of arguments */
        GDK_TYPE_EVENT); /* type of first and only argument */

This tells GObject that GtkWidget will have a signal called button-press-event, with a return type of G_TYPE_BOOLEAN, and with a single argument of type GDK_TYPE_EVENT. This lets GObject do the appropriate marshalling of arguments when the signal is emitted.

But also! You can query the signal for its argument types! You can run g_signal_query(), which will then tell you all the details of the signal: its name, return type, argument types, etc. A language binding could run g_signal_query() and generate a description of the signal automatically to the Scheme-like description language. And then generate the binding from that.
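
For example, a C program could introspect the signal we just registered along these lines (a minimal sketch with error handling omitted):

GSignalQuery query;
guint signal_id;

signal_id = g_signal_lookup ("button-press-event", GTK_TYPE_WIDGET);
g_signal_query (signal_id, &query);

g_print ("signal %s: returns %s, takes %u argument(s)\n",
         query.signal_name,
         g_type_name (query.return_type),
         query.n_params);
/* query.param_types[0] would be GDK_TYPE_EVENT here */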

Not all of an object's properties can be queried

Unfortunately, although GObject signals and properties can be queried, methods can't be. C doesn't have classes with methods, and GObject does not really have any provisions to implement them.

Conventionally, for a static method one would just do

void
gtk_widget_set_flags (GtkWidget *widget, GtkWidgetFlags flags)
{
    /* modify a struct field within "widget" or whatever */
    /* repaint or something */
}

And for a virtual method one would put a function pointer in the class structure, and provide a convenient way to call it:

typedef struct {
    GtkObjectClass parent_class;

    void (* draw) (GtkWidget *widget, cairo_t *cr);
} GtkWidgetClass;

void
gtk_widget_draw (GtkWidget *widget, cairo_t *cr)
{
    GtkWidgetClass *klass = find_widget_class (widget);

    (* klass->draw) (widget, cr);
}

And GObject has no idea about this method — there is no way to query it; it just exists in C-space.

Now, historically, GTK+'s header files have been written in a very consistent style. It is quite possible to write a tool that will take a header file like

/* gtkwidget.h */
typedef struct {
    GtkObject parent_class;

    void (* draw) (GtkWidget *widget, cairo_t *cr);
} GtkWidgetClass;

void gtk_widget_set_flags (GtkWidget *widget, GtkWidgetFlags flags);
void gtk_widget_draw (GtkWidget *widget, cairo_t *cr);

and parse it, even if it is with a simple parser that does not completely understand the C language, and have heuristics like

  • Is there a class_name_foo() function prototype with no corresponding foo field in the Class structure? It's probably a static method.

  • Is there a class_name_bar() function with a bar field in the Class structure? It's probably a virtual method.

  • Etc.

And in fact, that's what we had. C header files would get parsed with those heuristics, and the Scheme-like description files would get generated.

Scheme-like descriptions get reused, kind of

Language binding authors started reusing the Scheme-like descriptions. Sometimes they would cannibalize the descriptions from PyGTK, or Guile (again, I don't remember where the canonical version was maintained) and use them as they were.

Other times they would copy the files, modify them by hand some more, and then use them to generate their language binding.

C being hostile

From just reading/parsing a C function prototype, you cannot know certain things. If one function argument is of type Foo *, does it mean:

  • the function gets a pointer to something which it should not modify ("in" parameter)

  • the function gets a pointer to uninitialized data which it will set ("out" parameter)

  • the function gets a pointer to initialized data which it will use and modify ("inout" parameter)

  • the function will copy that pointer and hold a reference to the pointed data, and not free it when it's done

  • the function will take over the ownership of the pointed data, and free it when it's done

  • etc.

Sometimes people would include these annotations in the Scheme-like description language. But wouldn't it be better if those annotations came from the C code itself?

GObject Introspection appears

For GNOME 3, we wanted a unified solution for language bindings:

  • Have a single way to extract the machine-readable descriptions of the C API.

  • Have every language binding be automatically generated from those descriptions.

  • In the descriptions, have all the information necessary to generate a correct language binding...

  • ... including documentation.

We had to do a lot of work to accomplish this. For example:

  • Remove C-isms from the public API. Varargs functions, those that have foo (int x, ...), can't be easily described and called from other languages. Instead, have something like foov (int x, int num_args, GValue *args_array) that can be easily consumed by other languages.

  • Add annotations throughout the code so that the ad-hoc C parser can know about in/out/inout arguments, and whether pointer arguments are borrowed references or a full transfer of ownership.

  • Take the in-line documentation comments and store them as part of the machine-readable description of the API.

  • When compiling a library, automatically do all the things like g_signal_query() and spit out machine-readable descriptions of those parts of the API.

So, GObject Introspection is all of those things.

Annotations

If you have looked at the C code for a GNOME library, you may have seen something like this:

/**
 * gtk_widget_get_parent:
 * @widget: a #GtkWidget
 *
 * Returns the parent container of @widget.
 *
 * Returns: (transfer none) (nullable): the parent container of @widget, or %NULL
 **/
GtkWidget *
gtk_widget_get_parent (GtkWidget *widget)
{
    ...
}

See that "(transfer none) (nullable)" in the documentation comments? The (transfer none) means that the return value is a pointer whose ownership does not get transferred to the caller, i.e. the widget retains ownership. Finally, the (nullable) indicates that the function can return NULL, when the widget has no parent.

A language binding will then use this information as follows:

  • It will not unref() the parent widget when it is done with it.

  • It will deal with a NULL pointer in a special way, instead of assuming that references are not null.

Every now and then someone discovers a public function which is lacking an annotation of that sort — for GNOME's purposes this is a bug; fortunately, it is easy to add that annotation to the C sources and regenerate the machine-readable descriptions.

Machine-readable descriptions, or repository files

So, what do those machine-readable descriptions actually look like? They moved away from a Scheme-like language and got turned into XML, because early XXIst century.

The machine-readable descriptions are called GObject Introspection Repository files, or GIR for short.

Let's look at some parts of Gtk-3.0.gir, which your distro may put in /usr/share/gir-1.0/Gtk-3.0.gir.

<repository version="1.2" ...>

  <namespace name="Gtk"
             version="3.0"
             shared-library="libgtk-3.so.0,libgdk-3.so.0"
             c:identifier-prefixes="Gtk"
             c:symbol-prefixes="gtk">

For the toplevel "Gtk" namespace, this is what the .so library is called. All identifiers have "Gtk" or "gtk" prefixes.

A class with methods and a signal

Let's look at the description for GtkEntry...

    <class name="Entry"
           c:symbol-prefix="entry"
           c:type="GtkEntry"
           parent="Widget"
           glib:type-name="GtkEntry"
           glib:get-type="gtk_entry_get_type"
           glib:type-struct="EntryClass">

      <doc xml:space="preserve">The #GtkEntry widget is a single line text entry
widget. A fairly large set of key bindings are supported
by default. If the entered text is longer than the allocation
...
       </doc>

This is the start of the description for GtkEntry. We already know that everything is prefixed with "Gtk", so the name is just given as "Entry". Its parent class is Widget and the function which registers it against the GObject type system is gtk_entry_get_type.

Also, there are the toplevel documentation comments for the Entry class.

Onwards!

      <implements name="Atk.ImplementorIface"/>
      <implements name="Buildable"/>
      <implements name="CellEditable"/>
      <implements name="Editable"/>

GObject classes can implement various interfaces; this is the list that GtkEntry supports.

Next, let's look at a single method:

      <method name="get_text" c:identifier="gtk_entry_get_text">
        <doc xml:space="preserve">Retrieves the contents of the entry widget. ... </doc>

        <return-value transfer-ownership="none">
          <type name="utf8" c:type="const gchar*"/>
        </return-value>

        <parameters>
          <instance-parameter name="entry" transfer-ownership="none">
            <type name="Entry" c:type="GtkEntry*"/>
          </instance-parameter>
        </parameters>
      </method>

The method get_text and its corresponding C symbol. Its return value is a UTF-8 encoded string, and ownership of the memory for that string is not transferred to the caller.

The method takes a single parameter which is the entry instance itself.

Now, let's look at a signal:

      <glib:signal name="activate" when="last" action="1">
        <doc xml:space="preserve">The ::activate signal is emitted when the user hits
the Enter key. ...</doc>

        <return-value transfer-ownership="none">
          <type name="none" c:type="void"/>
        </return-value>
      </glib:signal>

    </class>

The "activate" signal takes no arguments, and has a return value of type void, i.e. no return value.

A struct with public fields

The following comes from Gdk-3.0.gir; it's the description for GdkRectangle.

    <record name="Rectangle"
            c:type="GdkRectangle"
            glib:type-name="GdkRectangle"
            glib:get-type="gdk_rectangle_get_type"
            c:symbol-prefix="rectangle">

      <field name="x" writable="1">
        <type name="gint" c:type="int"/>
      </field>
      <field name="y" writable="1">
        <type name="gint" c:type="int"/>
      </field>
      <field name="width" writable="1">
        <type name="gint" c:type="int"/>
      </field>
      <field name="height" writable="1">
        <type name="gint" c:type="int"/>
      </field>

    </record>

So that's the x/y/width/height fields in the struct, in the same order as they are defined in the C code.

And so on. The idea is for the whole API exported by a GObject library to be describable by that format. If something can't be described, it's a bug in the library, or a bug in the format.

Making language bindings start up quickly: typelib files

As we saw, the GIR files are the XML descriptions of GObject APIs. Dynamic languages like Python would prefer to generate the language binding on the fly, as needed, instead of pre-generating a huge binding.

However, GTK+ is a big API: Gtk-3.0.gir is 7 MB of XML. Parsing all of that just to be able to generate gtk_widget_show() on the fly would be too slow. Also, there are GTK+'s dependencies: Atk, Gdk, Cairo, etc. You don't want to parse everything just to start up!

So, we have an extra step that compiles the GIR files down to binary .typelib files. For example, /usr/lib64/girepository-1.0/Gtk-3.0.typelib is about 600 KB on my machine. Those files get mmap()ed for fast access, and can be shared between processes.
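
The compilation step itself is done with the g-ir-compiler tool that ships with GObject Introspection; roughly (paths depend on your distro):

# Compile the XML description into the binary typelib format
g-ir-compiler Gtk-3.0.gir --output=Gtk-3.0.typelib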

How dynamic language bindings use typelib files

GObject Introspection comes with a library that language binding implementors can use to consume those .typelib files. The libgirepository library has functions like "list all the classes available in this namespace", or "call this function with these values for arguments, and give me back the return value here".

Internally, libgirepository uses libffi to actually call the C functions in the dynamically-linked libraries.

So, when you write foo.py and do

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
win = Gtk.Window()

what happens is that pygobject calls libgirepository to mmap() the .typelib, and sees that the constructor for Gtk.Window is a C function called gtk_window_new(). After seeing how that function wants to be called, it calls the function using libffi, wraps the result with a PyObject, and that's what you get on the Python side.

Static languages

A static language like Rust prefers to have the whole language binding pre-generated. This is what the various crates in gtk-rs do.

The gir crate takes a .gir file (i.e. the XML descriptions) and does two things:

  • Reconstructs the C function prototypes and C struct declarations, but in a way Rust can understand them. This gets output to the sys crate.

  • Creates idiomatic Rust code for the language binding. This gets output to the various crates; for example, the gtk one.

When reconstructing the C structs and prototypes, we get stuff like

#[repr(C)]
pub struct GtkWidget {
    pub parent_instance: gobject::GInitiallyUnowned,
    pub priv_: *mut GtkWidgetPrivate,
}

extern "C" {
    pub fn gtk_entry_new() -> *mut GtkWidget;
}

And the idiomatic bindings? Stay tuned!


z/VM SSI: make your Linux relocatable

Virtualization systems like KVM, Xen, or z/VM offer the possibility to move running guests from one physical server to another one without service interruption. This is a cool feature for the system administrator because he can service a virtualization host without shutting down any workload on his cluster.

Before doing such a migration, the system performs a number of checks. z/VM in particular is very strict about this, but that also gives high confidence that nothing bad will happen to the workload. Unfortunately, the default system you get when running Linux on z/VM has a number of devices attached that prevent z/VM from relocating the guest to a different node. A typical message looks like this:

VMRELO TEST LINUX001 ZVM002
HCPRLH1940E LINUX001 is not relocatable for the following reason(s):
...
HCPRLI1996I LINUX001: Virtual machine device 0191 is a link to a local minidisk
...

For some of the devices it is obvious to the experienced z/VM admin that they can be detached. However, some of the devices might also be in use by Linux, and it would definitely confuse the system if they were simply removed. Therefore, the z/VM admin has to ask the person responsible for Linux whether it is OK to remove a device. With 10 guests this might be manageable, but with lots and lots of servers and many different stakeholders, it can get quite painful.

Starting with SLES12 SP2, a new service called “virtsetup” sneaked into the system that can ease this task a lot. When enabled, it removes all the unneeded CMS disks from the guest and thus prepares the guest for live guest relocation.

How to run this service:
# systemctl enable virtsetup
# systemctl start virtsetup

That's basically everything you have to do for a default setup. If you want a specific disk left untouched, just have a look at “/etc/sysconfig/virtsetup”; this is the file where the service is configured.

Enabling this service is not a big deal for a single machine, but it makes a big difference for the z/VM admin. When it is enabled, most machines will simply be eligible for relocation without further action, thus allowing continuous operation while a z/VM node is being serviced.


openSUSE.Asia Summit 2017 Tokyo, Japan. I’m Coming!

Image from https://events.opensuse.org

Yesterday, on September 3rd, Mr. Fuminobu Takeyama contacted me about my proposals for openSUSE.Asia Summit 2017. I was so happy when I read the email: my proposal had been accepted by the openSUSE.Asia Summit committee. Furthermore, my happiness didn't stop there – two out of my three proposals got a high score! They are “Implement Active Directory Server for Single Sign-on Login Using openSUSE” and “Have Fun Claim Control your Docker Images with Portus on openSUSE Leap“.

openSUSE.Asia Summit itself is one of the great events for the openSUSE community (both contributors and users) in Asia. Those who usually communicate online can get together from all over the world, talk face to face, and have fun. Members of the community share their most recent knowledge and experiences, and learn about the FLOSS technologies surrounding openSUSE.

This is the fourth openSUSE.Asia Summit; it is held every year in a different country. Previously it was held in Beijing in 2014, Taipei in 2015 and Yogyakarta in 2016, and this year the openSUSE.Asia Summit will be held in Chofu, Tokyo, Japan.

So, Mr. Takeyama asked me which talk I wanted to choose: both proposals got a high score, but I had to pick one of them. I chose Portus as my talk for openSUSE.Asia Summit 2017, because he told me that many Japanese engineers are interested in Docker.

For this trip, I will fly to Japan with two different airlines – Garuda Indonesia for the departure and AirAsia for the return – and stay in a homestay with my friends from Indonesia: Mas Moko, Mas Ary, Mas Kukuh, Mbak Alin and Mbak Umul.

My main aim is to share with people how much fun we can have with openSUSE, to get acquainted with new people – contributors in Asia and many others – to find new ideas for openSUSE Indonesia, and to recharge my spirit. I think this event will be awesome. I can't wait to attend as a speaker, since last year I was only a volunteer at openSUSE.Asia Summit 2016 in Yogyakarta. This will be an amazing new experience for me.

After a long process, I'm so happy that my proposal has finally been accepted. Thank you to openSUSE Indonesia, to Om Edwin Zakaria – one of the openSUSE.Asia Summit committee members – for championing the papers of Indonesian contributors, and to everyone who gave me motivation. I realize my contribution to openSUSE is not enough yet, but I will give my best for the things I love. I will give my best for openSUSE and openSUSE.Asia Summit 2017.

If you want to attend, prepare yourself: registration will open soon. Stay tuned at https://events.opensuse.org and see you in Tokyo!


The post openSUSE.Asia Summit 2017 Tokyo, Japan. I’m Coming! appeared first on dhenandi.com.

the avatar of Greg Kroah-Hartman

4.14 == This Year's LTS Kernel

As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn't seem to have gotten very far based on various emails I've received recently, I figured I needed to say it here as well.

So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)