
Greg Kroah-Hartman

4.14 == This Year's LTS Kernel

As the 4.13 release has now happened, the merge window for the 4.14 kernel release is now open. I mentioned this many weeks ago, but as the word doesn’t seem to have gotten very far based on various emails I’ve had recently, I figured I need to say it here as well.

So, here it is officially, 4.14 should be the next LTS kernel that I’ll be supporting with stable kernel patch backports for at least two years, unless it really is a horrid release and has major problems. If so, I reserve the right to pick a different kernel, but odds are, given just how well our development cycle has been going, that shouldn’t be a problem (although I guess I just doomed it now…)

Federico Mena-Quintero

Librsvg's build infrastructure: Autotools and Rust

Today I released librsvg 2.41.1, and it's a big release! Apart from all the Rust goodness, and the large number of bug fixes, I am very happy with the way the build system works these days. I've found it invaluable to have good examples of Autotools incantations to copy&paste, so hopefully this will be useful to someone else.

There are some subtleties that a "good" autotools setup demands, and so far I think librsvg is doing well:

  • The configure script checks for cargo and rustc.

  • "make distcheck" works. This means that the build can be performed with builddir != srcdir, and also that make check runs the available tests and they all pass.

  • The rsvg_internals library is built with Rust, and our Makefile.am calls cargo build with the correct options. It is able to handle debug and release builds.

  • "make clean" cleans up the Rust build directories as well.

  • If you change a .rs file and type make, only the necessary stuff gets rebuilt.

  • Etcetera. I think librsvg feels like a normal autotool'ed library. Let's see how this is done.

Librsvg's basic autotools setup

Librsvg started out with a fairly traditional autotools setup with a configure.ac and Makefile.am. For historical reasons the .[ch] source files live in the toplevel librsvg/ directory, not in a src subdirectory or something like that.

librsvg
├ configure.ac
├ Makefile.am
├ *.[ch]
├ src/
├ doc/
├ tests/
└ win32/

Adding Rust to the build

The Rust source code lives in librsvg/rust; that's where Cargo.toml lives, and of course there is the conventional src subdirectory with the *.rs files.

librsvg
├ configure.ac
├ Makefile.am
├ *.[ch]
├ src/
├ rust/         <--- this is new!
│ ├ Cargo.toml
│ └ src/
├ doc/
├ tests/
└ win32/

Detecting the presence of cargo and rustc in configure.ac

This goes in configure.ac:

AC_CHECK_PROG(CARGO, [cargo], [yes], [no])
AS_IF(test x$CARGO = xno,
    AC_MSG_ERROR([cargo is required.  Please install the Rust toolchain from https://www.rust-lang.org/])
)
AC_CHECK_PROG(RUSTC, [rustc], [yes], [no])
AS_IF(test x$RUSTC = xno,
    AC_MSG_ERROR([rustc is required.  Please install the Rust toolchain from https://www.rust-lang.org/])
)

These two try to execute cargo and rustc, respectively, and abort with an error message if they are not present.

Supporting debug or release mode for the Rust build

One can call cargo like "cargo build --release" to turn on expensive optimizations, or normally like just "cargo build" to build with debug information. That is, the latter is the default: if you don't pass any options, cargo does a debug build.

Autotools and C compilers normally work a bit differently; one must call the configure script like "CFLAGS='-g -O0' ./configure" for a debug build, or "CFLAGS='-O2 -fomit-frame-pointer' ./configure" for a release build.

Linux distros already have all the infrastructure to pass the appropriate CFLAGS to configure. We need to be able to pass the appropriate flag to Cargo. My main requirement for this was:

  • Distros shouldn't have to substantially change their RPM specfiles (or whatever) to accommodate the Rust build.
  • I assume that distros will want to make release builds by default.
  • I as a developer am comfortable with passing extra options to make debug builds on my machine.

The scheme in librsvg lets you run "configure --enable-debug" to make it call a plain cargo build, or a plain "configure" to make it use cargo build --release instead. The CFLAGS are passed as usual through an environment variable. This way, distros don't have to change their packaging to keep on making release builds as usual.

This goes in configure.ac:

dnl Specify --enable-debug to make a development release.  By default,
dnl we build in public release mode.

AC_ARG_ENABLE(debug,
              AC_HELP_STRING([--enable-debug],
                             [Build Rust code with debugging information [default=no]]),
              [debug_release=$enableval],
              [debug_release=no])

AC_MSG_CHECKING(whether to build Rust code with debugging information)
if test "x$debug_release" = "xyes" ; then
    AC_MSG_RESULT(yes)
    RUST_TARGET_SUBDIR=debug
else
    AC_MSG_RESULT(no)
    RUST_TARGET_SUBDIR=release
fi
AM_CONDITIONAL([DEBUG_RELEASE], [test "x$debug_release" = "xyes"])

AC_SUBST([RUST_TARGET_SUBDIR])

This defines an Automake conditional called DEBUG_RELEASE, which we will use in Makefile.am later.

It also causes @RUST_TARGET_SUBDIR@ to be substituted in Makefile.am with either debug or release; we will see what these are about.

Adding Rust source files

The librsvg/rust/src directory has all the *.rs files, and cargo tracks their dependencies and whether they need to be rebuilt if one changes. However, since that directory is not tracked by make, it won't rebuild things if a Rust source file changes! So, we need to tell our Makefile.am about those files:

RUST_SOURCES =                   \
        rust/build.rs            \
        rust/Cargo.toml          \
        rust/src/aspect_ratio.rs \
        rust/src/bbox.rs         \
        rust/src/cnode.rs        \
        rust/src/color.rs        \
        ...

RUST_EXTRA =                     \
        rust/Cargo.lock

EXTRA_DIST += $(RUST_SOURCES) $(RUST_EXTRA)

It's a bit unfortunate that the change tracking is duplicated in the Makefile, but we are already used to listing all the C source files in there, anyway.

Most notably, the rust subdirectory is not listed in the SUBDIRS in Makefile.am, since there is no rust/Makefile at all!

Cargo release or debug build?

if DEBUG_RELEASE
CARGO_RELEASE_ARGS=
else
CARGO_RELEASE_ARGS=--release
endif

We will call cargo build with that argument later.

Verbose or quiet build?

Librsvg uses AM_SILENT_RULES([yes]) in configure.ac. This lets you just run "make" for a quiet build, or "make V=1" to get the full command lines passed to the compiler. Cargo supports something similar, so let's add it to Makefile.am:

CARGO_VERBOSE = $(cargo_verbose_$(V))
cargo_verbose_ = $(cargo_verbose_$(AM_DEFAULT_VERBOSITY))
cargo_verbose_0 =
cargo_verbose_1 = --verbose

This expands the V variable to empty, 0, or 1. The result of expanding that gives us the final command-line argument in the CARGO_VERBOSE variable.

What's the filename of the library we are building?

RUST_LIB=@abs_top_builddir@/rust/target/@RUST_TARGET_SUBDIR@/librsvg_internals.a

Remember our @RUST_TARGET_SUBDIR@ from configure.ac? If you call plain "cargo build", it will put the binaries in rust/target/debug. But if you call "cargo build --release", it will put the binaries in rust/target/release.

With the bit above, the RUST_LIB variable now has the correct path for the built library. The @abs_top_builddir@ makes it work when the build directory is not the same as the source directory.

Okay, so how do we call cargo?

@abs_top_builddir@/rust/target/@RUST_TARGET_SUBDIR@/librsvg_internals.a: $(RUST_SOURCES)
    cd $(top_srcdir)/rust && \
    CARGO_TARGET_DIR=@abs_top_builddir@/rust/target cargo build $(CARGO_VERBOSE) $(CARGO_RELEASE_ARGS)

We make the funky library filename depend on $(RUST_SOURCES). That's what will cause make to rebuild the Rust library if one of the Rust source files changes.

We override the CARGO_TARGET_DIR with Automake's preference, and call cargo build with the correct arguments.

Linking into the main C library

librsvg_@RSVG_API_MAJOR_VERSION@_la_LIBADD = \
        $(LIBRSVG_LIBS)                      \
        $(LIBM)                              \
        $(RUST_LIB)

This expands our $(RUST_LIB) from above into our linker line, along with librsvg's other dependencies.

make check

This is our hook so that make check will cause cargo test to run:

check-local:
        cd $(srcdir)/rust && \
        CARGO_TARGET_DIR=@abs_top_builddir@/rust/target cargo test

make clean

Same thing for make clean and cargo clean:

clean-local:
        cd $(top_srcdir)/rust && \
        CARGO_TARGET_DIR=@abs_top_builddir@/rust/target cargo clean

Vendoring dependencies

Linux distros probably want Rust packages to come bundled with their dependencies, so that they can replace them later with newer/patched versions.

Here is a hook so that make dist will cause cargo vendor to be run before making the tarball. That command creates a rust/vendor directory with a copy of all the Rust crates that librsvg depends on.

RUST_EXTRA += rust/cargo-vendor-config

dist-hook:
    (cd $(distdir)/rust && \
    cargo vendor -q && \
    mkdir .cargo && \
    cp cargo-vendor-config .cargo/config)

The tarball needs to have a rust/.cargo/config to know where to find the vendored sources (i.e. the embedded dependencies), but we don't want that in our development source tree. Instead, we generate it from a rust/cargo-vendor-config file in our source tree:

# This is used after `cargo vendor` is run from `make dist`.
#
# In the distributed tarball, this file should end up in
# rust/.cargo/config

[source.crates-io]
registry = 'https://github.com/rust-lang/crates.io-index'
replace-with = 'vendored-sources'

[source.vendored-sources]
directory = './vendor'

One last thing

If you put this in your Cargo.toml, release binaries will be a lot smaller. This turns on link-time optimizations (LTO), which removes unused functions from the binary.

[profile.release]
lto = true

Summary and thanks

I think the above is some good boilerplate that you can put in your configure.ac / Makefile.am to integrate a Rust sub-library into your C code. It handles make-y things like make clean and make check; debug and release builds; verbose and quiet builds; builddir != srcdir; all the goodies.

I think the only thing I'm missing is to check for the cargo-vendor binary. I'm not sure how to only check for that if I'm the one making tarballs... maybe an --enable-maintainer-mode flag?

This would definitely not have been possible without prior work. Thanks to everyone who figured out Autotools before me, so I could cut&paste your goodies.

Update 2017/Nov/11: Fixed the initialization of RUST_EXTRA; thanks to Tobias Mueller for catching this.


openSUSE release party at FrOSCon

We had a nice weekend at FrOSCon with a lot of fun. The good atmosphere spread to our neighbours, too: some Fedora Ambassadors decided to switch to openSUSE. It was their last time at the Fedora booth, and their booth turned green.

Here you can see a Fedora Ambassador who wanted openSUSE marketing material for students at the university in Marburg. He wears green glasses as a signal of his switch. He'll give Linux workshops with openSUSE and wants to become an openSUSE Hero.

We had many visitors on the first day. Our release party took place at our booth at 5 o'clock, and we were surprised by how many people came. The cake was gone after a quarter of an hour; it wasn't enough for all the interested guests. Everyone was happy and toasted the new Leap release with champagne.

After that we held our first tombola, with a big chameleon as the prize. What a surprise! Last year a family from LPI won two chameleons; this year a small LPI girl won the first one again. That shows the partnership between LPI and openSUSE. 🙂

On Sunday I went to some interesting presentations, and we shared shifts at the openSUSE booth. We also spoke about organizing OpenRheinRuhr, what we want to improve, and how we can realize it all with new German Advocates. On the second day we had a second tombola; this chameleon went to the invis server project.

Debian and Ubuntu didn't have booths. Some Debian users asked us about Debian contributors, and I sent them to the Open Office booth. After that visit they came back and talked with us about openSUSE and what is new. They were really interested.

It was a successful weekend for openSUSE with a lot of fun. Thanks for all the sponsoring at FrOSCon!

The post openSUSE release party at FrOSCon first appeared on Sarah Julia Kriesch.
Jigish Gohil

New blog – cyberorg.wordpress.com

I have not been actively participating in the openSUSE project for some time now, so there has not been much to blog about on the openSUSE Lizards blog. There is a new blog at https://cyberorg.wordpress.com where I will write about what I have been, and will be, up to with the Li-f-e: Linux for Education project, among other things. I am also now a "Member Emeritus" of the openSUSE community due to lack of participation, so the cyberorg@opensuse.org email address will no longer work; please use @cyberorg.info if you need to get in touch with me.

After almost a decade of bringing you Li-f-e: Linux for Education based on openSUSE, it is now based on Ubuntu MATE LTS releases. I hope to provide the same excellent user experience that you have come to expect. Download it from here. The reason for this change is explained in the previous post and its discussion (lack of interest/time/skills by anyone for maintaining the live installer). You can of course easily enable the Education Build Service repository to install packages on standard openSUSE, or use SUSE Studio to create your own spin with Education packages.

To new beginnings…

Federico Mena-Quintero

How Glib-rs works, part 2: Transferring lists and arrays

(First part of the series, with index to all the articles)

In the first part, we saw how glib-rs provides the FromGlib and ToGlib traits to let Rust code convert from/to Glib's simple types, like to convert from a Glib gboolean to a Rust bool and vice-versa. We also saw the special needs of strings; since they are passed by reference and are not copied as simple values, we can use FromGlibPtrNone and FromGlibPtrFull depending on what kind of ownership transfer we want, none for "just make it look like we are using a borrowed reference", or full for "I'll take over the data and free it when I'm done". Going the other way around, we can use ToGlibPtr and its methods to pass things from Rust to Glib.

In this part, we'll see the tools that glib-rs provides to do conversions of more complex data types. We'll look at two cases: passing arrays from Glib to Rust, and passing GLists to Rust. And one final case just in passing: passing containers from Rust to Glib.

Passing arrays from Glib to Rust

We'll look at the case for transferring null-terminated arrays of strings, since it's an interesting one. There are other traits to convert from Glib arrays whose length is known, not implied with a NULL element, but for now we'll only look at arrays of strings.

Null-terminated arrays of strings

Look at this function for GtkAboutDialog:

/**
 * gtk_about_dialog_add_credit_section:
 * @about: A #GtkAboutDialog
 * @section_name: The name of the section
 * @people: (array zero-terminated=1): The people who belong to that section
 * ...
 */
void
gtk_about_dialog_add_credit_section (GtkAboutDialog  *about,
                                     const gchar     *section_name,
                                     const gchar    **people)

You would use this like

const gchar *translators[] = {
    "Alice <alice@example.com>",
    "Bob <bob@example.com>",
    "Clara <clara@example.com>",
    NULL
};

gtk_about_dialog_add_credit_section (my_about_dialog, _("Translators"), translators);

The function expects an array of gchar *, where the last element is a NULL. Instead of passing an explicit length for the array, it's done implicitly by requiring a NULL pointer after the last element. The gtk-doc annotation says (array zero-terminated=1). When we generate information for the GObject-Introspection Repository (GIR), this is what comes out:

<method name="add_credit_section"
        c:identifier="gtk_about_dialog_add_credit_section"
        version="3.4">
  ..
    <parameter name="people" transfer-ownership="none">
      <doc xml:space="preserve">The people who belong to that section</doc>
      <array c:type="gchar**">
        <type name="utf8" c:type="gchar*"/>
      </array>
    </parameter>

You can see the transfer-ownership="none" attribute on the <parameter> element. This means that the function will not take ownership of the passed array; it will make its own copy instead. By convention, GIR assumes that arrays of strings are NULL-terminated, so there is no special annotation for that here. If we were implementing this function in Rust, how would we read that C array of UTF-8 strings and turn it into a Rust Vec<String> or something? Easy:

let c_char_array: *mut *mut c_char = ...; // comes from Glib
let rust_translators = FromGlibPtrContainer::from_glib_none(c_char_array);
// rust_translators is a Vec<String>

Let's look at how this bad boy is implemented.

First stage: impl FromGlibPtrContainer for Vec<T>

We want to go from a "*mut *mut c_char" (in C parlance, a "gchar **") to a Vec<String>. Indeed, there is an implementation of the FromGlibPtrContainer trait for Vecs here. These are the first few lines:

impl <P: Ptr, PP: Ptr, T: FromGlibPtrArrayContainerAsVec<P, PP>> FromGlibPtrContainer<P, PP> for Vec<T> {
    unsafe fn from_glib_none(ptr: PP) -> Vec<T> {
        FromGlibPtrArrayContainerAsVec::from_glib_none_as_vec(ptr)
    }

So... that from_glib_none() will return a Vec<T>, which is what we want. Let's look at the first few lines of FromGlibPtrArrayContainerAsVec:

    impl FromGlibPtrArrayContainerAsVec<$ffi_name, *mut $ffi_name> for $name {
        unsafe fn from_glib_none_as_vec(ptr: *mut $ffi_name) -> Vec<Self> {
            FromGlibContainerAsVec::from_glib_none_num_as_vec(ptr, c_ptr_array_len(ptr))
        }

Aha! This is inside a macro, thus the $ffi_name garbage. It's done like that so the same trait can be implemented for const and mut pointers to c_char.

See the call to c_ptr_array_len()? That's what figures out where the NULL pointer is at the end of the array: it computes the array's length.
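To make that length computation concrete, here is a self-contained sketch of the idea. The helper name ptr_array_len is hypothetical (glib-rs's internal helper is c_ptr_array_len()); the point is just walking a pointer array until we hit NULL:

```rust
use std::os::raw::c_char;
use std::ptr;

// Hypothetical helper, analogous in spirit to glib-rs's c_ptr_array_len():
// count the elements of a NULL-terminated array of pointers.
unsafe fn ptr_array_len(mut ptr: *const *const c_char) -> usize {
    let mut len = 0;
    if !ptr.is_null() {
        while !(*ptr).is_null() {
            len += 1;
            ptr = ptr.offset(1);
        }
    }
    len
}

fn main() {
    // Build a NULL-terminated array of two C strings.
    let a = b"a\0";
    let b = b"b\0";
    let arr: [*const c_char; 3] = [
        a.as_ptr() as *const c_char,
        b.as_ptr() as *const c_char,
        ptr::null(),
    ];
    let len = unsafe { ptr_array_len(arr.as_ptr()) };
    assert_eq!(len, 2);
}
```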

Second stage: impl FromGlibContainerAsVec::from_glib_none_num_as_vec()

Now that the length of the array is known, the implementation calls FromGlibContainerAsVec::from_glib_none_num_as_vec()

    impl FromGlibContainerAsVec<$ffi_name, *const $ffi_name> for $name {
        unsafe fn from_glib_none_num_as_vec(ptr: *const $ffi_name, num: usize) -> Vec<Self> {
            if num == 0 || ptr.is_null() {
                return Vec::new();
            }

            let mut res = Vec::with_capacity(num);
            for i in 0..num {
                res.push(from_glib_none(ptr::read(ptr.offset(i as isize)) as $ffi_name));
            }
            res
        }

First, if the number of elements is zero, or the array is NULL, it returns an empty Vec.

Then it allocates a Vec of suitable capacity.

Finally, for each of the pointers in the C array, it calls from_glib_none() to convert it from a *const c_char to a String, like we saw in the first part.

Done! We started with a *mut *mut c_char or a *const *const c_char and ended up with a Vec<String>, which is what we wanted.
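Putting the two stages together, the whole conversion can be sketched by hand with nothing but std. This is not the glib-rs code, just an illustration of the same walk-and-copy logic on a NULL-terminated gchar**:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;
use std::ptr;

// Hand-rolled sketch: walk a NULL-terminated array of C strings and
// copy each element into an owned Rust String.
unsafe fn strv_to_vec(mut ptr: *const *const c_char) -> Vec<String> {
    let mut res = Vec::new();
    if !ptr.is_null() {
        while !(*ptr).is_null() {
            res.push(CStr::from_ptr(*ptr).to_string_lossy().into_owned());
            ptr = ptr.offset(1);
        }
    }
    res
}

fn main() {
    let alice = b"Alice\0";
    let bob = b"Bob\0";
    let arr: [*const c_char; 3] = [
        alice.as_ptr() as *const c_char,
        bob.as_ptr() as *const c_char,
        ptr::null(),
    ];
    let v = unsafe { strv_to_vec(arr.as_ptr()) };
    assert_eq!(v, vec!["Alice".to_string(), "Bob".to_string()]);
}
```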

Passing GLists to Rust

Some functions don't give you an array; they give you a GList or GSList. There is an implementation of FromGlibPtrArrayContainerAsVec that understands GList:

impl<T> FromGlibPtrArrayContainerAsVec<<T as GlibPtrDefault>::GlibType, *mut glib_ffi::GList> for T
where T: GlibPtrDefault + FromGlibPtrNone<<T as GlibPtrDefault>::GlibType> + FromGlibPtrFull<<T as GlibPtrDefault>::GlibType> {

    unsafe fn from_glib_none_as_vec(ptr: *mut glib_ffi::GList) -> Vec<T> {
        let num = glib_ffi::g_list_length(ptr) as usize;
        FromGlibContainer::from_glib_none_num(ptr, num)
    }

The impl declaration is pretty horrible, so just look at the method: from_glib_none_as_vec() takes in a GList, then calls g_list_length() on it, and finally calls FromGlibContainer::from_glib_none_num() with the length it computed.

I have a Glib container and its length

In turn, that from_glib_none_num() goes here:

impl <P, PP: Ptr, T: FromGlibContainerAsVec<P, PP>> FromGlibContainer<P, PP> for Vec<T> {
    unsafe fn from_glib_none_num(ptr: PP, num: usize) -> Vec<T> {
        FromGlibContainerAsVec::from_glib_none_num_as_vec(ptr, num)
    }

Okay, getting closer to the actual implementation.

Give me a vector already

Finally, we get to the function that walks the GList:

impl<T> FromGlibContainerAsVec<<T as GlibPtrDefault>::GlibType, *mut glib_ffi::GList> for T
where T: GlibPtrDefault + FromGlibPtrNone<<T as GlibPtrDefault>::GlibType> + FromGlibPtrFull<<T as GlibPtrDefault>::GlibType> {

    unsafe fn from_glib_none_num_as_vec(mut ptr: *mut glib_ffi::GList, num: usize) -> Vec<T> {
        if num == 0 || ptr.is_null() {
            return Vec::new()
        }
        let mut res = Vec::with_capacity(num);
        for _ in 0..num {
            let item_ptr: <T as GlibPtrDefault>::GlibType = Ptr::from((*ptr).data);
            if !item_ptr.is_null() {
                res.push(from_glib_none(item_ptr));
            }
            ptr = (*ptr).next;
        }
        res
    }

Again, ignore the horrible impl declaration and just look at from_glib_none_num_as_vec().

The function takes in a ptr to a GList and a num with the list's length, which we already computed above. If the list is empty or NULL, it returns an empty vector. Otherwise it allocates a vector of suitable capacity, and then, for each element, converts the element's data with from_glib_none() and pushes it to the vector, walking to the next link with ptr = (*ptr).next.
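The same traversal can be sketched with a hand-made, GList-like node type. The List struct below is a hypothetical stand-in for glib_ffi::GList, but the data/next walk is the same:

```rust
use std::ffi::CStr;
use std::os::raw::{c_char, c_void};
use std::ptr;

// A minimal stand-in for GList: a data pointer and a next pointer.
#[repr(C)]
struct List {
    data: *mut c_void,
    next: *mut List,
}

// Walk `num` links, converting each non-NULL data pointer to a String.
unsafe fn list_to_vec(mut ptr: *mut List, num: usize) -> Vec<String> {
    let mut res = Vec::with_capacity(num);
    for _ in 0..num {
        if ptr.is_null() {
            break;
        }
        let item = (*ptr).data as *const c_char;
        if !item.is_null() {
            res.push(CStr::from_ptr(item).to_string_lossy().into_owned());
        }
        ptr = (*ptr).next;
    }
    res
}

fn main() {
    let a = b"one\0";
    let b = b"two\0";
    let mut n2 = List { data: b.as_ptr() as *mut c_void, next: ptr::null_mut() };
    let mut n1 = List { data: a.as_ptr() as *mut c_void, next: &mut n2 };
    let v = unsafe { list_to_vec(&mut n1, 2) };
    assert_eq!(v, vec!["one".to_string(), "two".to_string()]);
}
```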

Passing containers from Rust to Glib

This post is getting a bit long, so I'll just mention this briefly. There is a trait ToGlibContainerFromSlice that takes a Rust slice, and can convert it to various Glib types.

  • To GSList and GList. These have methods like to_glib_none_from_slice() and to_glib_full_from_slice()

  • To an array of fundamental types. Here, you can choose between to_glib_none_from_slice(), which gives you a Stash like we saw the last time. Or, you can use to_glib_full_from_slice(), which gives you back a g_malloc()ed array with copied items. Finally, to_glib_container_from_slice() gives you back a g_malloc()ed array of pointers to values rather than plain values themselves. Which function you choose depends on which C API you want to call.

I hope this post gives you enough practice to be able to "follow the traits" for each of those if you want to look at the implementations.

Next up

Passing boxed types, like public structs.

Passing reference-counted types.

How glib-rs wraps GObjects.

Federico Mena-Quintero

How Glib-rs works, part 1: Type conversions

During the GNOME+Rust hackfest in Mexico City, Niko Matsakis started the implementation of gnome-class, a procedural macro that will let people implement new GObject classes in Rust and export them to the world. Currently, if you want to write a new GObject (e.g. a new widget) and put it in a library so that it can be used from language bindings via GObject-Introspection, you have to do it in C. It would be nice to be able to do this in a safe language like Rust.

How would it be done by hand?

In a C implementation of a new GObject subclass, one calls things like g_type_register_static() and g_signal_new() by hand, while being careful to specify the correct GType for each value, and being super-careful about everything, as C demands.

In Rust, one can in fact do exactly the same thing. You can call the same, low-level GObject and GType functions. You can use #[repr(C)] for the instance and class structs that GObject will allocate for you, and which you then fill in.

You can see an example of this in gst-plugins-rs. This is where it implements a Sink GObject, in Rust, by calling Glib functions by hand: struct declarations, class_init() function, registration of type and interfaces.

How would it be done by a machine?

That's what Niko's gnome-class is about. During the hackfest it got to the point of being able to generate the code to create a new GObject subclass, register it, and export functions for methods. The syntax is not finalized yet, but it looks something like this:

gobject_gen! {
    class Counter {
        struct CounterPrivate {
            val: Cell<u32>
        }

        signal value_changed(&self);

        fn set_value(&self, v: u32) {
            let private = self.private();
            private.val.set(v);
            // private.emit_value_changed();
        }

        fn get_value(&self) -> u32 {
            let private = self.private();
            private.val.get()
        }
    }
}

I started adding support for declaring GObject signals — mainly being able to parse them from what goes inside gobject_gen!() — and then being able to call g_signal_newv() at the appropriate time during the class_init() implementation.

Types in signals

Creating a signal for a GObject class is basically like specifying a function prototype: the object will invoke a callback function with certain arguments and return value when the signal is emitted. For example, this is how GtkButton registers its button-press-event signal:

  button_press_event_id =
    g_signal_new (I_("button-press-event"),
                  ...
                  G_TYPE_BOOLEAN,    /* type of return value */
                  1,                 /* how many arguments? */
                  GDK_TYPE_EVENT);   /* type of first and only argument */

g_signal_new() creates the signal and returns a signal id, an integer. Later, when the object wants to emit the signal, it uses that signal id like this:

GtkEventButton event = ...;
gboolean return_val;

g_signal_emit (widget, button_press_event_id, 0, event, &return_val);

In the nice gobject_gen!() macro, if I am going to have a signal declaration like

signal button_press_event(&self, event: &ButtonPressEvent) -> bool;

then I will need to be able to translate the type names for ButtonPressEvent and bool into something that g_signal_newv() will understand: I need the GType values for those. Fundamental types like gboolean get constants like G_TYPE_BOOLEAN. Types that are defined at runtime, like GDK_TYPE_EVENT, get GType values generated at runtime, too, when one registers the type with g_type_register_*().

Rust type    GType
i32          G_TYPE_INT
u32          G_TYPE_UINT
bool         G_TYPE_BOOLEAN
etc.         etc.

Glib types in Rust

How does glib-rs, the Rust binding to Glib and GObject, handle types?

Going from Glib to Rust

First we need a way to convert Glib's types to Rust, and vice-versa. There is a trait to convert simple Glib types into Rust types:

pub trait FromGlib<T>: Sized {
    fn from_glib(val: T) -> Self;
}

This means, if I have a T which is a Glib type, this trait will give you a from_glib() function which will convert it to a Rust type which is Sized, i.e. a type whose size is known at compilation time.

For example, this is how it is implemented for booleans:

impl FromGlib<glib_ffi::gboolean> for bool {
    #[inline]
    fn from_glib(val: glib_ffi::gboolean) -> bool {
        !(val == glib_ffi::GFALSE)
    }
}

and you use it like this:

let my_gboolean: glib_ffi::gboolean = g_some_function_that_returns_gboolean ();

let my_rust_bool: bool = from_glib (my_gboolean);

Booleans in glib and Rust have different sizes, and also different values. Glib's booleans use the C convention: 0 is false and anything else is true, while in Rust booleans are strictly false or true, and the size is undefined (with the current Rust ABI, it's one byte).
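As a self-contained sketch of that behavior (using a local stand-in for glib_ffi::gboolean rather than the real binding), the conversion amounts to C truthiness:

```rust
// Local stand-ins for the glib_ffi definitions (hypothetical, for illustration).
#[allow(non_camel_case_types)]
type gboolean = i32;
const GFALSE: gboolean = 0;

// Same logic as glib-rs's FromGlib impl: 0 is false, anything else is true.
fn from_glib(val: gboolean) -> bool {
    val != GFALSE
}

fn main() {
    assert_eq!(from_glib(0), false);
    assert_eq!(from_glib(1), true);
    assert_eq!(from_glib(-1), true); // any nonzero C value counts as true
}
```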

Going from Rust to Glib

And to go the other way around, from a Rust bool to a gboolean? There is this trait:

pub trait ToGlib {
    type GlibType;

    fn to_glib(&self) -> Self::GlibType;
}

This means, if you have a Rust type that maps to a corresponding GlibType, this will give you a to_glib() function to do the conversion.

This is the implementation for booleans:

impl ToGlib for bool {
    type GlibType = glib_ffi::gboolean;

    #[inline]
    fn to_glib(&self) -> glib_ffi::gboolean {
        if *self { glib_ffi::GTRUE } else { glib_ffi::GFALSE }
    }
}

And it is used like this:

let my_rust_bool: bool = true;

g_some_function_that_takes_gboolean (my_rust_bool.to_glib ());

(If you are thinking "a function call to marshal a boolean" — note how the functions are inlined, and the optimizer basically compiles them down to nothing.)
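A round-trip sketch, again with local stand-ins for the glib_ffi types, shows that the two traits compose as you'd expect:

```rust
// Hypothetical stand-ins for glib_ffi's boolean type and constants.
#[allow(non_camel_case_types)]
type gboolean = i32;
const GFALSE: gboolean = 0;
const GTRUE: gboolean = 1;

// Rust -> Glib, mirroring the ToGlib impl above.
fn to_glib(b: bool) -> gboolean {
    if b { GTRUE } else { GFALSE }
}

// Glib -> Rust, mirroring the FromGlib impl from the previous section.
fn from_glib(val: gboolean) -> bool {
    val != GFALSE
}

fn main() {
    // Round-tripping preserves the value in both directions.
    assert_eq!(from_glib(to_glib(true)), true);
    assert_eq!(from_glib(to_glib(false)), false);
}
```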

Pointer types - from Glib to Rust

That's all very nice for simple types like booleans and ints. Pointers to other objects are slightly more complicated.

GObject-Introspection allows one to specify how pointer arguments to functions are handled by using a transfer specifier.

(transfer none)

For example, if you call gtk_window_set_title(window, "Hello"), you would expect the function to make its own copy of the "Hello" string. In Rust terms, you would be passing it a simple borrowed reference. GObject-Introspection (we'll abbreviate it as GI) calls this GI_TRANSFER_NOTHING, and it's specified by using (transfer none) in the documentation strings for function arguments or return values.

The corresponding trait to bring in pointers from Glib to Rust, without taking ownership, is this. It's unsafe because it will be used to de-reference pointers that come from the wild west:

pub trait FromGlibPtrNone<P: Ptr>: Sized {
    unsafe fn from_glib_none(ptr: P) -> Self;
}

And you use it via this generic function:

#[inline]
pub unsafe fn from_glib_none<P: Ptr, T: FromGlibPtrNone<P>>(ptr: P) -> T {
    FromGlibPtrNone::from_glib_none(ptr)
}

Let's look at how this works. Here is the FromGlibPtrNone trait implemented for strings.

impl FromGlibPtrNone<*const c_char> for String {
    #[inline]
    unsafe fn from_glib_none(ptr: *const c_char) -> Self {
        assert!(!ptr.is_null());
        String::from_utf8_lossy(CStr::from_ptr(ptr).to_bytes()).into_owned()
    }
}

Given a pointer to a c_char, the conversion to String first asserts that the pointer is not NULL, and then uses CStr to wrap the C ptr, like we looked at last time, validate it as UTF-8, and copy the string for us.

Unfortunately, there's a copy involved in the last step. It may be possible to use Cow<&str> there instead to avoid a copy if the char* from Glib is indeed valid UTF-8.
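For what it's worth, std's own CStr::to_string_lossy() already behaves that way: it returns a Cow<str> that borrows when the bytes are valid UTF-8, and only allocates when it has to repair invalid sequences. A quick sketch of that behavior:

```rust
use std::borrow::Cow;
use std::ffi::CStr;

fn main() {
    // Valid UTF-8: to_string_lossy() can borrow, no copy needed.
    let valid = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    match valid.to_string_lossy() {
        Cow::Borrowed(s) => assert_eq!(s, "hello"),
        Cow::Owned(_) => unreachable!("valid UTF-8 should not be copied"),
    }

    // Invalid UTF-8: the bytes are copied and repaired, so we get an Owned value.
    let invalid = CStr::from_bytes_with_nul(b"h\xFFi\0").unwrap();
    assert!(matches!(invalid.to_string_lossy(), Cow::Owned(_)));
}
```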

(transfer full)

And how about transferring ownership of the pointed-to value? There is this trait:

pub trait FromGlibPtrFull<P: Ptr>: Sized {
    unsafe fn from_glib_full(ptr: P) -> Self;
}

And the implementation for strings is as follows. In Glib's scheme of things, "transferring ownership of a string" means that the recipient of the string must eventually g_free() it.

impl FromGlibPtrFull<*const c_char> for String {
    #[inline]
    unsafe fn from_glib_full(ptr: *const c_char) -> Self {
        let res = from_glib_none(ptr);
        glib_ffi::g_free(ptr as *mut _);
        res
    }
}

Given a pointer to a c_char, the conversion to String first does the conversion with from_glib_none(), using the trait we saw before, and puts it in res. It then calls g_free() on the original C string, and returns res, a Rust string which we own.

Pointer types - from Rust to Glib

Consider the case where you want to pass a String from Rust to a Glib function that takes a *const c_char — in C parlance, a char *, without the Glib function acquiring ownership of the string. For example, assume that the C version of gtk_window_set_title() is in the gtk_ffi module. You may want to call it like this:

fn rust_binding_to_window_set_title(window: &gtk::Window, title: &String) {
    gtk_ffi::gtk_window_set_title(..., make_c_string_from_rust_string(title));
}

Now, what would that make_c_string_from_rust_string() look like?

  • We have: a Rust String — UTF-8, known length, no nul terminator

  • We want: a *const char — nul-terminated UTF-8

So, let's write this:

fn make_c_string_from_rust_string(s: &String) -> *const c_char {
    let cstr = CString::new(&s[..]).unwrap();
    let ptr = cstr.into_raw() as *const c_char;
    ptr
}

Line 1: Take in a &String; return a *const c_char.

Line 2: Build a CString like we saw a few days ago: this allocates a byte buffer with space for a nul terminator, and copies the string's bytes. We unwrap() for this simple example, because CString::new() will return an error if the String contains nul characters in the middle, which C doesn't understand.

Line 3: Call into_raw() to get a pointer to the byte buffer, and cast it to a *const c_char. We'll need to free this value later.

But this kind of sucks, because we then have to use this function, pass the pointer to a C function, and then reconstitute the CString so it can free the byte buffer:

let buf = make_c_string_from_rust_string(my_string);
unsafe { c_function_that_takes_a_string(buf); }
let _ = CString::from_raw(buf as *mut c_char);

The solution that Glib-rs provides for this is very Rusty, and rather elegant.

Stashes

We want:

  • A temporary place to put a piece of data
  • A pointer to that buffer
  • Automatic memory management for both of those

Glib-rs defines a Stash for this:

pub struct Stash<'a,                                 // we have a lifetime
                 P: Copy,                            // the pointer must be copy-able
                 T: ?Sized + ToGlibPtr<'a, P>> (     // Type for the temporary place
    pub P,                                           // We store a pointer...
    pub <T as ToGlibPtr<'a, P>>::Storage             // ... to a piece of data with that lifetime ...
);

... and the piece of data must be of the associated type ToGlibPtr::Storage, which we will see shortly.

This struct Stash goes along with the ToGlibPtr trait:

pub trait ToGlibPtr<'a, P: Copy> {
    type Storage;

    fn to_glib_none(&'a self) -> Stash<'a, P, Self>;  // returns a Stash whose temporary storage
                                                      // has the lifetime of our original data
}

Let's unpack this by looking at the implementation of the "transfer a String to a C function while keeping ownership":

impl <'a> ToGlibPtr<'a, *const c_char> for String {
    type Storage = CString;

    #[inline]
    fn to_glib_none(&'a self) -> Stash<'a, *const c_char, String> {
        let tmp = CString::new(&self[..]).unwrap();
        Stash(tmp.as_ptr(), tmp)
    }
}

Line 1: We implement ToGlibPtr<'a, *const c_char> for String, declaring the lifetime 'a for the Stash.

Line 2: Our temporary storage is a CString.

Line 6: Make a CString like before.

Line 7: Create the Stash with a pointer to the CString's contents, and the CString itself.

(transfer none)

Now, we can use ".0" to extract the first field from our Stash, which is precisely the pointer we want to a byte buffer:

let my_string = ...;
unsafe { c_function_which_takes_a_string(my_string.to_glib_none().0); }

Now Rust knows that the temporary buffer inside the Stash has the lifetime of my_string, and it will free it automatically when the string goes out of scope. If we can accept the .to_glib_none().0 incantation for "lending" pointers to C, this works perfectly.
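To make the pattern concrete without pulling in glib-rs, here is a hypothetical miniature Stash built only on the standard library. The names Stash, ToCPtr, to_c_none, and c_strlen are invented for this sketch, and the real glib-rs Stash is generic and carries a lifetime parameter, which is omitted here for brevity:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hypothetical miniature Stash: the pointer plus the CString that keeps
// the pointed-to buffer alive.
struct Stash(*const c_char, CString);

trait ToCPtr {
    fn to_c_none(&self) -> Stash;
}

impl ToCPtr for str {
    fn to_c_none(&self) -> Stash {
        let tmp = CString::new(self).unwrap();
        // as_ptr() stays valid after the move: the heap buffer doesn't move.
        Stash(tmp.as_ptr(), tmp)
    }
}

// Stand-in for a C function that reads a nul-terminated string.
unsafe fn c_strlen(ptr: *const c_char) -> usize {
    CStr::from_ptr(ptr).to_bytes().len()
}

fn main() {
    let title = "Hello";
    // The temporary Stash lives until the end of the statement,
    // so the pointer stays valid for the whole call.
    let n = unsafe { c_strlen(title.to_c_none().0) };
    assert_eq!(n, 5);
}
```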

(transfer full)

And for transferring ownership to the C function? The ToGlibPtr trait has another method:

pub trait ToGlibPtr<'a, P: Copy> {
    ...

    fn to_glib_full(&self) -> P;
}

And here is the implementation for strings:

impl <'a> ToGlibPtr<'a, *const c_char> for String {
    fn to_glib_full(&self) -> *const c_char {
        unsafe {
            glib_ffi::g_strndup(self.as_ptr() as *const c_char, 
                                self.len() as size_t)
                as *const c_char
        }
    }
}

We basically g_strndup() the Rust string's contents from its byte buffer and its len(), and we can then pass this on to C. That code will be responsible for g_free()ing the C-side string.
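The ownership-transferring direction also has a std-only analogue. In this hypothetical sketch, CString::into_raw plays the role of g_strndup() (allocate a copy the "C side" now owns), and CString::from_raw plays the role of the eventual g_free(); to_c_full is an invented name mirroring to_glib_full:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hypothetical std-only analogue of to_glib_full: allocate a copy
// whose ownership is handed to the callee.
fn to_c_full(s: &str) -> *mut c_char {
    CString::new(s).unwrap().into_raw()
}

fn main() {
    let ptr = to_c_full("take it");
    // "C" uses the string...
    let len = unsafe { CStr::from_ptr(ptr).to_bytes().len() };
    assert_eq!(len, 7);
    // ...and is responsible for eventually freeing it:
    unsafe { drop(CString::from_raw(ptr)) };
}
```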

Next up

Transferring lists and arrays. Stay tuned!

a silhouette of a person's head and shoulders, used as a default avatar

Yundoo Y8 RK3399 Settop Box (Part 2)

Debug-UART

Although the Y8 has no pin header providing access to the debug UART, it's easy to add one, provided you have a soldering iron at hand:

 

As the pads have a 2.0 mm spacing, I used a short piece of ribbon cable with the typical 2.54 mm pin header at the other end. The RK3399 UART uses 1.5 Mbps instead of the common 115.2 kbps.

MaskRom Mode

The RK3399 has a fixed boot order – SPI, eMMC, SD card. So to force the RK3399 to boot anything other than the Android firmware on the internal, soldered eMMC flash, one has to "disable" the eMMC. The canonical method for doing so on any Rockchip board is to short the eMMC clock line to ground. As the bootrom code can no longer detect any usable boot code on the eMMC, it will try the SD card next and will finally fall back to the so-called MaskRom mode. The MaskRom code provides access to the SoC via USB, allowing one to bootstrap the device.

Finding the eMMC clock was a little bit more difficult – the eMMC chip is a BGA chip on the top of the PCB. Older NAND chips in TSSOP packages allowed direct access to the chip pins; obviously this is not possible with BGAs. As there are no PCB traces near the eMMC on the top layer, there was a quite good chance of finding the relevant traces on the bottom layer. Two photos and a few minutes later, spent with GIMP carefully aligning the two images, I was able to locate the traces and a few test points:

From the datasheet of the Samsung KLMBG4WEBD-B031 eMMC flash it became apparent that the clock had to be on the left side of the board. Using my scope I measured the signals available on the different test points, et voilà: a steady 200 MHz signal. Another giveaway was the 22 Ω resistor on this signal, which can also be found in the Firefly-RK3399 schematics.

 

The series resistor is somewhat odd, as a series resistor for signal conditioning should normally be located near the driver (i.e. the SoC). Anyway, it makes a nice spot to force the clock signal to ground.

When pressing the Power button to start the Y8, normally the red LED lights up for one to two seconds and then turns blue. When the eMMC clock is shorted, the LED stays red, and the device is in MaskRom mode. The Y8 can be connected to a USB host using a USB Type-C to Type-A cable, and it will show up with the RK3399-specific Vendor/Product ID 2207:330C.

To be continued …

a silhouette of a person's head and shoulders, used as a default avatar

Developing with OpenSSL 1.1.x on openSUSE Leap

The SUSE OpenSSL maintainers are hard at work to migrate openSUSE Tumbleweed and SUSE Linux Enterprise 15 to use OpenSSL 1.1 by default. Some work remains, see boo#1042629.

Here is how you can use OpenSSL 1.1 in parallel to the 1.0 version on openSUSE Leap for development and run-time:

zypper ar --refresh --check obs://security:tls security_tls
zypper in --from security_tls openssl libopenssl-devel

This results in the following package changes:

The following 3 NEW packages are going to be installed:
libopenssl-1_1_0-devel libopenssl1_1_0 openssl-1_1_0

The following 2 packages are going to be upgraded:
libopenssl-devel openssl

The following 2 packages are going to change architecture:
libopenssl-devel x86_64 -> noarch
openssl x86_64 -> noarch

The following 2 packages are going to change vendor:
libopenssl-devel openSUSE -> obs://build.opensuse.org/security
openssl openSUSE -> obs://build.opensuse.org/security

This will retain a working OpenSSL 1.0 run-time, and the two versions can be used in parallel. As you can see, the openssl and libopenssl-devel packages turn into architecture-independent meta-packages.

Verify the installed command line version:

openssl version
OpenSSL 1.1.0f-fips 25 May 2017

To revert:

zypper in --from update --oldpackage openssl -libopenssl1_1_0

Accept the default conflict resolutions to switch back to the distribution packages.

Have the best of success porting your code to support OpenSSL 1.1. The upstream wiki has a good article on the OpenSSL 1.1 API changes.

the avatar of Federico Mena-Quintero

Correctness in Rust: building strings

Rust tries to follow the "make illegal states unrepresentable" mantra in several ways. In this post I'll show several things related to the process of building strings, from bytes in memory, or from a file, or from char * things passed from C.

Strings in Rust

The easiest way to build a string is to do it directly at compile time:

let my_string = "Hello, world!";

In Rust, strings are UTF-8. Here, the compiler checks that our string literal is valid UTF-8. If we try to be sneaky and insert an invalid character...

let my_string = "Hello \xf0";

We get a compiler error:

error: this form of character escape may only be used with characters in the range [\x00-\x7f]
 --> foo.rs:2:30
  |
2 |     let my_string = "Hello \xf0";
  |                              ^^

Rust strings know their length, unlike C strings. They can contain a nul character in the middle, because they don't need a nul terminator at the end.

let my_string = "Hello \x00 zero";
println!("{}", my_string);

The output is what you expect:

$ ./foo | hexdump -C
00000000  48 65 6c 6c 6f 20 00 20  7a 65 72 6f 0a           |Hello . zero.|
0000000d                    ^ note the nul char here
$

So, to summarize, in Rust:

  • Strings are encoded in UTF-8
  • Strings know their length
  • Strings can have nul chars in the middle

This is a bit different from C:

  • Strings don't exist!

Okay, just kidding. In C:

  • A lot of software has standardized on UTF-8.
  • Strings don't know their length - a char * is a raw pointer to the beginning of the string.
  • Strings conventionally have a nul terminator, that is, a zero byte that marks the end of the string. Therefore, you can't have nul characters in the middle of strings.

Building a string from bytes

Let's say you have an array of bytes and want to make a string from them. Rust won't let you just cast the array, like C would. First you need to do UTF-8 validation. For example:

fn convert_and_print(bytes: Vec<u8>) {
    let result = String::from_utf8(bytes);
    match result {
        Ok(string) => println!("{}", string),
        Err(e) => println!("{:?}", e)
    }
}

fn main() {
    convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
    convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
}

In lines 10 and 11, we call convert_and_print() with different arrays of bytes; the first one is valid UTF-8, and the second one isn't.

Line 2 calls String::from_utf8(), which returns a Result, i.e. something with a success value or an error. In lines 3-5 we unpack this Result. If it's Ok, we print the converted string, which has been validated for UTF-8. Otherwise, we print the debug representation of the error.

The program prints the following:

$ ~/foo
Hello
FromUtf8Error { bytes: [72, 101, 240, 108, 108, 111], error: Utf8Error { valid_up_to: 2, error_len: Some(1) } }

Here, in the error case, the Utf8Error tells us that the bytes are valid UTF-8 up to index 2; that is the first problematic index. We also get some extra information that lets the program know whether the problematic sequence was incomplete because it was truncated at the end of the byte array, or whether it is complete and in the middle.
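The valid_up_to() offset lets a caller salvage the valid prefix instead of discarding the whole buffer. Here's a hypothetical helper (valid_prefix is an invented name) sketching that:

```rust
// Hypothetical helper: keep only the valid UTF-8 prefix of a byte
// vector, using the error's valid_up_to() offset.
fn valid_prefix(bytes: Vec<u8>) -> String {
    match String::from_utf8(bytes) {
        Ok(s) => s,
        Err(e) => {
            let n = e.utf8_error().valid_up_to();
            let mut bytes = e.into_bytes();
            bytes.truncate(n);
            // The first n bytes were already validated, so this can't fail.
            String::from_utf8(bytes).unwrap()
        }
    }
}

fn main() {
    assert_eq!(valid_prefix(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]), "Hello");
    assert_eq!(valid_prefix(vec![0x48, 0x65, 0xf0, 0x6c]), "He");
}
```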

And for a "just make this printable, pls" API? We can use String::from_utf8_lossy(), which replaces invalid UTF-8 sequences with U+FFFD REPLACEMENT CHARACTER:

fn convert_and_print(bytes: Vec<u8>) {
    let string = String::from_utf8_lossy(&bytes);
    println!("{}", string);
}

fn main() {
    convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
    convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
}

This prints the following:

$ ~/foo
Hello
He�llo

Reading from files into strings

Now, let's assume you want to read chunks of a file and put them into strings. Let's go from the low-level parts up to the high level "read a line" API.

Single bytes and single UTF-8 characters

When you open a File, you get an object that implements the Read trait. In addition to the usual "read me some bytes" method, it can also give you back an iterator over bytes, or an iterator over UTF-8 characters.

The Read.bytes() method gives you back a Bytes iterator, whose next() method returns Result<u8, io::Error>. When you ask the iterator for its next item, that Result means you'll get a byte out of it successfully, or an I/O error.

In contrast, the Read.chars() method gives you back a Chars iterator, and its next() method returns Result<char, CharsError>, not io::Error. This extended CharsError has a NotUtf8 case, which you get back when next() tries to read the next UTF-8 sequence from the file and the file has invalid data. CharsError also has a case for normal I/O errors.

Reading lines

While you could build a UTF-8 string one character at a time, there are more efficient ways to do it.

You can create a BufReader, a buffered reader, out of anything that implements the Read trait. BufReader has a convenient read_line() method, to which you pass a mutable String and it returns a Result<usize, io::Error> with either the number of bytes read, or an error.

That method is declared in the BufRead trait, which BufReader implements. Why the separation? Because other concrete structs also implement BufRead, such as Cursor — a nice wrapper that lets you use a vector of bytes like an I/O Read or Write implementation, similar to GMemoryInputStream.

If you prefer an iterator rather than the read_line() function, BufRead also gives you a lines() method, which gives you back a Lines iterator.

In both cases (the read_line() method or the Lines iterator), the error you get back can be of kind ErrorKind::InvalidData, which indicates that there was an invalid UTF-8 sequence in the line to be read. It can also be a normal I/O error, of course.
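Since a Cursor over an in-memory byte vector already implements BufRead, both behaviors can be exercised without touching a real file. A small sketch:

```rust
use std::io::{BufRead, Cursor, ErrorKind};

fn main() {
    // Two well-formed lines: the Lines iterator yields Ok(String)s.
    let good = Cursor::new(b"hello\nworld\n".to_vec());
    let lines: Vec<String> = good.lines().map(|l| l.unwrap()).collect();
    assert_eq!(lines, vec!["hello", "world"]);

    // A line containing an invalid UTF-8 byte yields ErrorKind::InvalidData.
    let bad = Cursor::new(vec![0x48, 0xf0, 0x0a]);
    let err = bad.lines().next().unwrap().unwrap_err();
    assert_eq!(err.kind(), ErrorKind::InvalidData);
}
```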

Summary so far

There is no way to build a String, or a &str slice, from invalid UTF-8 data. All the methods that let you turn bytes into string-like things perform validation, and return a Result to let you know if your bytes validated correctly.

The exceptions are in the unsafe methods, like String::from_utf8_unchecked(). You should really only use them if you are absolutely sure that your bytes were validated as UTF-8 beforehand.

There is no way to bring in data from a file (or anything file-like, that implements the Read trait) and turn it into a String without going through functions that do UTF-8 validation. There is no unsafe "read a line" API without validation; you would have to build one yourself, but the I/O hit is probably going to be slower than validating data in memory anyway, so you may as well validate.

C strings and Rust

For unfortunate historical reasons, C flings around char * to mean different things. In the context of Glib, it can mean

  • A valid, nul-terminated UTF-8 sequence of bytes (a "normal string")
  • A nul-terminated file path, which has no meaningful encoding
  • A nul-terminated sequence of bytes, not validated as UTF-8.

What a particular char * means depends on which API you got it from.

Bringing a string from C to Rust

From Rust's viewpoint, getting a raw char * from C (a "*const c_char" in Rust parlance) means that it gets a pointer to a buffer of unknown length.

Now, that may not be entirely accurate:

  • You may indeed only have a pointer to a buffer of unknown length
  • You may have a pointer to a buffer, and also know its length (i.e. the offset at which the nul terminator is)

The Rust standard library provides a CStr object, which means, "I have a pointer to an array of bytes, and I know its length, and I know the last byte is a nul".

CStr provides an unsafe from_ptr() constructor which takes a raw pointer, and walks the memory to which it points until it finds a nul byte. You must give it a valid pointer, and you had better guarantee that there is a nul terminator, or CStr will walk until the end of your process' address space looking for one.

Alternatively, if you know the length of your byte array, and you know that it has a nul byte at the end, you can call CStr::from_bytes_with_nul(). You pass it a &[u8] slice; the function will check that a) the last byte in that slice is indeed a nul, and b) there are no nul bytes in the middle.
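Both failure modes of that check are easy to exercise directly:

```rust
use std::ffi::CStr;

fn main() {
    // Well-formed: exactly one nul byte, at the end.
    assert!(CStr::from_bytes_with_nul(b"hello\0").is_ok());

    // Rejected: no nul terminator at all.
    assert!(CStr::from_bytes_with_nul(b"hello").is_err());

    // Rejected: a nul byte in the middle.
    assert!(CStr::from_bytes_with_nul(b"he\0llo\0").is_err());
}
```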

The unsafe counterpart of this last function is CStr::from_bytes_with_nul_unchecked(): it also takes an &[u8] slice, but you must guarantee that the last byte is a nul and that there are no nul bytes in the middle.

I really like that the Rust documentation tells you when functions are not "instantaneous" and must instead walk arrays, e.g. to do validation or to look for the nul terminator as above.

Turning a CStr into a string-like

Now, the above indicates that a CStr is a nul-terminated array of bytes. We have no idea what the bytes inside look like; we just know that they don't contain any other nul bytes.

There is a CStr::to_str() method, which returns a Result<&str, Utf8Error>. It performs UTF-8 validation on the array of bytes. If the array is valid, the function just returns a slice of the validated bytes minus the nul terminator (i.e. just what you expect for a Rust string slice). Otherwise, it returns an Utf8Error with the details like we discussed before.

There is also CStr::to_string_lossy() which does the replacement of invalid UTF-8 sequences like we discussed before.
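Both conversions can be tried out on CStrs built from byte literals:

```rust
use std::ffi::CStr;

fn main() {
    let ok = CStr::from_bytes_with_nul(b"hola\0").unwrap();
    // Valid UTF-8: to_str() yields a borrowed &str, minus the nul terminator.
    assert_eq!(ok.to_str().unwrap(), "hola");

    let bad = CStr::from_bytes_with_nul(b"ho\xf0la\0").unwrap();
    // Invalid UTF-8: to_str() reports the error...
    assert!(bad.to_str().is_err());
    // ...while to_string_lossy() substitutes U+FFFD REPLACEMENT CHARACTER.
    assert_eq!(bad.to_string_lossy(), "ho\u{FFFD}la");
}
```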

Conclusion

Strings in Rust are UTF-8 encoded, they know their length, and they can have nul bytes in the middle.

To build a string from raw bytes, you must go through functions that do UTF-8 validation and tell you if it failed. There are unsafe functions that let you skip validation, but then of course you are on your own.

The low-level functions which read data from files operate on bytes. On top of those, there are convenience functions to read validated UTF-8 characters, lines, etc. All of these tell you when there was invalid UTF-8 or an I/O error.

Rust lets you wrap a raw char * that you got from C into something that can later be validated and turned into a string. Anything that manipulates a raw pointer is unsafe; this includes the "wrap me this pointer into a C string abstraction" API, and the "build me an array of bytes from this raw pointer" API. Later, you can validate those as UTF-8 and build real Rust strings — or know if the validation failed.

Rust builds these little "corridors" through the API so that illegal states are unrepresentable.

a silhouette of a person's head and shoulders, used as a default avatar

Yundoo Y8 RK3399 Settop Box (Part 1)

For some tasks like compiling and QA tests I was looking for a more powerful addition to my SBC collection of RPi3 and Allwinner Pine64 boards. Requirements: AArch64, plenty of RAM, fast CPU, fast network, mainline kernel support (in a not too distant future).

Currently, many SBCs available in the 100 $US/€ price range have only 1 or 2 GByte of RAM (or even less) and often provide just 4 Cortex-A53 cores running at 1.2-1.4 GHz. There are some exceptions, at least regarding CPU power, but still with limited memory. Some examples:

  • BananaPi M3 (85€)
    • Pros: 8 Cores @ 1.8 GHz (Allwinner A83T), GBit Ethernet, 8 GByte eMMC
    • Cons: ARM A7 (32bit), 2 GByte RAM, only 2x USB 2.0, PowerVR graphics
  • LeMaker HiKey 620 (109 $US)
    • Pros: 8 Cores @ 1.2 GHz (Kirin 620), 64bit (A53) 8 GByte eMMC
    • Cons: No ethernet, 2 GByte RAM, only 3x USB 2.0, discontinued
  • Asus Tinkerboard (65€)
    • Pros: 4 Cores @ 2.0 GHz (Rockchip RK3288), GBit Ethernet
    • Cons: ARM A17 (32bit), 2GByte RAM, USB 2.0 only (4x)

There are more expensive options like the LeMaker HiKey 960 (240€, 3 GByte RAM, still no ethernet), or the Firefly RK3399 (200€, 4 GByte RAM), but these were clearly outside the 100€ price range.

These SBC boards all sport a considerable number of GPIOs, SPI and I2C ports, and the like, but for hardware connectivity I already have the above-mentioned bunch of RPis (1 and 3) and Pine64 boards. So, going in a slightly different direction, I investigated some current settop boxes.

Rockchip RK3399

The Rockchip RK3399 used on the Firefly is one of the available options; it powers several different STBs with varying RAM and eMMC configurations. I went with the Yundoo Y8, for the following reasons:

  • 4 GByte of RAM
  • 32 GByte of eMMC
  • 2 A72 Cores (2.0 GHz) + 4 A53 Cores (1.5 GHz)
  • 1x GBit Ethernet
  • 1x USB 3.0, standard A type receptacle
  • 1x USB 3.0, Type C (also provides DisplayPort video using alternate mode)
  • 2x USB 2.0, Type A
  • available for 105€, including EU power supply, IR Remote, short HDMI cable

Contrary to the above-mentioned Firefly, you get neither PCIe expansion ports nor any GPIOs, so if this is a requirement, the Y8 is not an option for you.

In the next part, I will give some more details useful for running stock Linux instead of the provided Android 6.0 on the device – so stay tuned for UART info, MaskRom mode etc.

Update: UART/MaskRom info is now available: Yundoo Y8 RK3399 Settop Box (Part 2)