

Fixing the slow-reboot problem in openSUSE

This bug had bothered me for a long time, ever since version 42.1. The gist of it: on shutdown or reboot, the system would hang at the console prompt for 60-90 seconds, and only then power off. A Google search, of course, turned up a pile of similar complaints from other people, along with advice on how to fix the problem. But nothing helped me: not disabling services, not unmounting partitions by hand, not killing every process manually.

I finally managed to figure out that the culprit was broken behavior in systemd 228, the version used in openSUSE Leap. The recipe described here is what helped me. The quick fix looks like this:

The bug can be worked around by creating a file /etc/sysctl.d/50-coredump.conf with the following contents:

kernel.core_pattern=core

That causes the kernel to write coredumps directly, bypassing the buggy systemd code.

However, this way the kernel will, under certain circumstances, write large core dumps to a file on the root partition. To avoid that, you can dump the core to /dev/null instead:

sudo -i

ln -s /dev/null /etc/sysctl.d/50-coredump.conf

echo '* hard core 0' >> /etc/security/limits.conf

Update.

Since then I have found that in some cases the method described above does not help; what reliably does help is reducing the timeout that systemd uses while waiting for user processes to terminate. All you need to do is edit /etc/systemd/system.conf, uncommenting the DefaultTimeoutStopSec parameter and setting it to some small value. For example, like this:

#DefaultStandardOutput=journal
#DefaultStandardError=inherit
#DefaultTimeoutStartSec=90s
DefaultTimeoutStopSec=5s
#DefaultRestartSec=100ms
#DefaultStartLimitIntervalSec=10s
#DefaultStartLimitBurst=5

Now there are no more delays on reboot or shutdown!

Federico Mena-Quintero

How Glib-rs works, part 1: Type conversions

During the GNOME+Rust hackfest in Mexico City, Niko Matsakis started the implementation of gnome-class, a procedural macro that will let people implement new GObject classes in Rust and export them to the world. Currently, if you want to write a new GObject (e.g. a new widget) and put it in a library so that it can be used from language bindings via GObject-Introspection, you have to do it in C. It would be nice to be able to do this in a safe language like Rust.

How would it be done by hand?

In a C implementation of a new GObject subclass, one calls things like g_type_register_static() and g_signal_new() by hand, while being careful to specify the correct GType for each value, and being super-careful about everything, as C demands.

In Rust, one can in fact do exactly the same thing. You can call the same low-level GObject and GType functions. You can use #[repr(C)] for the instance and class structs that GObject will allocate for you, and which you then fill in.

You can see an example of this in gst-plugins-rs. This is where it implements a Sink GObject, in Rust, by calling Glib functions by hand: struct declarations, class_init() function, registration of type and interfaces.

How would it be done by a machine?

That's what Niko's gnome-class is about. During the hackfest it got to the point of being able to generate the code to create a new GObject subclass, register it, and export functions for methods. The syntax is not finalized yet, but it looks something like this:

gobject_gen! {
    class Counter {
        struct CounterPrivate {
            val: Cell<u32>
        }

        signal value_changed(&self);

        fn set_value(&self, v: u32) {
            let private = self.private();
            private.val.set(v);
            // private.emit_value_changed();
        }

        fn get_value(&self) -> u32 {
            let private = self.private();
            private.val.get()
        }
    }
}

I started adding support for declaring GObject signals — mainly being able to parse them from what goes inside gobject_gen!() — and then being able to call g_signal_newv() at the appropriate time during the class_init() implementation.

Types in signals

Creating a signal for a GObject class is basically like specifying a function prototype: the object will invoke a callback function with certain arguments and return value when the signal is emitted. For example, this is how GtkButton registers its button-press-event signal:

  button_press_event_id =
    g_signal_new (I_("button-press-event"),
                  ...
                  G_TYPE_BOOLEAN,    /* type of return value */
                  1,                 /* how many arguments? */
                  GDK_TYPE_EVENT);   /* type of first and only argument */

g_signal_new() creates the signal and returns a signal id, an integer. Later, when the object wants to emit the signal, it uses that signal id like this:

GdkEventButton *event = ...;
gboolean return_val;

g_signal_emit (widget, button_press_event_id, 0, event, &return_val);

In the nice gobject_gen!() macro, if I am going to have a signal declaration like

signal button_press_event(&self, event: &ButtonPressEvent) -> bool;

then I will need to be able to translate the type names for ButtonPressEvent and bool into something that g_signal_newv() will understand: I need the GType values for those. Fundamental types like gboolean get constants like G_TYPE_BOOLEAN. Types that are defined at runtime, like GDK_TYPE_EVENT, get GType values generated at runtime, too, when one registers the type with g_type_register_*().

Rust type    GType
i32          G_TYPE_INT
u32          G_TYPE_UINT
bool         G_TYPE_BOOLEAN
etc.         etc.
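That mapping from Rust types to fundamental GTypes can be sketched as a simple lookup table. This is not gnome-class's actual code; the function name and the string-based table below are assumptions for illustration only:

```rust
// Hypothetical sketch: how a code generator like gnome-class might map
// Rust type names to GType constant names at macro-expansion time.
fn gtype_constant_for(rust_type: &str) -> Option<&'static str> {
    match rust_type {
        "i32" => Some("G_TYPE_INT"),
        "u32" => Some("G_TYPE_UINT"),
        "i64" => Some("G_TYPE_INT64"),
        "u64" => Some("G_TYPE_UINT64"),
        "bool" => Some("G_TYPE_BOOLEAN"),
        "f64" => Some("G_TYPE_DOUBLE"),
        // Runtime-registered types (e.g. GDK_TYPE_EVENT) cannot come from
        // a static table; they need the GType produced at registration.
        _ => None,
    }
}
```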

Glib types in Rust

How does glib-rs, the Rust binding to Glib and GObject, handle types?

Going from Glib to Rust

First we need a way to convert Glib's types to Rust, and vice-versa. There is a trait to convert simple Glib types into Rust types:

pub trait FromGlib<T>: Sized {
    fn from_glib(val: T) -> Self;
}

This means, if I have a T which is a Glib type, this trait will give you a from_glib() function which will convert it to a Rust type which is Sized, i.e. a type whose size is known at compilation time.

For example, this is how it is implemented for booleans:

impl FromGlib<glib_ffi::gboolean> for bool {
    #[inline]
    fn from_glib(val: glib_ffi::gboolean) -> bool {
        !(val == glib_ffi::GFALSE)
    }
}

and you use it like this:

let my_gboolean: glib_ffi::gboolean = g_some_function_that_returns_gboolean ();

let my_rust_bool: bool = from_glib (my_gboolean);

Booleans in glib and Rust have different sizes, and also different values. Glib's booleans use the C convention: 0 is false and anything else is true, while in Rust booleans are strictly false or true, and the size is undefined (with the current Rust ABI, it's one byte).
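The shape of this conversion is easy to reproduce in self-contained Rust, with a plain c_int standing in for glib_ffi::gboolean (the type alias and constant below are stand-ins for the example, not the real glib_ffi items):

```rust
use std::os::raw::c_int;

// Stand-ins for glib_ffi::gboolean and glib_ffi::GFALSE.
type Gboolean = c_int;
const GFALSE: Gboolean = 0;

// C convention: 0 is false, anything else is true.
fn from_glib(val: Gboolean) -> bool {
    val != GFALSE
}
```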

Going from Rust to Glib

And to go the other way around, from a Rust bool to a gboolean? There is this trait:

pub trait ToGlib {
    type GlibType;

    fn to_glib(&self) -> Self::GlibType;
}

This means, if you have a Rust type that maps to a corresponding GlibType, this will give you a to_glib() function to do the conversion.

This is the implementation for booleans:

impl ToGlib for bool {
    type GlibType = glib_ffi::gboolean;

    #[inline]
    fn to_glib(&self) -> glib_ffi::gboolean {
        if *self { glib_ffi::GTRUE } else { glib_ffi::GFALSE }
    }
}

And it is used like this:

let my_rust_bool: bool = true;

g_some_function_that_takes_gboolean (my_rust_bool.to_glib ());

(If you are thinking "a function call to marshal a boolean" — note how the functions are inlined, and the optimizer basically compiles them down to nothing.)

Pointer types - from Glib to Rust

That's all very nice for simple types like booleans and ints. Pointers to other objects are slightly more complicated.

GObject-Introspection allows one to specify how pointer arguments to functions are handled by using a transfer specifier.

(transfer none)

For example, if you call gtk_window_set_title(window, "Hello"), you would expect the function to make its own copy of the "Hello" string. In Rust terms, you would be passing it a simple borrowed reference. GObject-Introspection (we'll abbreviate it as GI) calls this GI_TRANSFER_NOTHING, and it's specified by using (transfer none) in the documentation strings for function arguments or return values.

The corresponding trait to bring in pointers from Glib to Rust, without taking ownership, is this. It's unsafe because it will be used to de-reference pointers that come from the wild west:

pub trait FromGlibPtrNone<P: Ptr>: Sized {
    unsafe fn from_glib_none(ptr: P) -> Self;
}

And you use it via this generic function:

#[inline]
pub unsafe fn from_glib_none<P: Ptr, T: FromGlibPtrNone<P>>(ptr: P) -> T {
    FromGlibPtrNone::from_glib_none(ptr)
}

Let's look at how this works. Here is the FromGlibPtrNone trait implemented for strings.

1  impl FromGlibPtrNone<*const c_char> for String {
2      #[inline]
3      unsafe fn from_glib_none(ptr: *const c_char) -> Self {
4          assert!(!ptr.is_null());
5          String::from_utf8_lossy(CStr::from_ptr(ptr).to_bytes()).into_owned()
6      }
7  }

Line 1: given a pointer to a c_char, the conversion to String...

Line 4: check for NULL pointers

Line 5: Use CStr to wrap the C pointer, like we looked at last time, validate it as UTF-8, and copy the string for us.

Unfortunately, there's a copy involved in the last step. It may be possible to use a Cow<str> there instead, to avoid a copy when the char * from Glib is indeed valid UTF-8.
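Since String::from_utf8_lossy() already returns a Cow, a copy-avoiding variant could look roughly like this (a sketch of the idea only; glib-rs does not actually provide this function):

```rust
use std::borrow::Cow;
use std::ffi::CStr;
use std::os::raw::c_char;

// Hypothetical zero-copy variant of from_glib_none(): borrow the C bytes
// when they are already valid UTF-8, allocate only when replacement
// characters are needed. Unsafe because the pointer comes from C.
unsafe fn from_glib_none_cow<'a>(ptr: *const c_char) -> Cow<'a, str> {
    assert!(!ptr.is_null());
    String::from_utf8_lossy(CStr::from_ptr(ptr).to_bytes())
}
```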

(transfer full)

And how about transferring ownership of the pointed-to value? There is this trait:

pub trait FromGlibPtrFull<P: Ptr>: Sized {
    unsafe fn from_glib_full(ptr: P) -> Self;
}

And the implementation for strings is as follows. In Glib's scheme of things, "transferring ownership of a string" means that the recipient of the string must eventually g_free() it.

1  impl FromGlibPtrFull<*const c_char> for String {
2      #[inline]
3      unsafe fn from_glib_full(ptr: *const c_char) -> Self {
4          let res = from_glib_none(ptr);
5          glib_ffi::g_free(ptr as *mut _);
6          res
7      }
8  }

Line 1: given a pointer to a c_char, the conversion to String...

Line 4: Do the conversion with from_glib_none() with the trait we saw before, put it in res.

Line 5: Call g_free() on the original C string.

Line 6: Return the res, a Rust string which we own.
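The same ownership dance can be simulated without Glib at all, with CString::into_raw()/from_raw() standing in for g_strdup()/g_free() (the function name below is made up for the example):

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Simulated "transfer full": take ownership of a heap-allocated C string,
// copy it into a Rust String, then free the original. The allocation here
// comes from CString rather than g_strdup(), so we free it with
// CString::from_raw() instead of g_free().
unsafe fn from_glib_full_sim(ptr: *const c_char) -> String {
    assert!(!ptr.is_null());
    let res = String::from_utf8_lossy(CStr::from_ptr(ptr).to_bytes()).into_owned();
    drop(CString::from_raw(ptr as *mut c_char)); // the "g_free" step
    res
}
```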

Pointer types - from Rust to Glib

Consider the case where you want to pass a String from Rust to a Glib function that takes a *const c_char — in C parlance, a char *, without the Glib function acquiring ownership of the string. For example, assume that the C version of gtk_window_set_title() is in the gtk_ffi module. You may want to call it like this:

fn rust_binding_to_window_set_title(window: &Gtk::Window, title: &String) {
    gtk_ffi::gtk_window_set_title(..., make_c_string_from_rust_string(title));
}

Now, what would that make_c_string_from_rust_string() look like?

  • We have: a Rust String — UTF-8, known length, no nul terminator

  • We want: a *const c_char — nul-terminated UTF-8

So, let's write this:

1  fn make_c_string_from_rust_string(s: &String) -> *const c_char {
2      let cstr = CString::new(&s[..]).unwrap();
3      let ptr = cstr.into_raw() as *const c_char;
4      ptr
5  }

Line 1: Take in a &String; return a *const c_char.

Line 2: Build a CString like we saw a few days ago: this allocates a byte buffer with space for a nul terminator, and copies the string's bytes. We unwrap() for this simple example, because CString::new() will return an error if the String contained nul characters in the middle of the string, which C doesn't understand.

Line 3: Call into_raw() to get a pointer to the byte buffer, and cast it to a *const c_char. We'll need to free this value later.

But this kind of sucks, because we then have to use this function, pass the pointer to a C function, and then reconstitute the CString so it can free the byte buffer:

let buf = make_c_string_from_rust_string(my_string);
unsafe { c_function_that_takes_a_string(buf); }
let _ = CString::from_raw(buf as *mut c_char);

The solution that Glib-rs provides for this is very Rusty, and rather elegant.

Stashes

We want:

  • A temporary place to put a piece of data
  • A pointer to that buffer
  • Automatic memory management for both of those

Glib-rs defines a Stash for this:

1  pub struct Stash<'a,                                 // we have a lifetime
2                   P: Copy,                            // the pointer must be copy-able
3                   T: ?Sized + ToGlibPtr<'a, P>> (     // Type for the temporary place
4      pub P,                                           // We store a pointer...
5      pub <T as ToGlibPtr<'a, P>>::Storage             // ... to a piece of data with that lifetime ...
6  );

... and the piece of data must be of the associated type ToGlibPtr::Storage, which we will see shortly.

This struct Stash goes along with the ToGlibPtr trait:

pub trait ToGlibPtr<'a, P: Copy> {
    type Storage;

    fn to_glib_none(&'a self) -> Stash<'a, P, Self>;  // returns a Stash whose temporary storage
                                                      // has the lifetime of our original data
}

Let's unpack this by looking at the implementation of the "transfer a String to a C function while keeping ownership":

1  impl <'a> ToGlibPtr<'a, *const c_char> for String {
2      type Storage = CString;
3
4      #[inline]
5      fn to_glib_none(&self) -> Stash<'a, *const c_char, String> {
6          let tmp = CString::new(&self[..]).unwrap();
7          Stash(tmp.as_ptr(), tmp)
8      }
9  }

Line 1: We implement ToGlibPtr<'a, *const c_char> for String, declaring the lifetime 'a for the Stash.

Line 2: Our temporary storage is a CString.

Line 6: Make a CString like before.

Line 7: Create the Stash with a pointer to the CString's contents, and the CString itself.

(transfer none)

Now, we can use ".0" to extract the first field from our Stash, which is precisely the pointer we want to a byte buffer:

let my_string = ...;
unsafe { c_function_which_takes_a_string(my_string.to_glib_none().0); }

Now Rust knows that the temporary buffer inside the Stash has the lifetime of my_string, and it will free it automatically when the string goes out of scope. If we can accept the .to_glib_none().0 incantation for "lending" pointers to C, this works perfectly.
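The pattern is easy to replicate in miniature. This stripped-down stash (the names below are invented for the example; the real type lives in glib-rs) keeps the CString alive exactly as long as the struct holding the pointer:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Minimal Stash-alike: a raw pointer plus the owned buffer it points into.
// Field .0 is the pointer to lend to C; field .1 owns the bytes.
struct MiniStash(*const c_char, CString);

fn to_glib_none(s: &str) -> MiniStash {
    let tmp = CString::new(s).unwrap();
    // Moving a CString does not move its heap buffer, so the pointer
    // taken just before the move stays valid for the stash's lifetime.
    MiniStash(tmp.as_ptr(), tmp)
}
```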

(transfer full)

And for transferring ownership to the C function? The ToGlibPtr trait has another method:

pub trait ToGlibPtr<'a, P: Copy> {
    ...

    fn to_glib_full(&self) -> P;
}

And here is the implementation for strings:

impl <'a> ToGlibPtr<'a, *const c_char> for String {
    fn to_glib_full(&self) -> *const c_char {
        unsafe {
            glib_ffi::g_strndup(self.as_ptr() as *const c_char,
                                self.len() as size_t)
                as *const c_char
        }
    }
}
We basically g_strndup() the Rust string's contents from its byte buffer and its len(), and we can then pass this on to C. That code will be responsible for g_free()ing the C-side string.

Next up

Transferring lists and arrays. Stay tuned!


Yundoo Y8 RK3399 Settop Box (Part 2)

Debug-UART

Although the Y8 has no pin header providing access to the Debug UART, it's easy to add one if you have a soldering iron at hand:

 

As the pads have a 2.0 mm spacing, I used a short piece of ribbon cable with the typical 2.54 mm pin header at the other end. The RK3399 UART uses 1.5 Mbps instead of the common 115.2 kbps.

MaskRom Mode

The RK3399 has a fixed boot order – SPI, eMMC, SD card. So to force the RK3399 to boot anything other than the Android firmware on the internal, soldered eMMC flash, one has to "disable" the eMMC. The canonical method for doing so on any Rockchip board is to short the eMMC clock line to ground. As the bootrom code can no longer detect any usable boot code on the eMMC, it will try the SD card next and will finally fall back to the so-called MaskRom mode. The MaskRom code provides access to the SoC via USB, allowing one to bootstrap the device.

Finding the eMMC clock was a little more difficult – the eMMC chip is a BGA package on the top of the PCB. Older NAND chips in TSSOP packages allowed direct access to the chip pins; obviously this is not possible with BGAs. As there are no PCB traces near the eMMC on the top layer, there was a fairly good chance of finding the relevant traces on the bottom layer. Two photos and a few minutes later, spent with GIMP carefully aligning the two images, I was able to locate the traces and a few test points:

From the datasheet of the Samsung KLMBG4WEBD-B031 eMMC flash it became apparent that the clock had to be on the left side of the board. Using my scope I measured the signals available on the different test points, et voilà: a steady 200 MHz signal. Another giveaway was the 22 Ω resistor on this signal, which can also be found in the Firefly-RK3399 schematics.

 

The series resistor is somewhat odd, as a series resistor for signal conditioning should normally be located near the driver (i.e. the SoC). Anyway, it makes a nice spot to force the clock signal to ground.

When pressing the Power button to start the Y8, normally the red LED lights up for one to two seconds and then turns blue. When the eMMC clock is shorted, the LED will stay red, and the device is in MaskRom mode. The Y8 can be connected to a USB host using a USB Type-C to Type-A cable, and it will show up with the RK3399-specific Vendor/Product ID 2207:330C.

To be continued …


Developing with OpenSSL 1.1.x on openSUSE Leap

The SUSE OpenSSL maintainers are hard at work to migrate openSUSE Tumbleweed and SUSE Linux Enterprise 15 to use OpenSSL 1.1 by default. Some work remains, see boo#1042629.

Here is how you can use OpenSSL 1.1 in parallel to the 1.0 version on openSUSE Leap for development and run-time:

zypper ar --refresh --check obs://security:tls security_tls
zypper in --from security_tls openssl libopenssl-devel

Zypper will report the following changes:

The following 3 NEW packages are going to be installed:
libopenssl-1_1_0-devel libopenssl1_1_0 openssl-1_1_0

The following 2 packages are going to be upgraded:
libopenssl-devel openssl

The following 2 packages are going to change architecture:
libopenssl-devel x86_64 -> noarch
openssl x86_64 -> noarch

The following 2 packages are going to change vendor:
libopenssl-devel openSUSE -> obs://build.opensuse.org/security
openssl openSUSE -> obs://build.opensuse.org/security

This will retain a working OpenSSL 1.0 run-time, and the two versions can be used in parallel. As you can see, openssl and libopenssl-devel turn into non-architecture-specific meta-packages.

Verify the installed command line version:

openssl version
OpenSSL 1.1.0f-fips 25 May 2017

To revert:

zypper in --from update --oldpackage openssl -libopenssl1_1_0

Accept the default conflict resolutions to switch back to the distribution packages.

Have the best of success porting your code to support OpenSSL 1.1. The upstream wiki has a good article on the OpenSSL 1.1 API changes.

Federico Mena-Quintero

Correctness in Rust: building strings

Rust tries to follow the "make illegal states unrepresentable" mantra in several ways. In this post I'll show several things related to the process of building strings, from bytes in memory, or from a file, or from char * things passed from C.

Strings in Rust

The easiest way to build a string is to do it directly at compile time:

let my_string = "Hello, world!";

In Rust, strings are UTF-8. Here, the compiler checks our string literal is valid UTF-8. If we try to be sneaky and insert an invalid character...

let my_string = "Hello \xf0";

We get a compiler error:

error: this form of character escape may only be used with characters in the range [\x00-\x7f]
 --> foo.rs:2:30
  |
2 |     let my_string = "Hello \xf0";
  |                              ^^

Rust strings know their length, unlike C strings. They can contain a nul character in the middle, because they don't need a nul terminator at the end.

let my_string = "Hello \x00 zero";
println!("{}", my_string);

The output is what you expect:

$ ./foo | hexdump -C
00000000  48 65 6c 6c 6f 20 00 20  7a 65 72 6f 0a           |Hello . zero.|
0000000d                    ^ note the nul char here
$

So, to summarize, in Rust:

  • Strings are encoded in UTF-8
  • Strings know their length
  • Strings can have nul chars in the middle

This is a bit different from C:

  • Strings don't exist!

Okay, just kidding. In C:

  • A lot of software has standardized on UTF-8.
  • Strings don't know their length - a char * is a raw pointer to the beginning of the string.
  • Strings conventionally have a nul terminator, that is, a zero byte that marks the end of the string. Therefore, you can't have nul characters in the middle of strings.

Building a string from bytes

Let's say you have an array of bytes and want to make a string from them. Rust won't let you just cast the array, like C would. First you need to do UTF-8 validation. For example:

 1  fn convert_and_print(bytes: Vec<u8>) {
 2      let result = String::from_utf8(bytes);
 3      match result {
 4          Ok(string) => println!("{}", string),
 5          Err(e) => println!("{:?}", e)
 6      }
 7  }
 8
 9  fn main() {
10      convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
11      convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
12  }

In lines 10 and 11, we call convert_and_print() with different arrays of bytes; the first one is valid UTF-8, and the second one isn't.

Line 2 calls String::from_utf8(), which returns a Result, i.e. something with a success value or an error. In lines 3-5 we unpack this Result. If it's Ok, we print the converted string, which has been validated for UTF-8. Otherwise, we print the debug representation of the error.

The program prints the following:

$ ~/foo
Hello
FromUtf8Error { bytes: [72, 101, 240, 108, 108, 111], error: Utf8Error { valid_up_to: 2, error_len: Some(1) } }

Here, in the error case, the Utf8Error tells us that the bytes are valid UTF-8 up to index 2; that is the first problematic index. We also get some extra information which lets the program know whether the problematic sequence was incomplete and truncated at the end of the byte array, or whether it is complete and in the middle.
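Those details are directly accessible through FromUtf8Error::utf8_error(), should a program want to recover instead of just printing the error. A small sketch (the helper name is mine):

```rust
// Inspect a failed conversion: valid_up_to() gives the index of the first
// bad byte, error_len() says whether the bad sequence is complete
// (Some(n)) or truncated at the end of the input (None).
fn describe_utf8_failure(bytes: Vec<u8>) -> Option<(usize, Option<usize>)> {
    match String::from_utf8(bytes) {
        Ok(_) => None, // valid UTF-8, nothing to report
        Err(e) => {
            let utf8_err = e.utf8_error();
            Some((utf8_err.valid_up_to(), utf8_err.error_len()))
        }
    }
}
```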

And for a "just make this printable, pls" API? We can use String::from_utf8_lossy(), which replaces invalid UTF-8 sequences with U+FFFD REPLACEMENT CHARACTER:

fn convert_and_print(bytes: Vec<u8>) {
    let string = String::from_utf8_lossy(&bytes);
    println!("{}", string);
}

fn main() {
    convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
    convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
}

This prints the following:

$ ~/foo
Hello
He�llo

Reading from files into strings

Now, let's assume you want to read chunks of a file and put them into strings. Let's go from the low-level parts up to the high level "read a line" API.

Single bytes and single UTF-8 characters

When you open a File, you get an object that implements the Read trait. In addition to the usual "read me some bytes" method, it can also give you back an iterator over bytes, or an iterator over UTF-8 characters.

The Read.bytes() method gives you back a Bytes iterator, whose next() method returns Result<u8, io::Error>. When you ask the iterator for its next item, that Result means you'll get a byte out of it successfully, or an I/O error.

In contrast, the Read.chars() method gives you back a Chars iterator, and its next() method returns Result<char, CharsError>, not io::Error. This extended CharsError has a NotUtf8 case, which you get back when next() tries to read the next UTF-8 sequence from the file and the file has invalid data. CharsError also has a case for normal I/O errors.

Reading lines

While you could build a UTF-8 string one character at a time, there are more efficient ways to do it.

You can create a BufReader, a buffered reader, out of anything that implements the Read trait. BufReader has a convenient read_line() method, to which you pass a mutable String and it returns a Result<usize, io::Error> with either the number of bytes read, or an error.

That method is declared in the BufRead trait, which BufReader implements. Why the separation? Because other concrete structs also implement BufRead, such as Cursor — a nice wrapper that lets you use a vector of bytes like an I/O Read or Write implementation, similar to GMemoryInputStream.

If you prefer an iterator rather than the read_line() function, BufRead also gives you a lines() method, which gives you back a Lines iterator.

In both cases — the read_line() method or the Lines iterator — the error that you can get back can be of ErrorKind::InvalidData, which indicates that there was an invalid UTF-8 sequence in the line to be read. It can also be a normal I/O error, of course.
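Cursor makes it easy to demonstrate read_line() without touching the filesystem. A minimal sketch (the helper name is mine):

```rust
use std::io::{BufRead, BufReader, Cursor};

// Read the first line from an in-memory "file", just as one would with a
// real File. Invalid UTF-8 in the line surfaces as an InvalidData error.
fn first_line(data: &[u8]) -> std::io::Result<String> {
    let mut reader = BufReader::new(Cursor::new(data));
    let mut line = String::new();
    reader.read_line(&mut line)?; // includes the trailing '\n', if any
    Ok(line)
}
```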

Summary so far

There is no way to build a String, or a &str slice, from invalid UTF-8 data. All the methods that let you turn bytes into string-like things perform validation, and return a Result to let you know if your bytes validated correctly.

The exceptions are in the unsafe methods, like String::from_utf8_unchecked(). You should really only use them if you are absolutely sure that your bytes were validated as UTF-8 beforehand.

There is no way to bring in data from a file (or anything file-like, that implements the Read trait) and turn it into a String without going through functions that do UTF-8 validation. There is not an unsafe "read a line" API without validation — you would have to build one yourself, but the I/O hit is probably going to be slower than validating data in memory, anyway, so you may as well validate.

C strings and Rust

For unfortunate historical reasons, C flings around char * to mean different things. In the context of Glib, it can mean

  • A valid, nul-terminated UTF-8 sequence of bytes (a "normal string")
  • A nul-terminated file path, which has no meaningful encoding
  • A nul-terminated sequence of bytes, not validated as UTF-8.

What a particular char * means depends on which API you got it from.

Bringing a string from C to Rust

From Rust's viewpoint, getting a raw char * from C (a "*const c_char" in Rust parlance) means that it gets a pointer to a buffer of unknown length.

Now, that may not be entirely accurate:

  • You may indeed only have a pointer to a buffer of unknown length
  • You may have a pointer to a buffer, and also know its length (i.e. the offset at which the nul terminator is)

The Rust standard library provides a CStr object, which means, "I have a pointer to an array of bytes, and I know its length, and I know the last byte is a nul".

CStr provides an unsafe from_ptr() constructor which takes a raw pointer, and walks the memory to which it points until it finds a nul byte. You must give it a valid pointer, and you had better guarantee that there is a nul terminator, or CStr will walk until the end of your process' address space looking for one.

Alternatively, if you know the length of your byte array, and you know that it has a nul byte at the end, you can call CStr::from_bytes_with_nul(). You pass it a &[u8] slice; the function will check that a) the last byte in that slice is indeed a nul, and b) there are no nul bytes in the middle.

The unsafe version of this last function is unsafe CStr::from_bytes_with_nul_unchecked(): it also takes an &[u8] slice, but you must guarantee that the last byte is a nul and that there are no nul bytes in the middle.
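A quick sketch of that validation behavior (the wrapper name is mine):

```rust
use std::ffi::CStr;

// A byte slice is a well-formed C string only if it contains exactly one
// nul byte, and that byte is the last one.
fn is_valid_c_string(bytes: &[u8]) -> bool {
    CStr::from_bytes_with_nul(bytes).is_ok()
}
```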

I really like that the Rust documentation tells you when functions are not "instantaneous" and must instead walk arrays, e.g. to do validation or to look for the nul terminator as above.

Turning a CStr into a string-like

Now, the above indicates that a CStr is a nul-terminated array of bytes. We have no idea what the bytes inside look like; we just know that they don't contain any other nul bytes.

There is a CStr::to_str() method, which returns a Result<&str, Utf8Error>. It performs UTF-8 validation on the array of bytes. If the array is valid, the function just returns a slice of the validated bytes minus the nul terminator (i.e. just what you expect for a Rust string slice). Otherwise, it returns an Utf8Error with the details like we discussed before.

There is also CStr::to_string_lossy() which does the replacement of invalid UTF-8 sequences like we discussed before.

Conclusion

Strings in Rust are UTF-8 encoded, they know their length, and they can have nul bytes in the middle.

To build a string from raw bytes, you must go through functions that do UTF-8 validation and tell you if it failed. There are unsafe functions that let you skip validation, but then of course you are on your own.

The low-level functions which read data from files operate on bytes. On top of those, there are convenience functions to read validated UTF-8 characters, lines, etc. All of these tell you when there was invalid UTF-8 or an I/O error.

Rust lets you wrap a raw char * that you got from C into something that can later be validated and turned into a string. Anything that manipulates a raw pointer is unsafe; this includes the "wrap me this pointer into a C string abstraction" API, and the "build me an array of bytes from this raw pointer" API. Later, you can validate those as UTF-8 and build real Rust strings — or know if the validation failed.

Rust builds these little "corridors" through the API so that illegal states are unrepresentable.


Yundoo Y8 RK3399 Settop Box (Part 1)

For some tasks like compiling and QA tests, I was looking for a more powerful addition to my SBC collection of RPi3 and Allwinner Pine64 boards. Requirements: AArch64, plenty of RAM, fast CPU, fast network, and mainline kernel support (in the not-too-distant future).

Currently, many SBCs available in the 100 $US/€ price range only have 1 or 2 GByte of RAM (or even less) and often provide just 4 Cortex-A53 cores running at 1.2-1.4 GHz. There are some exceptions, at least regarding CPU power, but still with limited memory (some examples):

  • BananaPi M3 (85€)
    • Pros: 8 Cores @ 1.8 GHz (Allwinner A83T), GBit Ethernet, 8 GByte eMMC
    • Cons: ARM A7 (32bit), 2 GByte RAM, only 2x USB 2.0, PowerVR graphics
  • LeMaker HiKey 620 (109 $US)
    • Pros: 8 Cores @ 1.2 GHz (Kirin 620), 64bit (A53) 8 GByte eMMC
    • Cons: No ethernet, 2 GByte RAM, only 3x USB 2.0, discontinued
  • Asus Tinkerboard (65€)
    • Pros: 4 Cores @ 2.0 GHz (Rockchip RK3288), GBit Ethernet
    • Cons: ARM A17 (32bit), 2GByte RAM, USB 2.0 only (4x)

There are more expensive options like the LeMaker HiKey 960 (240€, 3 GByte RAM, still no ethernet) or the Firefly RK3399 (200€, 4 GByte RAM), but these were clearly out of the 100€ price range.

These SBC boards all sport a considerable number of GPIOs, SPI and I2C ports and the like, but for hardware connectivity I already have the above-mentioned bunch of RPis (1 and 3) and Pine64 boards. So, going in a slightly different direction, I investigated some current settop boxes.

Rockchip RK3399

The Rockchip RK3399 used on the Firefly is one of the available options; it powers several different STBs with different RAM and eMMC configurations. I went with the Yundoo Y8, for the following reasons:

  • 4 GByte of RAM
  • 32 GByte of eMMC
  • 2 A72 Cores (2.0 GHz) + 4 A53 Cores (1.5 GHz)
  • 1x GBit Ethernet
  • 1x USB 3.0, standard Type A receptacle
  • 1x USB 3.0, Type C (also provides DisplayPort video using alternate mode)
  • 2x USB 2.0, Type A
  • available for 105€, including EU power supply, IR Remote, short HDMI cable

Contrary to the above-mentioned Firefly, you get neither PCIe expansion ports nor any GPIOs, so if that is a requirement, the Y8 is no option for you.

In the next part, I will give some more details useful for running stock Linux instead of the provided Android 6.0 on the device – so stay tuned for UART info, MaskRom mode, etc.

Update: UART/MaskRom info is now available: Yundoo Y8 RK3399 Settop Box (Part 2)

A good question about TDD with game development.

Really, it depends on what you mean by game development. If we are talking about creating a game engine, then TDD absolutely has a place in the development process. It can be used as in any other application: you write tests using whatever framework is appropriate to express the business logic in logical chunks, meaning individual tests that each cover a single requirement rather than several at once, which makes it easier to troubleshoot when a test fails. As far as I know, a game engine is not much different from any other software, so following TDD principles should be no different.

If, however, we are just talking about creating a game from an existing engine, then things get trickier. A quick search shows that Unreal Engine supports some form of testing, but I am not sure how easy it is to use. Another issue with writing tests for games is: how do you express the logic of the game in a test? Sure, you can test things like the math or physics libraries, but what about more complex functionality? Overall, I don't think it would be worth trying to test the game as a whole, given the large number of possible use cases; it would likely be too much of an effort. That does not mean, however, that you should not test individual libraries/modules!
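To make the math-library point concrete, here is a minimal sketch of the kind of single-requirement unit checks that work well even in game code. The Vec2 class is hypothetical, not taken from any particular engine:

```java
// Hypothetical 2D vector class, the kind of small math type found in
// most game engines (names are illustrative, not from a real engine).
class Vec2 {
    final double x, y;

    Vec2(double x, double y) { this.x = x; this.y = y; }

    // Vector addition: one small, easily testable requirement.
    Vec2 add(Vec2 other) { return new Vec2(x + other.x, y + other.y); }

    // Euclidean length.
    double length() { return Math.sqrt(x * x + y * y); }

    public static void main(String[] args) {
        // Each check covers exactly one requirement, as argued above.
        Vec2 sum = new Vec2(1, 2).add(new Vec2(3, 4));
        if (sum.x != 4.0 || sum.y != 6.0)
            throw new AssertionError("add() is broken");
        // 3-4-5 triangle: the length must be exactly 5.0.
        if (new Vec2(3, 4).length() != 5.0)
            throw new AssertionError("length() is broken");
        System.out.println("all vector checks passed");
    }
}
```

When a check like this fails, the error message immediately names the single requirement that broke, which is the troubleshooting benefit mentioned above.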

I am curious to know what developers who actually do game development use for their tests, and what exactly they try to test when it comes to game functionality.


A good question about TDD with game development. was originally published in Information & Technology on Medium, where people are continuing the conversation by highlighting and responding to this story.

the avatar of Federico Mena-Quintero

GUADEC 2017 presentation

During GUADEC this year I gave a presentation called Replacing C library code with Rust: what I learned with librsvg. This is the PDF file; be sure to scroll past the full-page presentation slides until you reach the speaker's notes, especially for the code sections!

Replacing C library code with Rust - link to PDF

You can also get the ODP file for the presentation. This is released under a CC-BY-SA license.

For the presentation, my daughter Luciana made some drawings of Ferris, the Rust mascot, also released under the same license:

Drawings: Ferris says hi; Ferris busy at work; Ferris makes a mess; Ferris presents her work.

There can never be enough testing

As you may already know (if you don't, please check these older posts), openQA, the automated testing system used by openSUSE, runs daily tests on the latest KDE software from git. It works well and has uncovered a number of bugs. However, it only tested X11. With Wayland starting to become usable, and some developers even switching to it full time, this was a notable shortcoming. Until now.

Why would openQA not run on Wayland? The reason lies in the video driver. openQA runs its tests in a virtual machine (KVM) which uses the “cirrus” virtual hardware for video display. This virtual video card does not expose an interface to the kernel’s Direct Rendering Manager (DRM), which kwin_wayland requires, causing a crash. The fix is “virtio”, which gives the guest OS a way to properly interface with the host’s hardware. Virtio can be used to emulate many parts of the VM’s hardware, including the video card. That means a working DRM interface, and a working Wayland session!
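As an illustration of that change, shown here as a libvirt domain XML fragment (openQA drives QEMU through its own backend settings, so this is not the exact mechanism it uses), switching the VM's video model from cirrus to virtio looks like this:

```xml
<!-- Before: cirrus exposes no DRM-capable interface, so kwin_wayland crashes -->
<video>
  <model type='cirrus'/>
</video>

<!-- After: virtio exposes a DRM device to the guest, enabling a Wayland session -->
<video>
  <model type='virtio'/>
</video>
```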

However, openQA tests did not handle the case where virtio was needed. This is where one of our previously-sung heroes, Fabian Vogt, comes into play. He added support for running Wayland tests in openQA using virtio. This means that the Argon and Krypton live images now undergo an additional series of tests performed under Wayland. And of course, actual bugs have already been found.

Last but not least, the groundwork has now been laid to do proper Wayland testing on vanilla Tumbleweed as well. openQA should probably be churning out results soon. Exciting times ahead!

Test Driven Development In JavaFX With TestFX

This post has been migrated to my new blog that you can find here:

https://pureooze.com/blog/posts/2017-08-06-test-driven-development-in-javafx-with-testfx/

A dive into how to test JavaFX applications using TestFX.

If you just want to read about how to use TestFX skip to the “Setting Up Our Application” section.

I recently started working at a company that relies very heavily on Java applications. As a primarily Python, JavaScript and C++ developer, I had not used Java for a few years, so given that I would have to get comfortable with it on a day-to-day basis, I decided to work on a small project in Java to get my feet wet.

The Application

While thinking about possible project ideas for this Java application, I browsed through my GitHub repos and saw an all too familiar project: a podcast feed aggregator/player that I had worked on with some friends. The application was written using Qt 5.8 and was capable of fetching RSS feeds for any audio podcast on iTunes, streaming episodes and rendering the HTML contents associated with an individual episode. The links shown below in the right-hand-side widget would also open in the default system browser when clicked by a user.

Screenshot of the original PodcastFeed application. On the left is the podcast list, the middle is the episode list (contents change based on selected podcast) and the right is the description rendering for each individual episode.

While the application looks pretty simple (and it is), the fun of developing it came from working with two friends to learn Qt while building the application in a short amount of time. The reason I considered rewriting this application in Java was that the code we originally wrote had several flaws (keep in mind we developed it very quickly):

  • We wrote zero tests for the application (Yikes!!)
  • Classes were an afterthought; they were not part of the initial design
  • Due to the above, the functions were very tightly coupled
  • The tight coupling made it very difficult to modify core processes of the application (like XML parsing) without breaking other features

The Approach — TDD

Given the flaws mentioned above, it became clear that I would need a different approach to this new application if I wanted any hope of being able to maintain it for a reasonable amount of time. For help with the application design I turned to a book I had been reading recently: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.

Tests are a core part of development, and this book is a great source of knowledge about how to create good tests.

This book is a fantastic read if you want to learn what makes Test Driven Development a powerful tool for Object Oriented development. The authors provide a lot of deep insight into why tests are important, what kinds of tests should be used when, and how to write expressive, easy-to-understand tests. I will spare you the details of TDD, but you can Google it or read the book above to learn more.

The authors continuously stress the importance of End to End tests because of their ability to exercise the full development, integration and deployment processes in an organization. I myself do not have an automated build system such as Atlassian's Bamboo, so my automation capabilities are limited, but I can use Unit tests to test individual functions and UI tests to get as much End to End coverage (between the local application and the podcast data on the Internet) as possible.
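As a sketch of what such a unit test can look like (the channelTitle helper is hypothetical, standing in for the application's XML parsing rather than being its actual code), here is a self-contained test that exercises one piece of feed-parsing logic against a small hard-coded RSS document, with no network involved:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

class FeedTitleTest {
    // Hypothetical helper: extract the channel title from an RSS document.
    // In the real application this would live in its own parsing class.
    static String channelTitle(String rssXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        rssXml.getBytes(StandardCharsets.UTF_8)));
        // The channel <title> is the first title element in document order.
        return doc.getElementsByTagName("title").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // A small, hard-coded feed keeps the unit test fast and independent
        // of the network; End to End coverage is tested separately.
        String rss = "<rss><channel><title>My Podcast</title>"
                + "<item><title>Episode 1</title></item></channel></rss>";
        if (!"My Podcast".equals(channelTitle(rss)))
            throw new AssertionError("channelTitle() is broken");
        System.out.println("feed title test passed");
    }
}
```

Keeping the parsing behind a small function like this is exactly what avoids the tight coupling problem from the original Qt version: the XML handling can change without breaking the rest of the application.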

To read the rest of this post it can be found on my new blog here:

https://pureooze.com/blog/posts/2017-08-06-test-driven-development-in-javafx-with-testfx/


Test Driven Development In JavaFX With TestFX was originally published in Information & Technology on Medium, where people are continuing the conversation by highlighting and responding to this story.