
openSUSE News

Community Plans for Summit in Berlin

The community is headed to Berlin on June 19 for a Community Summit in association with SUSECON, SUSE’s premier annual global technical conference.

Registration for the event is open and the Call for Papers is open until May 29. Partners of SUSE, openSUSE, open source community projects and community members who want to participate are encouraged to register for the summit and submit a talk.

The schedule for the Community Summit will be released on May 30.

There is a Community track and an open source track. Two types of talks can be submitted for the summit: a short talk with a 15-minute limit and a standard talk with a 30-minute limit.

Attendees of SUSECON are also welcome to attend and submit talks. The Community Summit is a free community event that will take place on the last day of SUSECON.

The summit will take place a week before the openSUSE Conference in Nuremberg, so attendees of SUSECON should consider staying for the openSUSE Project’s annual conference and submitting a technical talk. For small- and medium-sized enterprises, there will be a four-hour Open 4 Business networking event held on June 26 next to SUSE’s offices in Nuremberg.

Contact ddemaio (@) opensuse.org if you have any questions concerning the summit.

Nathan Wolf

openSUSE Tumbleweed – Review of the week 2024/08

Dear Tumbleweed users and hackers,

This week, openQA once again blocked the release of a snapshot, protecting some of our users (those using the experimental sdboot/disk-encryption setup). openQA identified an inconsistency in snapshot 0215: systems with this update would fail to unlock their disks. The fix landed in snapshot 0216; openQA confirmed it, and the five snapshots 0216, 0218, 0220, 0221, and 0222 have been published.

The most relevant changes in those releases were:

  • Mozilla Firefox 122.0.1
  • bind 9.18.24
  • dav1d 1.4.0
  • PHP 8.2.16
  • Poppler 24.02.0
  • Shadow 4.14.5
  • Mesa 23.3.6
  • Meson 1.3.2
  • binutils 2.42
  • GCC 14 is now the libgcc provider. GCC 13 is still the default compiler being used
  • Linux kernel 6.7.5
  • pkgconf 2.1.1
  • Node.js 21.6.2
  • Qt 6.6.2
  • Systemd 254.9
  • perl-Bootloader 1.12: no longer written in perl (package name change to happen later)
  • Qemu 8.2.1
  • Lots of packages preparing for RPM 4.20 (%patchN no longer supported) (~ 600 out of 2000 packages fixed this week)
  • RPM: enable reproducible builds by default (bsc#1148824)

In my opinion, that’s quite an impressive list. Soon (and a bit further out), we will be shipping these changes:

  • Ruby 3.2 deprecation: ruby3.2 and all ruby3.2-rubygem packages will be removed from Tumbleweed
  • Python 3.9 deprecation: all python39-* packages are scheduled for removal. We still have Python 3.10, Python 3.11 (the default interpreter), and Python 3.12 in Tumbleweed. Unfortunately, this road will be bumpy, as many Python packages still do not build for Python 3.12 – and unless those builds succeed, the python39-XXX packages will stay lingering in the repository.
  • Systemd 255
  • Many more package fixes to prepare for RPM 4.20
  • KDE Frameworks and Plasma 6
  • dbus-broker: a big step forward; upgrades seem to be an issue that needs to be addressed
  • libxml 2.12.x: slow progress
  • GCC 14: phase 2: use gcc14 as default compiler

openSUSE News

Engage with Uyuni Community Hours

Like many open-source projects, the Uyuni Project has a long tradition of fostering community engagement and open dialogue, which is why those who are interested in configuration management should consider joining the Uyuni Community Hours scheduled for Feb. 24 at 15:00 UTC.

Uyuni Community Hours sessions take place on the last Friday of the month. The sessions offer an invaluable opportunity for both the community and the project’s development team to come together.

During these sessions, participants are presented with the latest developments surrounding Uyuni. This open forum allows the community to ask questions, provide feedback and suggest features or enhancements directly to the development team. This proactive approach helps Uyuni to evolve and align with the needs and expectations of its user base.

The session for this Friday addresses the community’s feedback and needs:

  • Meeting Migration Recap: An overview of recent changes to the meeting platform, enhancing accessibility and participation for the community.

  • What’s New in Uyuni: A detailed exploration of the latest features and improvements in the February 2024 release of Uyuni.

  • Containerized Uyuni: Release Strategy: Insights into the future of Uyuni’s deployment and management within containerized environments.

  • Uyuni Health Check: Running on top of a “supportconfig”: Introduction of a new tool designed to simplify and streamline health checks for Uyuni servers.

  • One Shot Execution of Recurring Actions: A discussion on enhancing task management and execution within the Uyuni framework.

  • Testing, Building, and Publishing the Documentation with GitHub Actions: An innovative approach to maintaining and distributing up-to-date documentation for Uyuni users and developers.

This session is accessible with a detailed agenda and is meant to keep the contributing community well-informed of upcoming topics and discussions. Whether you are a developer, an administrator or an open-source software enthusiast, joining the Uyuni Community Hours offers valuable insights into the project’s progress and future initiatives.

SUSE Community Blog

The Year of Agama – an outlook to the 2024 roadmap

The following article has been contributed by Ancor González Sosa and the YaST team at SUSE. At the end of 2023, we announced Agama 7, a new service-based installer for Linux. That version was the first prototype we could consider to be ‘functional enough’ for our purposes, as it covers areas such […]

The post The Year of Agama – an outlook to the 2024 roadmap appeared first on SUSE Communities.

rickspencer3's Blog

Can you run SUSE on a $65 Laptop?

Yeah, but you might not want to.

Of all of the computers that I have owned over the years, the two that I remember the most fondly are netbooks. I was one of the first to get an eeePC. I put Ubuntu on it, and a friend at work was able to do some firmware cutting to get all of the hardware enabled. I moved on to the next eeePC version, and then on to one of the most useful computers I ever had, a Dell Mini 10v. I actually bought one for myself, and one for each of my kids. If I remember correctly, I was able to buy them through the Dell partner portal or something. I had a black one, but one of my kids got red, and one got blue. The “10” stood for the 10-inch screen, and the “v” meant that it was a “value” model, which, for me, meant that the graphics were supported with open source Intel drivers instead of binary nvidia goo.

The beauty of these computers was that they were small and reasonably light, so I could lug them around to conferences and work trips quite easily. I am sad that I don’t really see anyone making netbooks anymore, but when I saw that you could get an $80 eleven inch computer at Microcenter, and that some folks had some luck installing Linux on it, I decided to go for it. You see, I have some work trips coming up, and I thought it would be nice to have a compact and cheap computer with me for these trips, rather than my expensive high performance laptop that I use as my daily at work.

I’ve always been an early adopter of tech when it reaches new levels of access. For example, my 3D printer was one of the first low-drama, low-cost printers (thanks, Monoprice). I also bought the first sub-$500 laptop, which had about 40 minutes of battery life.

“Wait”, you say? The title says $65, but you said it was $80? That’s right, Evolve III Maestro is on sale for $80 at Microcenter. When I went to look at it, I didn’t see it out on display, so I asked about it. It turns out that they don’t have it out, you have to know to ask, and you can’t see it, you have to buy it “sight unseen.” Yikes. The technician helping me said he could check if there was an open box in the back, and if so, he could let me have a look before I bought it. He brought it out, and sure enough, it was a small, cheap computer. I asked if there was a discount for buying the open box, and sure enough, $15 off. So there you go, I got a computer for $65.

Open Evolve III Box

The device is … erm … bare bones. For example, no USB-C ports. The keyboard is not exactly an IBM Selectric feel. I have a 5 year old 11 inch MacBook Air that I bought for like $2,000, and let’s just say, the build quality of Evolve III is “not quite at the same level.”

So, how did it go getting Leap on there? Getting into the BIOS and adjusting the boot menu to boot from my USB install media was pretty easy, after I did some web searches to find that “delete” is the magic key to get into the BIOS. Unfortunately, the wifi module is too new, so the built-in wifi is not supported by Leap 15.5’s kernel, and of course there is no Ethernet port. After some confusing loops in the installer trying to get it to install without a network connection, I pulled out a random USB hub with an Ethernet adapter that I had in my closet, and then used a USB-C to USB 3 connector. This actually worked.

Janky Ethernet connection

Given the small screen and the low specs, I opted for xfce for the desktop rather than my normal GNOME choice. Having an easy choice of “roles” at install is sweet.

The install wasn’t exactly fast, but it installed just fine. It occurred to me that instead of struggling with the unsupported built-in wifi, I had some wifi dongles lying around and could use one of those. The first one I tried was RALINK something something; I looked it up, and there was a kernel module for it, but no package for it. I figured if I was going to go through that much pain, I would rather focus on getting the internal wifi module to work.

After some epic scrounging through a box of rpis, sensors, motors, and other crap, I found an official rpi dongle, that turned out to be an already supported Broadcom wifi modem, so now I have wifi. The dongle doesn’t see my “5g” network, but it works. I figure that making the real wifi work (and documenting it) will be an irresistible challenge for one of the Linux engineers I am surely going to meet up with in the next few months.

rpi wifi dongle

So that’s it, it’s mostly running. I installed Cheese to test the webcam, it works just fine.

On the audio side, Pulse doesn’t detect the built-in speakers or the built-in microphone, but it works just fine with my Plantronics headset that SUSE sent me. I can’t really be bothered to make the built-in audio work, because I assume that the hardware is going to produce terrible results anyway. There is a 1/8-inch connector as well, but I haven’t tried that or bluetooth yet. The bluetooth dialog works, so I trust that bluetooth works generally. I barely use bluetooth with laptops, so I leave it off.

desktop with pulse audio working

Lastly, the computer came with mini HDMI, and I don’t seem to have an adapter for that around, so I will need to pick one up. Because the computer came with Intel graphics, I am confident that it will “just work”, but you never know.

Fortunately, I wrote a blog post about getting my real powerhouse work machine set up, so I was able to follow those steps to get Chrome, VS Code, and Slack installed and running. Xfce-wise, I read up on how to do some light configuration, especially getting the apps that I use most onto the panel.

All told, I'm thinking this machine might suit for travel. The keyboard is a little awkward, so I miss a lot of characters when I type. When typing in resource-hungry web pages, like Google Docs or Slack, it is very laggy. It really makes me miss the days of real netbooks. It’s great that it’s so cheap, but what I wanted was small. I would rather have had a smaller machine with a better build.

Open Build Service

Build Results Summary Chart Links to Build Results Overview

Federico Mena-Quintero

Rustifying libipuz: character sets

It has been, what, like four years since librsvg got fully rustified, and now it is time to move another piece of critical infrastructure to a memory-safe language.

I'm talking about libipuz, the GObject-based C library that GNOME Crosswords uses underneath. This is a library that parses the ipuz file format and is able to represent various kinds of puzzles.

The words "GNOME CROSSWORDS" set inside a crossword puzzle

Libipuz is an interesting beast. The ipuz format is JSON with a lot of hair: it needs to represent the actual grid of characters and their solutions, the grid's cells' numbers, the puzzle's clues, and all the styling information that crossword puzzles can have (it's more than you think!).

{
    "version": "http://ipuz.org/v2",
    "kind": [ "http://ipuz.org/crossword#1", "https://libipuz.org/barred#1" ],
    "title": "Mephisto No 3228",
    "styles": {
        "L": {"barred": "L" },
        "T": {"barred": "T" },
        "TL": {"barred": "TL" }
    },
    "puzzle":   [ [  1,  2,  0,  3,  4,  {"cell": 5, "style": "L"},  6,  0,  7,  8,  0,  9 ],
                  [  0,  {"cell": 0, "style": "L"}, {"cell": 10, "style": "TL"},  0,  0,  0,  0,  {"cell": 0, "style": "T"},  0,  0,  {"cell": 0, "style": "T"},  0 ]
                # the rest is omitted
    ],
    "clues": {
        "Across": [ {"number":1, "clue":"Having kittens means losing heart for home day", "enumeration":"5", "cells":[[0,0],[1,0],[2,0],[3,0],[4,0]] },
                    {"number":5, "clue":"Mostly allegorical poet on writing companion poem, say", "enumeration":"7", "cells":[[5,0],[6,0],[7,0],[8,0],[9,0],[10,0],[11,0]] },
                ]
        # the rest is omitted
    }
}

Libipuz uses json-glib, which works fine to ingest the JSON into memory, but then it is a complete slog to distill the JSON nodes into C data structures. You have to iterate through each node in the JSON tree and try to fit its data into yours.

Get me the next node. Is the node an array? Yes? How many elements? Allocate my own array. Iterate the node's array. What's in this element? Is it a number? Copy the number to my array. Or is it a string? Do I support that, or do I throw an error? Oh, don't forget the code to meticulously free the partially-constructed thing I was building.

This is not pleasant code to write and test.

Ipuz also has a few mini-languages within the format, which live inside string properties. Parsing these in C is unpleasant at best.

Differences from librsvg

While librsvg has a very small GObject-based API, and a medium-sized library underneath, libipuz has a large API composed of GObjects, boxed types, and opaque and public structures. Using libipuz involves doing a lot of calls to its functions, from loading a crossword to accessing each of its properties via different functions.

I want to use this rustification as an exercise in porting a moderately large C API to Rust. Fortunately, libipuz does have a good test suite that is useful from the beginning of the port.

Also, I want to see what sorts of idioms appear when exposing things from Rust that are not GObjects. Mutable, opaque structs can just be passed as a pointer to a heap allocation, i.e. a Box<T>. I want to take the opportunity to make more things in libipuz immutable; currently it has a bunch of reference-counted, mutable objects, which are fine in single-threaded C, but decidedly not what Rust would prefer. For librsvg it was very beneficial to be able to notice parts of objects that remain immutable after construction, and to distinguish those parts from the mutable ones that change when the object goes through its lifetime.

Let's begin!

In the ipuz format, crosswords have a character set or charset: it is the set of letters that appear in the puzzle's solution. Internally, GNOME Crosswords uses the charset as a histogram of letter counts for a particular puzzle. This is useful information for crossword authors.

Crosswords uses the histogram of letter counts in various important algorithms, for example, the one that builds a database of words usable in the crosswords editor. That database has a clever format which allows quickly answering questions like: which words in the database match the pattern ?OR?? (both WORDS and CORES will match).
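A minimal sketch of that kind of pattern query, with ? standing for any single letter (a linear scan here for illustration; the real database uses a cleverer index than this):

```rust
/// Returns true if `word` matches `pattern`, where `?` matches any single
/// character and every other character must match exactly.
fn matches(pattern: &str, word: &str) -> bool {
    pattern.chars().count() == word.chars().count()
        && pattern
            .chars()
            .zip(word.chars())
            .all(|(p, w)| p == '?' || p == w)
}
```

With this, `matches("?OR??", "WORDS")` and `matches("?OR??", "CORES")` both hold, while `matches("?OR??", "CRATE")` does not.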

IPuzCharset is one of the first pieces of code I worked on in Crosswords, and it later got moved to libipuz. Originally it didn't even keep a histogram of character counts; it was just an ordered set of characters that could answer the question, "what is the index of the character ch within the ordered set?".

I implemented that ordered set with a GTree, a balanced binary tree. The keys in the key/value tree were the characters, and the values were just unused.

Later, the ordered set was turned into an actual histogram with character counts: keys are still characters, but each value is now a count of the corresponding character.

Over time, Crosswords started using IPuzCharset for different purposes. It is still used while building and accessing the database of words; but now it is also used to present statistics in the crosswords editor, and as part of the engine in an acrostics generator.

In particular, the acrostics generator has been running into some performance problems with IPuzCharset. I wanted to take the port to Rust as an opportunity to change the algorithm and make it faster.

Refactoring into mutable/immutable stages

IPuzCharset started out with these basic operations:

/* Construction; memory management */
IPuzCharset          *ipuz_charset_new              (void);
IPuzCharset          *ipuz_charset_ref              (IPuzCharset       *charset);
void                  ipuz_charset_unref            (IPuzCharset       *charset);

/* Mutation */
void                  ipuz_charset_add_text         (IPuzCharset       *charset,
                                                     const char        *text);
gboolean              ipuz_charset_remove_text      (IPuzCharset       *charset,
                                                     const char        *text);

/* Querying */
gint                  ipuz_charset_get_char_index   (const IPuzCharset *charset,
                                                     gunichar           c);
guint                 ipuz_charset_get_char_count   (const IPuzCharset *charset,
                                                     gunichar           c);
gsize                 ipuz_charset_get_n_chars      (const IPuzCharset *charset);
gsize                 ipuz_charset_get_size         (const IPuzCharset *charset);

All of those are implemented in terms of the key/value binary tree that stores a character in each node's key, and a count in the node's value.

I read the code in Crosswords that uses the ipuz_charset_*() functions and noticed that in every case, the code first constructs and populates the charset using ipuz_charset_add_text(), and then doesn't modify it anymore — it only does queries afterwards. The only place that uses ipuz_charset_remove_text() is the acrostics generator, but that one doesn't do any queries later: it uses the remove_text() operation as part of another algorithm, but only that.

So, I thought of doing this:

  • Split things into a mutable IPuzCharsetBuilder that has the add_text / remove_text operations, and also has a build() operation that consumes the builder and produces an immutable IPuzCharset.

  • IPuzCharset is immutable; it can only be queried.

  • IPuzCharsetBuilder can work with a hash table, which turns the "add a character" operation from O(log n) to O(1) amortized.

  • build() is O(n) on the number of unique characters and is only done once per charset.

  • Make IPuzCharset work with a different hash table that also allows for O(1) operations.
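The complexity claim above can be illustrated with the two map types side by side; here BTreeMap stands in for the GTree-style ordered map. This is a sketch for illustration, not libipuz code:

```rust
use std::collections::{BTreeMap, HashMap};

// A GTree-like ordered map: each insert is O(log n), but the keys stay
// sorted, which is what the old IPuzCharset relied on for indexing.
fn histogram_ordered(text: &str) -> BTreeMap<char, u32> {
    let mut h = BTreeMap::new();
    for ch in text.chars() {
        *h.entry(ch).or_insert(0) += 1;
    }
    h
}

// The builder's hash table: each insert is amortized O(1); ordering is
// recovered once, at build() time.
fn histogram_hashed(text: &str) -> HashMap<char, u32> {
    let mut h = HashMap::new();
    for ch in text.chars() {
        *h.entry(ch).or_insert(0) += 1;
    }
    h
}
```

Both produce the same counts; only the ordered map keeps its keys sorted for free, at the cost of the logarithmic insert.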

Basics of IPuzCharsetBuilder

IPuzCharsetBuilder is mutable, and it can live on the Rust side as a Box<T> so it can present an opaque pointer to C.

#[derive(Default)]
pub struct CharsetBuilder {
    histogram: HashMap<char, u32>,
}

// IPuzCharsetBuilder *ipuz_charset_builder_new (void);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_new() -> Box<CharsetBuilder> {
    Box::new(CharsetBuilder::default())
}

For extern "C", Box<T> marshals as a pointer. It's nominally what one would get from malloc().

Then, simple functions to create the character counts:

impl CharsetBuilder {
    /// Adds `text`'s character counts to the histogram.
    fn add_text(&mut self, text: &str) {
        for ch in text.chars() {
            self.add_character(ch);
        }
    }

    /// Adds a single character to the histogram.
    fn add_character(&mut self, ch: char) {
        self.histogram
            .entry(ch)
            .and_modify(|e| *e += 1)
            .or_insert(1);
    }
}

The C API wrappers:

use std::ffi::CStr;

// void ipuz_charset_builder_add_text (IPuzCharsetBuilder *builder, const char *text);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_add_text(
    builder: &mut CharsetBuilder,
    text: *const c_char,
) {
    let text = CStr::from_ptr(text).to_str().unwrap();
    builder.add_text(text);
}

CStr is our old friend that takes a char * and can wrap it as a Rust &str after validating it for UTF-8 and finding its length. Here, the unwrap() will panic if the passed string is not UTF-8, but that's what we want; it's the equivalent of an assertion that what was passed in is indeed UTF-8.
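This behavior can be exercised from pure Rust with a CString; the helper below is hypothetical, not part of libipuz:

```rust
use std::ffi::{c_char, CStr, CString};

// Hypothetical helper mirroring what the wrapper above does: borrow a C
// string as a Rust &str, panicking (assertion-style) on invalid UTF-8.
unsafe fn c_str_to_rust<'a>(ptr: *const c_char) -> &'a str {
    CStr::from_ptr(ptr).to_str().unwrap()
}
```

A CString built on the Rust side provides a valid NUL-terminated pointer to round-trip through it.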

// void ipuz_charset_builder_add_character (IPuzCharsetBuilder *builder, gunichar ch);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_add_character(builder: &mut CharsetBuilder, ch: u32) {
    let ch = char::from_u32(ch).unwrap();
    builder.add_character(ch);
}

Somehow, the glib-sys crate doesn't have gunichar, which is just a guint32 for a Unicode code point. So, we take in a u32, and check that it is in the appropriate range for Unicode code points with char::from_u32(). Again, a panic in the unwrap() means that the passed number is out of range; equivalent to an assertion.
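To illustrate the range check: char::from_u32 returns None exactly for the values that are not Unicode scalar values, i.e. the surrogate range 0xD800 to 0xDFFF and anything above 0x10FFFF (hypothetical helper, for illustration only):

```rust
// A gunichar (raw u32) is only convertible to a Rust `char` if it is a
// Unicode scalar value: not a surrogate, and not above U+10FFFF.
fn is_valid_gunichar(raw: u32) -> bool {
    char::from_u32(raw).is_some()
}
```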

Converting to an immutable IPuzCharset

pub struct Charset {
    /// Histogram of characters and their counts plus derived values.
    histogram: HashMap<char, CharsetEntry>,

    /// All the characters in the histogram, but in order.
    ordered: String,

    /// Sum of all the counts of all the characters.
    sum_of_counts: usize,
}

/// Data about a character in a `Charset`.  The "value" in a key/value pair where the "key" is a character.
#[derive(PartialEq)]
struct CharsetEntry {
    /// Index of the character within the `Charset`'s ordered version.
    index: u32,

    /// How many of this character in the histogram.
    count: u32,
}

impl CharsetBuilder {
    fn build(self) -> Charset {
        // omitted for brevity; consumes `self` and produces a `Charset` by adding
        // the counts for the `sum_of_counts` field, and figuring out the sort
        // order into the `ordered` field.
    }
}
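The omitted body could look roughly like this; a hypothetical sketch based on the description above (sum the counts, sort the characters, assign indices), not the actual libipuz code:

```rust
use std::collections::HashMap;

// These structs mirror the ones shown above; `build()` is a sketch.
#[derive(Default)]
pub struct CharsetBuilder {
    histogram: HashMap<char, u32>,
}

pub struct CharsetEntry {
    pub index: u32,
    pub count: u32,
}

pub struct Charset {
    pub histogram: HashMap<char, CharsetEntry>,
    pub ordered: String,
    pub sum_of_counts: usize,
}

impl CharsetBuilder {
    pub fn add_text(&mut self, text: &str) {
        for ch in text.chars() {
            *self.histogram.entry(ch).or_insert(0) += 1;
        }
    }

    pub fn build(self) -> Charset {
        // Figure out the sort order for the `ordered` field.
        let mut chars: Vec<char> = self.histogram.keys().copied().collect();
        chars.sort_unstable();
        let ordered: String = chars.iter().collect();

        // Add up the counts for the `sum_of_counts` field.
        let sum_of_counts = self.histogram.values().map(|&c| c as usize).sum();

        // Each character's index is its position in the ordered view.
        let histogram = chars
            .iter()
            .enumerate()
            .map(|(i, &ch)| {
                (ch, CharsetEntry { index: i as u32, count: self.histogram[&ch] })
            })
            .collect();

        Charset { histogram, ordered, sum_of_counts }
    }
}
```

Since build() consumes the builder by value, the borrow checker guarantees the mutable stage cannot be touched again once the immutable Charset exists.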

Now, on the C side, IPuzCharset is meant to also be immutable and reference-counted. We'll use Arc<T> for such structures. One cannot return an Arc<T> to C code; it must first be converted to a pointer with Arc::into_raw():

// IPuzCharset *ipuz_charset_builder_build (IPuzCharsetBuilder *builder);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_build(
    builder: *mut CharsetBuilder,
) -> *const Charset {
    let builder = Box::from_raw(builder); // get back the Box from a pointer
    let charset = builder.build();        // consume the builder and free it
    Arc::into_raw(Arc::new(charset))      // Wrap the charset in Arc and get a pointer
}

Then, implement ref() and unref():

// IPuzCharset *ipuz_charset_ref (IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_ref(charset: *const Charset) -> *const Charset {
    Arc::increment_strong_count(charset);
    charset
}

// void ipuz_charset_unref (IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_unref(charset: *const Charset) {
    Arc::decrement_strong_count(charset);
}
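These two functions can be exercised from pure Rust as well; Arc::increment_strong_count and Arc::decrement_strong_count operate directly on the raw pointer without reconstructing the Arc. The round-trip below is an illustrative sketch, not libipuz code:

```rust
use std::sync::Arc;

// Demonstrates the refcounting pattern behind ipuz_charset_ref()/unref():
// bump and drop the strong count on a raw pointer obtained from
// Arc::into_raw, then take ownership of the final reference back.
fn ref_unref_roundtrip() -> i32 {
    let ptr = Arc::into_raw(Arc::new(42i32));
    unsafe {
        Arc::increment_strong_count(ptr); // like ipuz_charset_ref()
        Arc::decrement_strong_count(ptr); // like ipuz_charset_unref()
        *Arc::from_raw(ptr) // reclaim the last reference; freed on drop
    }
}
```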

The query functions need to take a pointer to what really is the Arc<Charset> on the Rust side. They reconstruct the Arc with Arc::from_raw() and wrap it in ManuallyDrop so that the Arc doesn't lose a reference count when the function exits:

// gsize ipuz_charset_get_n_chars (const IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_get_n_chars(charset: *const Charset) -> usize {
    let charset = ManuallyDrop::new(Arc::from_raw(charset));
    charset.get_n_chars()
}

Tests

The C tests remain intact; these let us test all the #[no_mangle] wrappers.

The Rust tests can just be for the internals, similar to this:

    #[test]
    fn supports_histogram() {
        let mut builder = CharsetBuilder::default();

        let the_string = "ABBCCCDDDDEEEEEFFFFFFGGGGGGG";
        builder.add_text(the_string);
        let charset = builder.build();

        assert_eq!(charset.get_size(), the_string.len());

        assert_eq!(charset.get_char_count('A').unwrap(), 1);
        assert_eq!(charset.get_char_count('B').unwrap(), 2);
        assert_eq!(charset.get_char_count('C').unwrap(), 3);
        assert_eq!(charset.get_char_count('D').unwrap(), 4);
        assert_eq!(charset.get_char_count('E').unwrap(), 5);
        assert_eq!(charset.get_char_count('F').unwrap(), 6);
        assert_eq!(charset.get_char_count('G').unwrap(), 7);

        assert!(charset.get_char_count('H').is_none());
    }

Integration with the build system

Libipuz uses meson, which is not particularly fond of cargo. Still, cargo can be used from meson with a wrapper script and a few easy hacks. See the merge request for details.

Further work

I've left the original C header file ipuz-charset.h intact, but ideally I'd like to automatically generate the headers from Rust with cbindgen. Doing it that way lets me check that my assumptions of the extern "C" ABI are correct ("does foo: &mut Foo appear as Foo *foo on the C side?"), and it's one fewer C-ism to write by hand. I need to see what to do about inline documentation; gi-docgen can consume C header files just fine, but I'm not yet sure about how to make it work with generated headers from cbindgen.

I still need to modify the CI's code coverage scripts to work with the mixed C/Rust codebase. Fortunately I can copy those incantations from librsvg.

Is it faster?

Maybe! I haven't benchmarked the acrostics generator yet. Stay tuned!

Hollow Man's Blog

My Igalia Coding Experience 2023 I & II at Wolvic

Wolvic is a fast and secure browser for standalone virtual-reality and augmented-reality headsets, formerly known as Mozilla Firefox Reality.

Project summaries

List of things I have done

PRs opened/handled

Issues opened/helped with

Nathan Wolf

du | Directory Size in the Terminal

This is a letter to future me for the next time I need to look up the disk usage in the terminal. If you find this useful, great, if you think this is lacking and unhelpful, that’s fine too. I don’t always remember how I used various commands in the terminal when there are weeks […]