mdds 1.0.0
A new version of mdds is out, and this time, we’ve decided to bump up the version to 1.0.0. As always, you can download it from the project’s main page.
Here are the highlights of this release.
First off, C++11 is now a hard requirement starting with this release. It’s been four years since the C++11 standard was finalized. It’s about time we made this a new baseline.
Secondly, we now have official API documentation. It’s programmatically generated from the source code documentation via Doxygen, Sphinx and Breathe. Huge thanks to the contributors of the aforementioned projects. You guys make publishing API documentation such a breeze (no pun intended).
This release has finally dropped mixed_type_matrix, which had been deprecated for quite some time in favor of multi_type_matrix.
The multi_type_vector data structure has received some performance optimization thanks to patches from William Bonnet.
Aside from that, there is one important bug fix in sorted_string_map, to fix false positives due to incorrect key matching.
API versioning
One thing I need to note with this release is the introduction of API versioning. Starting with this release, we’ll use API versions to flag any API-incompatible releases. Going forward, anytime we introduce an API-incompatible change, we’ll use the version of that release as the new API version. The API version will only contain major and minor components i.e. API versions can be 1.0, 1.2, 2.1 etc. but never 1.0.6, for instance. That also implies that we will never introduce API-incompatible changes in the micro releases.
The API version will be a part of the package name. For example, this release will have a package name of mdds-1.0 so that, when using tools like pkg-config to query for compiler/linker flags, you’ll need to query for mdds-1.0 instead of simply mdds. The package name will stay that way until we have another release with an API-incompatible change.
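As a quick illustration (a minimal sketch; the exact flags returned depend on your installation), a build system would now query the versioned package name:
# check for the versioned package and get its compile flags
pkg-config --exists mdds-1.0 && pkg-config --cflags mdds-1.0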
A dive into requires with ranges
Disclaimer: I am aware that Python does not allow easy installation of two versions of a library in parallel. It is just an example.
Let’s say you have an upstream requirement: "needs pyfoo >= 1.6.0 but smaller than 2". Sounds easy, right?
Our naive solution will be:
Requires: python-pyfoo >= 1.6.0
Requires: python-pyfoo < 2
So far … so good. If your repository has only one version of pyfoo available, you will probably get the package you want. Now let's say we have more than one version available.
targitter project – about OBS, tars and git
In OBS we use source tarballs everywhere to build rpms (and debs) from.
This has at least two major downsides:
- Storing all old tar files takes up a lot of disk space
- OBS workflows with .tar files and patches are rather different and somewhat disconnected from the git workflows we usually use everywhere else these days. E.g. for the SUSE OpenStack Cloud team we have a “trackupstream” Jenkins job that pulls the latest git version into a tarball once every day.
Fedora already keeps its packaging metadata in git, but only a hash of the tarball.
So as a first step, I used two rather different projects to see how much the space usage would differ. On the slow-moving side I used 20 gtk2 tarballs from the last 5 years, and on the fast-moving side I used 31 openstack-nova tarballs from the Cloud:OpenStack:Master project from the last 5 months.
I used scripts that uncompressed each tarball, added it to a git repo and used git gc to trigger git’s compression.
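A minimal sketch of such a script (the tarballs/ directory and the repository name are placeholders):
# import each tarball as one commit, then let git compress the history
mkdir tarball-repo && cd tarball-repo && git init
for t in ../tarballs/*.tar.*; do
  rm -rf src && mkdir src
  tar xf "$t" -C src --strip-components=1
  git add -A src
  git commit -q -m "import $(basename "$t")"
done
git gc --aggressive
du -sk .git   # compare with the summed size of the original tarballs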
Here are the resulting cumulative size graphs:
The raw numbers after 20 tarballs: for nova the ratio is 89772:7344 = 12.2 and for gtk2 the ratio is 296836:53076 = 5.6
What do you think: would it be worth the effort to use more git in our OBS workflows?
Do we care about being able to reproduce the original tarballs? While this is possible, there are some challenges: differing file ordering, timestamps, file ownership and compression levels.
Or would it be enough if OBS converted a tarball into a signed commit (so it cannot be forged without people being able to notice)?
Do you know of a tool that can unpack tarballs in a way that allows tracking the content as individual files, and can later re-create the original tarball verbatim, so that upstream signatures would still match?
Aarhus: Presentations & the Table Styles in Writer
Before I tell you about the Table Styles in Writer, the feature I was working on, let me share the slides from my presentations. First of all, I presented the work of my GSoC students, Nathan Yee and Krisztian Pinter, during the GSoC panel:
[Three embedded slides; click each slide to see the corresponding presentation.]
Table Styles in Writer
And now - the Table Styles in Writer. It is a feature that we have missed for a long time. In LibreOffice we have Table -> AutoFormat..., but it applies the formatting only once; when you later modify the table (like inserting rows / columns), you basically destroy the look of the table.

During summer 2013, Alex Ivan was working on implementing table styles as a GSoC project. I rebased his work to the current master and made it work again. Unfortunately, the approach there turned out to be very aggressive: the changes first destroyed the Table AutoFormat feature, and then started building the Table Styles. This means that we could merge it only after having import and export for Table Styles - but the GSoC work did not get that far.
I reconsidered the approach, and tried to find a way that implements the core of the Table Styles functionality without destroying the Table AutoFormat - and it worked :-)
I have pushed the results to master. Now, when you apply a Table AutoFormat in Writer, it behaves as a Table Style: when you insert more rows/columns, they keep the correct formatting, and similarly deleting rows/columns or splitting tables keeps the table formatted. Direct formatting is applied over the style too, and you can clear it via "Clear Direct Formatting".
Further work
Loading/saving is not implemented yet though, so once you save a table with a Table Style, it turns into a "normal" AutoFormat: the next time you open the document, you see the formatting, but it is "static", i.e. it works as it did before the Table Styles work.

I hope to get the load/save done before 5.1; and there's also a lot to be improved in the UI of Table Styles - but I believe the current state is already an improvement, and a step in the right direction.
Setting up Spark single node with local disk
Install ownCloud on openSUSE Tumbleweed for Banana Pi M1

There's a tutorial on how to create an openSUSE Tumbleweed SD card with MATE. You can follow that tutorial without installing MATE and keep the system headless, or you can download the image openSUSE-Tumbleweed-BananaPi-headless-20150927.tar.xz (username: root, password: linux) and continue with this tutorial.
Here we'll see how to install ownCloud on openSUSE for Banana Pi M1.
At the end of this tutorial there will be a link to a ready-made image with ownCloud. Please use an SD card of at least 2GB, and either re-partition the SD card or use a USB stick for the ownCloud data directory.
Let's start with the procedure.
1. Install ownCloud from the repository. Using the repository means you get automatic updates.
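The repository has to be added first; as an illustration only (the exact repository URL is an assumption and may differ for your setup):
zypper addrepo http://download.opensuse.org/repositories/isv:/ownCloud:/community/openSUSE_Factory_ARM/ owncloud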
zypper refresh
zypper install owncloud
Don't be scared that this is a factory repository: it is the official one from ownCloud, and it's the only one built for ARM boards.
This will install all necessary files. It will install apache2 and mariadb. At the end, it'll ask you if you want to see information about setting up mariadb.
You can start it using:
rcmysql start
During the first start, an empty database will be created for you automatically.
PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:
'/usr/bin/mysqladmin' -u root password 'new-password'
'/usr/bin/mysqladmin' -u root -h
Alternatively you can run:
'/usr/bin/mysql_secure_installation'
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
Regarding the servers, Apache and MariaDB: if you're the only user of this ownCloud instance and don't have a problem with speed, you can use SQLite. If the instance has more users, it's better to use MariaDB. It's the same with Apache: for lighter installations you can use lighttpd or nginx. Here I used apache2; as for the database, it's up to you. You can either use SQLite or set up a MariaDB database.
To set up the MariaDB database, open a MySQL shell as root (mysql -u root -p) and run the following commands:
CREATE DATABASE owncloudb;
GRANT ALL ON owncloudb.* TO ocuser@localhost IDENTIFIED BY 'dbpass';
2. Edit the file php.ini and change the following settings (you can search for them by pressing Ctrl+W in nano):
upload_max_filesize = 25G
max_file_uploads = 200
max_input_time = 3600
max_execution_time = 3600
session.gc_maxlifetime = 3600
memory_limit = 512M
3. Enable and start the webserver.
systemctl enable apache2.service
systemctl start apache2.service
4. Create the data directory
It is recommended to use a data directory located on another partition of your SD card or on a USB stick. The image requires an SD card of at least 2GB, so there won't be much storage left for your data.
Let's say you have a USB stick mounted under /mnt/USB. Create the directory and give it the right permissions.
mkdir -p /mnt/USB/owncloud_data
chmod -R 0770 /mnt/USB/owncloud_data
chown wwwrun /mnt/USB/owncloud_data
5. Final ownCloud installation.
Open your browser and go to the IP address of your Banana Pi.
Set a username/password for the administrator. For safety, choose a username other than admin, root, administrator or superuser.
Then you have to set the data folder (remember, our example is /mnt/USB/owncloud_data).
Choose whether you want MariaDB or SQLite.
If it's MariaDB, you need the database; if you haven't created it yet, run:
CREATE DATABASE owncloudb;
GRANT ALL ON owncloudb.* TO ocuser@localhost IDENTIFIED BY 'dbpass';
DATABASE: owncloudb
USER: ocuser
PASSWORD: dbpass
HOST: localhost
and you're all set.
You can also download the ready image openSUSE-Tumbleweed-20150930-BananaPi-ownCloud-8.1.3.tar.xz and just set up ownCloud as described in the fifth step.
Tao-makefile-ui
For those who don’t want to use the terminal to compile a program, I’ve created tao-makefile-ui.
Tao-makefile-ui is a simple tool which aims to let you go through each step of compiling a program in a graphical user interface. The program will run autogen.sh, configure and make. It allows you to select a make target and save the target list to a file. It also allows you to change Makefile variables.
There’s a package for openSUSE. You can watch the video demonstrating tao-makefile-ui here.
Thanks!
How to create an openSUSE Banana Pi M1 image with MATE Desktop

I won a Banana Pi from ownCloud. So I tried to install openSUSE.
There are 3 options:
1. According to the wiki page, you can download the image they provide, but there's no kernel support for the Mali400MP2 GPU (who knows if it's fixed by now). No Mali means no GUI. The link to the image is http://download.opensuse.org/ports/armv7hl/tumbleweed/images/.
2. Download the image from http://www.lemaker.org. The GUI used is XFCE.
3. Do it the hard way and build it yourself. I wanted to install MATE. I know, I could have used the lemaker image.
I followed the page HowTo Build Banana Pi Image.
This post has 2 sections. The first is how to create the SD card and the second is how to install MATE.
Create the SD card.
1. Create a folder where you're going to work (this is where you download the necessary files), then change into it.
mkdir WORKSPACE
cd WORKSPACE
2. I'll skip the steps 1-5 from the Build it yourself page. You can download the file:
BananaPi_hwpack.tar.xz
Download also the rootfs openSUSE image file.
openSUSE-Tumbleweed-ARM-JeOS.armv7-rootfs.armv7l-Current.tbz
3. Create the ROOTFS_DIR folder.
mkdir ROOTFS_DIR
4. Decompress the rootfs archive into ROOTFS_DIR.
sudo tar xjf openSUSE-Tumbleweed-ARM-JeOS.armv7-rootfs.armv7l-Current.tbz -C ROOTFS_DIR
5. Now work with the file BananaPi_hwpack.tar.xz. Decompress it.
tar xJf BananaPi_hwpack.tar.xz
6. Copy related files to the directory ROOTFS_DIR
cp kernel/uImage ROOTFS_DIR/boot
Create the file:
with the following content
mmcboot=fatload mmc 0 0x48000000 uImage; if fatload mmc 0 0x43100000 uInitrd; \
then bootm 0x48000000 0x43100000; else bootm 0x48000000; fi
uenvcmd=run mmcboot
bootargs=console=ttyS0,115200 console=tty0 \
disp.screen0_output_mode=EDID:1280x720p60 \
hdmi.audio=EDID:0 root=/dev/mmcblk0p1
Copy the rootfs folder:
7. Now prepare the SD card. Format it (here we assume the SD card is /dev/sdb; you can find the right device with the command cat /proc/partitions).
sudo dd if=/dev/zero of=/dev/sdb bs=1k count=1024
sudo dd if=bootloader/u-boot-sunxi-with-spl.bin of=/dev/sdb bs=1024 seek=8
Create the partition (you can do it using gparted too). With sudo fdisk /dev/sdb, use the following keys:
* Delete partitions: o
* List partitions: p
* Create new partitions: n
* Primary partitions: p
* Partition number: 1
* Press ENTER twice to use the total size of the card
* Write the partition table: w
Format the partition.
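Assuming an ext4 root filesystem (an assumption on my part), formatting would look like:
sudo mkfs.ext4 /dev/sdb1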
8. Copy ROOTFS_DIR onto the SD card (create a local mnt mount point first with mkdir mnt).
sudo mount /dev/sdb1 mnt
sudo cp -a ROOTFS_DIR/* mnt
sudo sync
sudo umount mnt
Now boot the card. The default credentials are:
Username: root
Password: linux
Unfortunately ssh didn't work, so I logged in locally and changed a few things.
First of all I edited the file /etc/ssh/sshd_config.
And found:
Port 22
PasswordAuthentication yes
PermitRootLogin yes
Then I used the command
Rebooted and all set.
You can download the image from openSUSE-Tumbleweed-BananaPi-headless-20150927.tar.xz, copy it to an SD card of at least 2GB and resize the partition.
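A rough sketch of writing the image (the name of the extracted raw image is a placeholder, and /dev/sdX must be replaced with your SD card device):
tar xJf openSUSE-Tumbleweed-BananaPi-headless-20150927.tar.xz
sudo dd if=<extracted-image>.raw of=/dev/sdX bs=4M
sync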
Install MATE Desktop
The first thing you have to do is update the system (zypper up).
The easiest way is to open YaST and go to Software Management.
Then filter by Patterns.
Click to install MATE Desktop Environment and MATE Base system.
After everything is installed, make mate-session the default window manager. On openSUSE this is set in /etc/sysconfig/windowmanager.
Find the line:
DEFAULT_WM = "kde-plasma"
and change it to
DEFAULT_WM = "mate-session"
Then reboot. Log in and type startx.

184 Qt Libraries
Inqlude is based on a collection of manifests. If you would like to add or update a library, simply submit a pull request there. The inqlude tool is used to manage the manifests; it generates the web site, but you can also use it to validate manifests or download libraries. There is also inqlude-client, a C++ client for retrieving library sources via the data on the Inqlude web site. It's pretty handy if you want to integrate a library into your project.
If you want to get a brief introduction into Inqlude, you might want to watch my award winning lightning talk from Qt Dev Days 2013: "News from Inqlude, the Qt library archive". It still provides a pretty accurate explanation of what Inqlude is about and how it works.
A big part of the libraries collected on Inqlude come from KDE as part of KDE Frameworks. We just released KDE Frameworks 5.14: 60 Qt addon libraries which represent the state of the art of Linux desktop development and more.
Inqlude as well as KDE Frameworks are a community effort. Incidentally they both started at a developer sprint at Randa. Getting community people together for intense hacking and discussions is a tremendously powerful catalyst in the free software world. Randa exemplifies how this is done. The initial ideas for Inqlude were created there and last year it enabled me to release the first alpha version of Inqlude. These events are important for the free software world. You can help to make them happen by donating. Do this now. It's very much appreciated.
One more recent change was the addition of a manifest covering all libraries that are part of the Inqlude archive. This is a JSON file aggregating the latest individual manifests. It makes it very easy for tools that don't need to deal with the history of releases to get everything in one go. The inqlude client uses it, and it's a straightforward choice for integration with other tools that would like to benefit from the data available through Inqlude.
At the last Qt contributors summit we had some very good discussions about more integration. Integration with the Qt installer would allow getting third-party libraries the same way you get Qt itself, and integration with Qt Creator would allow finding and using third-party libraries for specific purposes natively in the environment you use to develop your application. One topic which came up was a classification of libraries to provide some information about stability, active development, and support. We will need to look into whether there are some automatic indicators we can offer for activity, or what else we can do to help people find suitable libraries for their projects.
It's quite intriguing to follow what is going on in the Qt world. As an application developer you have a lot of good stuff to choose from, and Inqlude intends to help with that. The web site is there and will continue to be updated, and there are also a number of ideas and plans for how to improve Inqlude to serve this purpose. Stay tuned. Or get involved. You are very welcome.
Deploying Limba packages: Ideas & current status
The Limba project not only has the goal of allowing developers to deploy their applications directly on multiple Linux distributions while reducing duplication of shared resources; it should also make it easy for developers to build software for Limba.
Limba is worth nothing without good tooling that makes it fun to use. That’s why I am working on that too, and I want to share some ideas of how things could work in the future and which services I would like to have running. I will also show what is already working today (and that’s quite something!). This time I look at things from a developer’s perspective, since the last posts on Limba were more end-user centric. If you read on, you will also find a nice video of the developer workflow.
1. Creating metadata and building the software
To make building Limba packages as simple as possible, Limba reuses already existing metadata, like AppStream metadata to find information about the software you want to create your package for.
To ensure upstreams can build their software in a clean environment, Limba makes using one as simple as possible: The limba-build CLI tool creates a clean chroot environment quickly by using an environment created by debootstrap (or a comparable tool suitable for the Linux distribution), and then using OverlayFS to have all changes to the environment done during the build process land in a separate directory.
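As a generic illustration of the OverlayFS mechanism (not Limba's actual code; all paths are placeholders):
# lowerdir = read-only base chroot (e.g. created by debootstrap)
# upperdir = captures every change made during the build
mkdir -p /srv/build/upper /srv/build/work /srv/build/merged
sudo mount -t overlay overlay \
  -o lowerdir=/srv/chroots/base,upperdir=/srv/build/upper,workdir=/srv/build/work \
  /srv/build/merged
# run the build inside the merged view; all changes land in /srv/build/upper
sudo chroot /srv/build/merged /bin/sh -c 'cd /build && make'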
To define build instructions, limba-build uses the same YAML format TravisCI uses as well for continuous integration. So there is a chance this data is already present as well (if not, it’s trivial to write).
In case upstream projects don’t want to use these tools, e.g. because they have well-working CI already, then all commands needed to build a Limba package can be called individually as well (ideally, building a Limba package is just one call to lipkgen).
I am currently planning “DeveloperIPK” packages containing resources needed to develop against another Limba package. With that in place and integrated with the automatic build-environment creation, upstream developers can be sure the application they just built is built against the right libraries as present in the package they depend on. The build tool could even fetch the build-dependencies automatically from a central repository.
2. Uploading the software to a repository
While everyone can set up their own Limba repository (and the limba-build repo command will help with that), there are lots of benefits in having a central place where upstream developers can upload their software.
I am currently developing a service like that, called “LimbaHub”. LimbaHub will contain different repositories distributors can make available to their users by default, e.g. there will be one with only free software, and one for proprietary software. It will also later allow upstreams to create private repositories, e.g. for beta-releases.
3. Security in LimbaHub
Every Limba package is signed with the key of its creator anyway, so in order to get a package into LimbaHub, one needs to get their OpenPGP key accepted by the service first.
Additionally, the Hub service works with a per-package permission system. This means I can e.g. allow the Mozilla release team members to upload a package with the component-ID “org.mozilla.firefox.desktop” or even allow those user(s) to “own” the whole org.mozilla.* namespace.
This should prevent people hijacking other people’s uploads accidentally or on purpose.
4. QA with LimbaHub
LimbaHub should also act as guardian over ABI stability and general quality of the software. We could for example warn upstreams that they broke ABI without declaring that in the package information, or even reject the package then. We could validate .desktop files and AppStream metadata, or even check if a package was built using hardening flags.
This should help both developers, who can improve their software, and users, who benefit from that effort. In case something really bad gets submitted to LimbaHub, we always have the ability to remove the package from the repositories as a last resort (which might trigger Limba to warn the user that they won't receive updates anymore).
What works
Limba, LimbaHub and the tools around them are developing nicely; no big issues have been encountered so far.
That’s why I made a video showing how Limba and LimbaHub currently work together:
Still, there is a lot of room for improvement – Limba has not yet received enough testing, and LimbaHub is merely a proof-of-concept at the moment. Also, lots of high-priority features are not yet implemented.
LimbaHub and Limba need help!
At the moment I am developing LimbaHub and Limba alone, with only occasional contributions from others (which are amazing and highly welcome!). So, if you like Python and Flask and want to help develop LimbaHub, please contact me – the LimbaHub software could benefit from a more experienced Python web developer than I am (and maybe having a designer look over the frontend later makes sense as well). If you are not afraid of C and GLib, and like to chase bugs or play with building Limba packages, consider helping Limba development!



