
It's HackWeek @SUSE Again!

We Have Just Started!

This is already our fourteenth HackWeek at SUSE. HackWeek is SUSE's take on a hackathon and usually lasts one full week. This time it is actually six days, overlapping by one day with the openSUSE Conference in Nürnberg. See more, including the current projects, at the HackWeek page.

SUSE Prague at the HackWeek XIV Opening Event
Although we started on Friday and it is only Monday, we already have quite a few results to show...

YaST Dialog Editor

YaST guys discussing the idea
Ladislav came up with the idea of a WYSIWYG editor for YaST dialogs. Right now, you have to design a dialog, write its exact definition in the code and then run it to see the result. His project enables everyone to change a dialog on the fly directly in a running YaST. You can already open the editor, delete widgets or edit their properties. See more at the project itself.

We were already able to open a Stylesheet Editor by pressing Ctrl-Shift-Alt-S in YaST (and also in the installer) and edit the style in use. Ladislav's project applies a similar approach to the dialog content.

Rooms management for Janus/Jangouts using Salt

Jangouts welcome screen
We use Jangouts every day for our SCRUM stand-up calls and other meetings. More and more teams at SUSE want to do the same, but Jangouts does not yet allow rooms to be created on demand - an administrator has to adjust the configuration manually and reload the service.

Ancor came up with the idea of using Salt for Jangouts room management, and Pablo took it and already implemented a working solution which talks directly to the Janus REST API.
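The project itself uses Salt states for this; below is only a minimal sketch of what creating a room through the Janus HTTP/REST API (videoroom plugin) can look like. The endpoint path, port 8088, the room id and the field names are assumptions based on the Janus documentation, not the project's actual code:

# sketch.py - create a Jangouts room via the Janus HTTP API (videoroom plugin);
# endpoint, port and field names are assumptions, check your Janus version's docs
import uuid
import requests

JANUS = "http://localhost:8088/janus"  # assumed default Janus HTTP endpoint

def janus_post(url, payload):
    payload["transaction"] = uuid.uuid4().hex  # every Janus request needs a transaction id
    response = requests.post(url, json=payload)
    response.raise_for_status()
    return response.json()

# 1) create a session, 2) attach a videoroom plugin handle, 3) create the room
session = janus_post(JANUS, {"janus": "create"})["data"]["id"]
handle = janus_post("{}/{}".format(JANUS, session),
                    {"janus": "attach", "plugin": "janus.plugin.videoroom"})["data"]["id"]
janus_post("{}/{}/{}".format(JANUS, session, handle),
           {"janus": "message",
            "body": {"request": "create",
                     "room": 4223,                  # hypothetical room id
                     "description": "team standup",
                     "publishers": 6}})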

Speeding-Up The Installer

13 seconds faster!
We have been using openQA for continuous integration testing for quite some time and we keep writing more and more tests. This means we have to run them, and since time is limited, every second counts - especially a saved one, and even more so when you multiply the result by several hundred test runs per week. It should also save resources and make us greener in the end.

Josef decided to speed up the installer by skipping things that are not necessary. He has already saved 13 seconds (from 58 seconds down to 45) while writing and adjusting the bootloader settings (here and here).

Stay tuned for the update! Or contribute to our projects :)


Two in one

As you may know (unless you’ve been living in Alpha Centauri for the past century) the openSUSE community KDE team publishes LiveCD images for those willing to test the latest state of KDE software from the git master branches without having to break machines, causing a zombie apocalypse and so on. This post highlights the most recent developments in the area.

Up to now, we have had three different media, depending on the base distribution (stable Leap or ever-rolling Tumbleweed) and whether you wanted to take the safe road (X11 session) or the dangerous path (Wayland):

  • Argon (Leap based)
  • Krypton (Tumbleweed based, X11)
  • Krypton Wayland (Tumbleweed based, Wayland)

So far we have been trying to build new images in sync with updates to the Unstable KDE software repositories. With the recent switch to Qt 5.7, the images broke. That's when Fabian Vogt stepped up and fixed a number of outstanding issues with the images as well.

But that wasn't enough. It was clear that a separate image for Wayland perhaps wasn't required (after all, you could always start a Wayland session from SDDM). So perhaps it was time to merge the two…

Therefore, from today, the Krypton image will contain both the X11 session and the Wayland session. You can select which session to use from the SDDM screen. Bear in mind that if you use a virtual machine like QEMU, you may not be able to start Wayland from SDDM due to this bug.

Download links:

Should you want to use these live images, remember where to report distro bugs and where to report issues in the software. Have a lot of fun!


Scale your Flask Python Web Application with Docker and HAProxy

For the last few months I have been using Docker quite intensively for my projects and I really like it. In this post I will describe the steps needed to deploy a minimal Flask Python application and scale it using docker-compose and HAProxy.

So, here is a diagram with what I plan to deploy.

ServicesDiagram

I am using a Vultr compute instance with Docker on CentOS 7. After the instance is up and running, let's test whether Docker is working:

vultr-running

# docker run hello-world

docker-initial-test

  • it seems that it is working, so let's install docker-compose
# yum install python-pip
# pip install docker-compose
  • if we try to run the "docker-compose" command, it will give us an error

docker-compose-install-error

  • we need to install one more python module to have everything working
# pip install --upgrade backports.ssl_match_hostname
  • test the installation by running a simple hello world using docker-compose. We have to create a docker-compose.yml file with the following content:
my-test:
    image: hello-world
  • let's bring the container up
# docker-compose up

docker-compose-example-output

  • good, it works perfectly. Time to prepare the files for our Flask Python application and HAProxy.

  • Dockerfile - this file is needed to pull a minimal Flask application from my GitHub repository and run it as a Docker container (a sketch of what such an app might look like follows the Dockerfile)

# Base image with the tools needed to fetch and run the app
FROM ubuntu:14.04
RUN apt-get update \
    && apt-get install -y python-pip git
# Pull the minimal Flask application and install its Python dependencies
WORKDIR /myapp
RUN git clone https://github.com/vioan/minflask.git .
RUN pip install -r requirements.txt
# The app listens on port 5000 inside the container
EXPOSE 5000
CMD python app.py
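The minimal Flask app itself lives in the minflask repository and is not shown in this post; judging from the "Hello from FLASK. My Hostname is: ..." responses further down, app.py could look roughly like the following sketch (an assumption for illustration, not necessarily the exact code in the repository):

# app.py - rough sketch of the minimal Flask application (not necessarily
# identical to the code in the minflask repository)
import socket
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Report the container hostname, so we can later see which container
    # served the request once HAProxy round-robins between instances.
    return "Hello from FLASK. My Hostname is: {}\n".format(socket.gethostname())

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside the container on port 5000
    app.run(host="0.0.0.0", port=5000)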
  • let's try to see if our app is really running inside the docker container

  • for that, we first need to build the image

# docker build .
  • list our available images and see if the new one was created
# docker images

docker-list-built-image

  • run the newly created image by passing its "IMAGE ID" to the docker run command. We make sure that port 5000, on which the app listens inside the container, is mapped to port 80 on the host
# docker run -it -p 80:5000 d73fa1b9c33

docker-run-image

  • open a browser and type the host IP address

docker-browser-output-test

  • it's working. Let's move on ...

  • docker-compose.yml - the file which puts together and links the different services/containers (in our case the Python app and the HAProxy load balancer)

# build the Flask app image from the Dockerfile above
pyapp:
  build: .
# dockercloud/haproxy balances requests across the linked pyapp containers
loadbalancer:
  image: 'dockercloud/haproxy:latest'
  links:
    - pyapp
  volumes:
    # the image needs access to the Docker socket to inspect the containers
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - 80:80
  • time to run our services together:
# docker-compose up -d
  • check if they are running
# docker-compose ps

docker-compose-test-noscalling2

  • yupiiiii, it's working. But wait, we have only one container corresponding to our python application. Let's scale our python application (e.g. run 5 instances)
# docker-compose scale pyapp=5

docker-scalling1

  • let's check if indeed we have 5 containers for our python app and one container for our load balancer
# docker-compose ps

docker-scalling2

  • yes, we have all the services we expected. But there is one more thing to do. After scaling our python application we need to inform HAProxy about the new containers. You can do that in different ways, but I did it like this:
# docker-compose stop loadbalancer
# docker-compose rm loadbalancer
# docker-compose up -d
  • so, basically, stopping, removing and restarting the load balancer container. docker-compose knows that the other services don't have to be updated and will recreate only our loadbalancer
microservices_pyapp_4 is up-to-date
microservices_pyapp_2 is up-to-date
microservices_pyapp_3 is up-to-date
microservices_pyapp_5 is up-to-date
microservices_pyapp_1 is up-to-date
Creating microservices_loadbalancer_1
  • now we can test whether HAProxy is distributing the requests in round-robin mode. I used curl for that, but of course you can use your browser as well and refresh the page to see that the hostname changes, which means each request is served by a different container.
[root@vioan microservices]# for request in `seq 1 5`; do curl http://45.32.187.37; done
Hello from FLASK. My Hostname is: 6f96511879bb
Hello from FLASK. My Hostname is: d860bfd42116
Hello from FLASK. My Hostname is: 1cd07155d7cd
Hello from FLASK. My Hostname is: e99bb7d73451
Hello from FLASK. My Hostname is: 5a9631644e16
[root@vioan microservices]#
  • that's all. It was not difficult, but this is indeed just a minimal application without even a database. Perhaps in a following post I will describe how to build a more complete infrastructure using docker-compose

Note: Here is my repository with the files: GitHub

Klaas Freitag

ownCloud Client 2.2.x

A couple of weeks ago we released another significant milestone of the ownCloud Client, version 2.2.0, followed by two small maintenance releases (download). I'd like to highlight some of the new features and the changes we have made to improve the user experience:

Overlay Icons

Overlay icons for the various file managers on our three platforms have existed for quite some time, but it turned out that their performance was not up to the mark for big sync folders. The reason was mainly that too much communication was happening between the file manager plugin and the client. When asked about the sync state of a single file, the client had to jump through quite a few hoops to retrieve the required information. That involved not only database access to the sqlite-based sync journal, but also file system interaction to gather file information. Not a big deal if it's only a few files, but if the user syncs huge amounts, these efforts add up.

This becomes especially tricky for the propagation of changes up the file tree. Imagine a sync error happens in foo/bar/baz/myfile. What should happen is that a warning icon appears on the icon for foo in the file manager, indicating that a problem exists within this directory. The complexity of the existing implementation was already high, and adding this extra functionality would have reduced the reliability of the code even further.

Jocelyn was keen enough to refactor the underlying code, which we call the SocketApi. Starting from the basic assumption that all files are in sync, so that the code only has to care about files that are new, changed, erroneous, ignored or similar, the amount of data to keep is greatly reduced, which makes processing much faster.

Server Notifications

On the ownCloud server, there are situations where notifications are created to make the user aware of things that have happened.

One example is federated shares:

If somebody shares a folder with you, you previously had to acknowledge it through the web interface. This explicit step is a safety net to avoid people sharing tons of gigabytes of content and filling up your disk.

notifications

With 2.2.x, you can acknowledge the share right from the client, saving you the round trip to the web interface to check for new shares.

Keeping an Eye on Word & Friends

Microsoft Word and other office tools are rather hard to deal with in syncing, because they lock the files being worked on very strictly. So strictly that the sync client is not even allowed to open the file for reading, which would be required to sync it.

As a result the sync client needs to wait until Word unlocks the file, and then continue syncing.

For previous versions of the client, this was hard to detect and worked only if other changes happened in the same directory where the file in question resides.

With 2.2.0 we added a special watcher that keeps an eye on the office documents Word and friends are blocking. Once the files are unlocked, the watcher starts a sync run to get the files up to the server, or down from it.
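The client itself is written in C++/Qt, so the following is only a conceptual sketch of the idea described above, not the actual implementation: poll whether the locked document can be opened again, and kick off a sync run once it can.

# Conceptual sketch only - NOT the ownCloud client's actual code.
# It illustrates the watcher idea: wait until the office document can be
# opened again (i.e. Word released its lock), then trigger a sync run.
import time

def wait_until_unlocked(path, poll_seconds=5):
    while True:
        try:
            with open(path, "rb"):
                return  # the file could be opened, so the lock is gone
        except OSError:
            time.sleep(poll_seconds)  # still locked, check again later

def watch_and_sync(path, start_sync_run):
    wait_until_unlocked(path)
    start_sync_run(path)  # hand the now-unlocked file to the sync engine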

Advances on Desktop Sharing

Sharing has been further integrated and has received several UX fixes and bugfixes. There is more feedback when performing actions, so you know when your client is waiting for a response from the server. The client now also respects more data returned from the server, for example if you have apps enabled on the server that limit the expiration date.

Furthermore, we now better respect the share permissions granted. This means that if somebody shares a folder with you without create permissions and you want to re-share this folder in the client, you won't get the option to share it with delete permissions. This avoids errors when sharing and is more in line with how the whole ownCloud platform handles re-sharing. We also adjusted the behavior for federated re-shares with the server.

Please note that to take full advantage of all improvements you will need to run at least server version 9.0.

Have fun!


OpenStack Summit Barcelona: Call for Speakers

Barcelona (Spain) will be the venue for the next OpenStack Summit this October (October 25-28, 2016). The "Call for Speakers" period has started and you can submit your presentation here. The deadline for submissions is July 13, 2016 at 11:59 pm PDT / July 14, 8:59 am CEST.

In May there was a long discussion in the community about changing the selection process for the presentations, especially about whether there should be a vote at all. The process has changed slightly compared to Austin; check out the current selection process.


Deutsche OpenStack Tage 2016


Tomorrow the German OpenStack Days 2016 (Deutsche OpenStack Tage) start in Cologne. I will give a talk on Ceph and OpenStack titled "OpenStack at 99,999% availability with Ceph".


It seems the event is already completely sold out, but as far as I know the presentations (mostly held in German) will be recorded and made available after the conference. If you already have a ticket, have an interesting OpenStack conference in Cologne!


A few words about the future of the Limba project

I have wanted to write this blogpost since April, and even announced it in two previous posts, but never got around to actually writing it until now. And with the recent events in Snappy and Flatpak land, I cannot defer this post any longer (unless I want to answer the same questions over and over on IRC ^^).

As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article, which explains the project better than I could 😉 ). It is a spiritual successor to the Listaller project, which had been in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it's natural to ask what the project's next steps will be.

Meeting with the competition

At the last FOSDEM and at the GNOME Software sprint in April this year, I met with Alexander Larsson and we discussed the rather unfortunate situation we had gotten into, with Flatpak and Limba being in competition.

Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him on Glick and Glick2. None of these projects went anywhere. Around the time I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. It wasn't at all clear back then that XdgApp, later renamed to Flatpak, would get huge backing from GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system.

The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules which can be updated independently. Flatpak, on the other hand, allows just one static runtime and requires everything that is not already in the runtime to be bundled with the actual application. So, while a Limba bundle might depend on multiple other bundles, Flatpak bundles have only one fixed dependency on a runtime. A compromise between these two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was not possible either.

Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now that Flatpak and Snappy are available, I am basically competing with Canonical and Red Hat, who can make much more progress much faster than I can as a single developer. Also, Flatpak's bigger base of contributors compared to Limba is a clear sign of which project the community favors.

Furthermore, I started the Listaller and Limba projects to scratch an itch. When I was new to Linux, it was very annoying to see some applications only being made available in compiled form for one distribution, and sometimes one that I didn't use. Getting software was incredibly hard for me as a newbie, and using the package manager was also unusual back then (no software center apps existed, only package lists). If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability.

So, if this issue now gets solved well by someone else, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue before mine does, which is fine, because this longstanding problem will finally be solved. And that's the thing I actually care most about.

I still think Limba is the superior solution for app distribution, but it is also the one that is most complex and requires additional work by upstream projects to use it properly. Which is something most projects don’t want, and that’s completely fine. 😉

And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor independent project.)

Moving forward

So, what I will do now is not stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web applications, and therefore has a slightly wider scope than Flatpak, this could make sense.

In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc. – those will receive more attention 😉

An integral part of Limba is a web service called “LimbaHub” to accept new bundles, do QA on them and publish them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side-by-side (and if useful, maybe also support AppImageKit and Snappy). But this project is still on the drawing board.

Let’s see 🙂

P.S: If you come to Debconf in Cape Town, make sure to not miss my talks about AppStream and bundling 🙂

Jos Poortvliet

Migrating to Nextcloud 9

Now that Nextcloud 9 is out, many users are already interested in migration so I'd like to address the why and how in this blog post.

Edit: Nextcloud 10 is out with loads of unique features. We now also have a client! You can find out about client account migration here.

Why migrate

Let's start with the why. First, you don't have to migrate yet. This release, as well as at least the upcoming releases of ownCloud and Nextcloud, will be compatible, so you'll be able to migrate between them in the future. We don't want to break compatibility if we can avoid it!

Of course, right now Nextcloud 9 has some extra features and fixes and future releases will introduce other capabilities. With regards to security, we have Lukas Reschke working for us. However, we promise that for the foreseeable future we will continue to report all security issues we find to upstream in advance of any release we do. That means well ahead of our usual public disclosure policy, so security doesn't have to be a reason for people to move.

Edit: Nextcloud 10 comes with far more features on top of this. For Nextcloud 11 we already have an ambitious road map, but we'll still enable migration from ownCloud 9.1 to Nextcloud 11 so you can migrate at your leisure!

Migration overview

If you've decided to migrate there are a number of steps to go through:
  • Make sure you have everything set up properly and do a backup
  • Move the old ownCloud install, preserving data and config
  • Extract Nextcloud, correct permissions and put back data and config
  • Switch data and config
  • Trigger the update via command line or the web UI
Note that we don't offer packages. This has been just too problematic in the past and while we might offer some for enterprise distributions, we hope to work together with distributions to create packages for Nextcloud 9 and newer releases. Once that is done we will of course link to those on our installation page.

There are other great resources besides this blog, especially this awesome post on our forums which gives a great and even more detailed overview of a migration with an Ubuntu/NGINX/PHP7/MariaDB setup.

Edit: With regard to packages, there are now packages for CentOS and Fedora and other distributions will likely follow soon. See our packages repository if you want to help!

Preparation

First, let's check if you're set up properly. Make sure:
  • You are on ownCloud 8.2.3 or later
  • You have all the dependencies installed
  • Your favorite apps are compatible (with ownCloud 9), you can check this by visiting the app store at apps.owncloud.com
  • You made a backup
Once that's all done, time to move to the next step: cleaning out the old files.

Removing old files

In this step, we'll move the existing installation preserving the data and configuration.
  • Put your server in maintenance mode. Go to the folder ownCloud is installed in and execute sudo -u www-data php occ maintenance:mode --on (www-data has to be your HTTP user). You can also edit your config.php file and change 'maintenance' => false, to 'maintenance' => true,.
  • Now move the data and config folder out of the way. It's best to go to your webserver folder (something like /var/www/htdocs/) and do a mv owncloud owncloud-backup

Deploying Nextcloud

Now, we will put Nextcloud in place.
  • Grab Nextcloud from our download page or use wget: wget https://download.nextcloud.com/server/releases/nextcloud-9.0.50.zip
    • Optional: you can verify that the download completed correctly using our MD5 checksum, see this page. Run md5sum nextcloud-9.0.50.zip. The output has to match this value: 5ae47c800d1f9889bd5f0075b6dbb3ba
  • Now extract Nextcloud: unzip nextcloud-9.0.50.zip or tar -xvf nextcloud-9.0.50.tar.bz2
  • Put the config.php file in the right spot: cp owncloud-backup/config/config.php nextcloud/config/config.php
  • Now change the ownership of the files to that of your webserver, for example chown wwwrun:www * -R or chown www-data *
  • If you keep your data/ directory in your owncloud/ directory, copy it to your new nextcloud/ [*]. If you keep it outside of owncloud/ then you don't need to do anything as its location is in config.php.

* Note that if you have been upgrading your server from before ownCloud 6.0 there is a risk that moving the data directory causes issues. It is best to keep the folder with Nextcloud named 'owncloud'. This also avoids having to change all kinds of settings on the server, so it might be a wise choice in any case: rename the nextcloud folder to owncloud.

Now upgrade!

Next up is restarting the webserver and upgrading.
  • Restart your webserver. How to do so depends on your distribution, for example rcapache2 restart on openSUSE or service apache2 restart on Ubuntu.
  • You can now trigger the update either via occ on the command line or via the web UI. The command line is the most reliable solution. Run it as sudo -u apache php occ upgrade from the nextcloud folder. This has to run as the user of your webserver, which can also be www-data or www, for example.
  • Then, finally, turn off maintenance mode: sudo -u www-data php occ maintenance:mode --off
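Putting the steps above together, the whole sequence condenses to roughly the following sketch. It assumes the layout used in this post: an Apache setup with the www-data user, the install in /var/www/htdocs/, the data/ directory inside it, and the renaming trick from the note above. Adjust paths, user and the restart command to your distribution before running anything.

# condensed recap of the steps above - adjust to your own setup first
cd /var/www/htdocs
sudo -u www-data php owncloud/occ maintenance:mode --on
mv owncloud owncloud-backup
wget https://download.nextcloud.com/server/releases/nextcloud-9.0.50.zip
unzip nextcloud-9.0.50.zip
cp owncloud-backup/config/config.php nextcloud/config/config.php
cp -a owncloud-backup/data nextcloud/
chown -R www-data:www-data nextcloud
mv nextcloud owncloud            # keep the old folder name, see the note above
service apache2 restart          # rcapache2 restart on openSUSE
sudo -u www-data php owncloud/occ upgrade
sudo -u www-data php owncloud/occ maintenance:mode --off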

That's it!

At this point, you'll see the fresh blue of a Nextcloud server! If you encounter any issues with upgrading, discuss them on our forums.

Jos Poortvliet

On Open Source, forking and collaboration: Nextcloud 9 is here!

The nature of Open Source is, in a sense, dualistic. It encourages collaboration through the threat of not collaborating--a fork. When I was approached by Struktur AG to join them to work on ownCloud and Spreed, I loved the idea. I always wanted an ecosystem around ownCloud, which is why I pushed things forward like our collaboration with Western Digital Labs and Collabora, matters of no business interest to the company I worked for. I believe a stronger ecosystem benefits everybody.

Ecosystems and confidence

A major point which makes open source so beneficial for businesses is that it puts pressure on suppliers to offer great service and support. If they don't, another supplier can enter the market and out-service them. Tight control over the community through things like CLAs and trademarks makes it hard to grow such an ecosystem and negates some of the benefits of open source for customers.

Luckily, in the end, the AGPL license protects the future of a project, even if its steward clings to power. From conversations with Niels early on, it was clear to me that he has a very different and very confident view on his ability to run a real open source company. His history at Red Hat results in frequent comparisons. And indeed, Red Hat runs things the right way, even supporting a project like CentOS which many other companies would consider an existential threat to their business model. Just as their investment in opensource.com shows: they aim to grow the pie, not grab a bigger slice.

former 'enterprise feature' done right (and open)


I'm super proud and happy that we could announce today, with our first release, that Nextcloud will not be doing proprietary code. No closed apps means no inherent conflict between sales and community management/developers within the company, but a full alignment in one simple direction: servicing the customer.

And if you wonder about the collaboration with Collabora/LibreOffice Online and with Western Digital: yes, of course, we'll go full steam ahead and will facilitate where we can! No, we're not afraid that either would 'compete' with us: both will complement and strengthen the ecosystem. So we will work together.

Why? Because the core contributors and founder shared an ambitious goal for Nextcloud: be THE solution for privacy and security.

Jos Poortvliet

BBQ and forking

Last night we had our first Nextcloud BBQ! Despite some rain, it was a good start to something that should become a tradition. ;-)

It was great to have conversations with the contributors who visited us as well as some downtime with the team. It's been a busy time since we announced our new endeavor. And it continues to be awesome to get so many supportive comments and feedback on what we're up to! People are excited about our open strategy and appreciate the fact that there is a solid company behind it. The flood of incoming requests for information and support from customers presents a good problem. So let me point out, again, that we're hiring!