

Live Streaming to HTML5?

We have our mice TV now streaming our colony of mus minutoides at the canonical URL http://mice.or.cz/ but it would be nice if you could watch them in your web browser (without flash) instead of having to open a media player for the purpose.

I gave that some serious prodding. We still use vlc with the same config as in the original post (mp4v codec + mpegts container). Our video source is an IP cam producing mp4v via rtsp, and an important constraint is CPU usage, as it runs on my multi-purpose server (the current setup consumes 10% of one CPU core). We’d like things to work primarily in Debian’s chromium and iceweasel.

It seems that in the HTML5 world, you have these basic options:

  • MP4/H264 in MP4 – this *does not work* with live streaming because you need to make sure the browser receives a correct header with metadata which normally occurs only at the top of the file; it might work with some horrible custom code hacks but nothing off-the-shelf
  • VP8/VP9 in WebM – this works, but encoding consumes between 150% and 250% of a CPU core even at low bitrates; this may be okay for dedicated streaming servers but is completely out of the question for me
  • Theora in Ogg – this almost works, but the stream stutters every few seconds (or slips into endless buffering), making it pretty hard to watch; apparently some keyframes are lost and Theora homepage gives a caveat that Ogg encoding is broken in VLC; the CPU usage is about 30%, which would have been acceptable

That’s it for the stock video tag formats, apparently. There are two more alternatives:

  • HTTP Live Stream (HLS) has no native support in browsers outside of mobile, might work with a hack like https://github.com/RReverser/mpegts but you may as well use MSE then
  • Media Source Extensions (MSE) seem to allow implementing demuxing of custom containers in JavaScript for any codec, which sounds hopeful if we’d just like to pass mp4v (or h264) through. The most popular such container is DASH, which seems to be all about fragmenting video into smaller HTTP requests with per-fragment bitrate negotiation, but it is still completely codec agnostic. As for Firefox, it needs an almost-latest version. Media players support DASH too.

So far, the most promising courses of action seem to be:

  • Media server nginx-rtmp-module (in my case with a pull towards the ipcam’s rtsp) with mpeg-dash output and a dash.js based webpage (a config sketch follows this list). I might have misunderstood something, but it might actually just work (assuming that the bitrate negotiation could always end up just choosing the ipcam’s fixed bitrate; something very low is completely sufficient anyway).
  • Debug libogg + libtheora to find out why it produces corrupted streams – have fun!
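
I haven’t tested this yet, but the nginx side of that first option would look roughly like the sketch below. The directives are the ones nginx-rtmp-module documents (live, exec_static, dash, dash_path); the camera URL, stream name and paths are placeholders, and the rtsp ingest goes through ffmpeg because the module’s own pull directive speaks rtmp rather than rtsp:

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # re-publish the camera's rtsp stream into this application
            exec_static ffmpeg -rtsp_transport tcp -i rtsp://camera.local/stream -c copy -f flv rtmp://localhost/live/mice;
            dash on;
            dash_path /var/tmp/dash;
        }
    }
}

http {
    server {
        listen 8080;
        location /dash {
            root /var/tmp;
            add_header Access-Control-Allow-Origin *;
        }
    }
}

The webpage side would then just be a video element wired up with dash.js, something like dashjs.MediaPlayer().create().initialize(videoElement, '/dash/mice.mpd', true), fetching the manifest and fragments over plain HTTP.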


The simplest package ever

From time to time I need to do some tweaking of our packages, and then I need a simple package that builds lightning-fast in order to test stuff.

This is the package I am using

https://build.opensuse.org/package/show/home:jordimassaguerpla:test/simple

Feel free to "fork" it.


Battleship – Sinclair ZX Spectrum

The first computer in my family was a ZX Spectrum. I think I was about six when my father bought it. Of course, I used that computer for gaming; I only started programming later, on a PC. I will never forget the sound of games loading on the Spectrum (software was distributed on audio cassette tapes, and loading into memory meant an audible signal being interpreted as a sequence of bytes)…

Last weekend I played Battleship with my 5-year-old son. I showed him the game for the first time, and we used pencil and paper. He is learning the alphabet, and I thought this could be a good exercise for him. You know that feeling when you have something to show your children and you remember your own childhood. I don’t know why, but what came back to me was not the paper version, but our first computer and how my older brother and I played Battleship on the Spectrum against the computer.
I kept thinking about this until the evening, when I found the game on the net. I found a lot of information about it, the most important being that it can be run on any PC running GNU/Linux. Yaaay… I can’t remember what I had planned to do that night, but before I went to sleep I had the Spectrum’s Battleship running on my x86_64 openSUSE and had plunged into childhood for a few hours 🙂
To run Spectrum programs on UNIX/Linux you need to install an emulator. In the case of openSUSE, you need to add the Emulators project repository first and then install the FUSE (Free Unix Spectrum Emulator) package. It just works: start the fuse binary with the game file name as a parameter.
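
On openSUSE that boils down to something like the following (the repository and package names are from memory, so double-check them with zypper search; the game file name is just a placeholder):

zypper ar -f obs://Emulators Emulators
zypper in fuse        # the Spectrum emulator, not the filesystem package
fuse battleship.tap
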
I would like to thank the FUSE developers and the openSUSE FUSE maintainers. I don’t play today’s games, but from time to time I can spend a bit of time on the games of my childhood.


Let's breathe life back into my blog

I haven’t updated it in a while and the content is quite old and maybe boring :)

But here I am, and the first step was to migrate from WordPress to Pelican. That way it will be faster for me to publish content and to maintain it as well. I decided to keep the old posts just for reference, so if you see anything strange, let me know and I will try to fix it.

Let's hope that my future posts will teach you something useful or at least will make you think.

Kohei Yoshida

Orcus 0.11.0

I’m very pleased to announce that version 0.11.0 of the orcus library is officially out in the wild! You can download the latest source package from the project’s home page.

Lots of changes went into this release, but the two that I would highlight most are the inclusion of JSON and YAML parsers and their associated tools and interfaces. This release adds two new command-line tools: orcus-json and orcus-yaml. The orcus-json tool optionally handles JSON references to external files when the --resolves-refs option is given, though currently it only supports resolving external files that are on the local file system and only when the paths are relative to the referencing file.

I’ve also written API documentation for the JSON interface in case someone wants to give it a try. Though the documentation on orcus is always a work in progress, I’d like to spend more time bringing it to a more complete state.

On the import filter front, Markus Mohrhard has been making improvements to the ODS import filter especially in the area of styles import. Oh BTW, he is also proposing to mentor a GSOC project on this front under the LibreOffice project. So if you are interested, go and take a look!

That’s all I have at the moment. Thank you, ladies and gentlemen.

Aleksa Sarai

Debugging why ping was Broken in Docker Images

All complicated bugs start with the simplest of observations. I recently was assigned a bug on our openSUSE Docker images complaining that ping didn't work. After a couple of days of debugging, I was taken into a deep and dark world where ancient Unix concepts, esoteric filesystem features and new kernel privilege models culminate to produce this bug. Strap yourself in, this is going to be a fun ride.

Sankar P

Programmer's guide to Microservices/SOA

Introduction

SOA, or Service Oriented Architecture, has been one of the buzzwords among architects, senior developers, and job descriptions for the last few years. However, most of the definitions of SOA online are riddled with formal words, such as the one from OASIS, which says: "A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations."

The above definition, though precise, is too abstract for a developer. This post tries to explain what constitutes a [micro-]service oriented architecture and how it differs from the traditional monolithic approach that a programmer may be accustomed to. It is introductory material aimed at beginners to SOA who have already done some monolithic projects. Experts in SOA could validate the facts mentioned and suggest alternatives.

Microservices

A very informal way to understand microservices: if we split up every class of our design into an HTTP-accessible webservice of its own, we end up with a bunch of services which together constitute a microservices-based architecture. The difference between SOA and microservices is just the level of granularity to which you decompose the classes of a monolithic application into independent HTTP services. The more minimal in functionality each of your service implementations is, the closer it is to being called a microservice.

Splitting a single application into multiple services imposes a few restrictions on our coding, but in turn gives us a lot of flexibility and power in scaling. Let us look at some of the coding/design constraints.

Stateless Systems

The fundamental difference from a monolithic design is in maintaining state information. All the individual classes which earlier interacted via global variables (for locks, mutexes, config variables, etc.) can no longer rely on them.

Let us take a simple example: we are building a rudimentary Shopping application with just one type of item stored. The Shopping application has two parts, the Inventory part that adds new items and the Sales part that removes items. Consider the following pseudo-code:


var mu = &sync.Mutex{}
var stockItemCount = 10

func (i *Inventory) AddToStock(n int) {
    mu.Lock()
    stockItemCount += n
    mu.Unlock()
}

func (s *Sales) UpdateStock(n int) bool {
    mu.Lock()
    defer mu.Unlock()
    if stockItemCount >= n {
        stockItemCount -= n
        return true
    }
    return false
}

In the above code snippet (trivialized for brevity), we have a global variable stockItemCount, which is protected by a mutex mu. The AddToStock function of the Inventory class/type, adds to this global variable whereas the UpdateStock function of the Sales class/type, removes from the global variable. The mu lock synchronizes the access, such that the functions have exclusive access to the global variable on execution.

In a SOA, the Inventory and the Sales classes will become their own individual HTTP webservices. These new individual classes, viz., SalesService and InventoryService, may now run on different machines.

Inter-Service Co-ordination

So how do these different services, potentially running on different machines, share and synchronize access to common data? The solution is simple. We move away from the globalVariable+mutex pattern and implement a publish-subscribe or queueing pattern. What does that mean?

We move the stockItemCount management into a separate StockService which is accessible by both the InventoryService and the SalesService (the classes/types considered earlier). Let us take a look at some sample pseudo-code:


// Stock Service on Machine A
var count = 10

func (s *StockService) ProcessQ() {
    for {
        op := Q.Read()
        if op.Type == "Add" {
            count += op.Value
        } else if op.Type == "Remove" {
            if count >= op.Value {
                count -= op.Value
                http.POST(op.Callback, "success")
            } else {
                http.POST(op.Callback, "failure")
            }
        }
    }
}

type Operation struct {
    Type     string
    Value    int
    Callback string // URL to notify with the result
}



// Inventory Service on Machine B
func (i *InventoryService) AddToStock(n int) {
    Q.Write(Operation{"Add", n, ""})
}



// Sales Service on Machine C
func (s *SalesService) UpdateStock(n int) {
    Q.Write(Operation{"Remove", n, callbackURL})
}

func (s *SalesService) POSTHandler(w http.ResponseWriter, r *http.Request) {
    s.Notify(r.Body) // success or failure
}


As seen above, we have the two classes converted into services (SalesService and InventoryService) and a new third service named StockService. We also use a Q (a distributed queue infrastructure). We have an Operation class/type, with a Type string, instances of which we add to the Q. The AddToStock function of the InventoryService adds a new Operation item of type "Add" to the queue, whereas the UpdateStock function of the SalesService adds a new Operation item of type "Remove". The StockService has a ProcessQ function which loops forever, fetching items from the Q and, based on the Type of the operation, either adding or removing the value.

It should be clear now that the SalesService and the InventoryService are totally stateless. They just make use of the Q to communicate with the StockService. The meticulous among the blog readers will have observed that the StockService is still stateful: we still maintain the count variable as a global variable. In any large-scale system there may be some components which cannot be completely stateless, and such stateful parts come with drawbacks. We will discuss them in a later section.

The Q forms a very central part of the above architecture. The Q could be implemented by the programmer manually, and could potentially be deployed on a totally different set of machines from A, B or C. However, there are stable queue implementations that we could use instead of reinventing the wheel: Apache Kafka and RabbitMQ are popular opensource systems, and if you want a hosted solution, Amazon SQS is offered by AWS and Cloud Pub/Sub by Google. These systems are often called messaging middleware.

There are projects where a massive amount of data will be generated (say from sensors instead of humans) and we will need realtime processing of streaming data. We could use specialized streaming middleware such as Apache Storm or a hosted solution such as Amazon Kinesis.

Benefits of SOA

As we just saw above, what was a simple single process with two classes became three different classes and a queueing system, with four different processes across four (or more) different machines, to accommodate SOA. Why should a programmer put up with so much complexity? What do we get in return? We will see some of the benefits in this section.

Horizontal Scalability

Say we have a server with 4 GB RAM serving 100k requests per second for our Shopping site, and due to an upcoming holiday season we estimate an increase in visitor count that means we will have to serve 400k parallel requests per second. We could do one of two things: (1) buy more expensive hardware, say a 16 GB RAM machine, move our site to this bigger machine for the holiday season and get back to the old system later; or (2) launch another three 4 GB RAM machines and handle the increased load. The former is called Vertical Scaling and the latter Horizontal Scaling.

Vertical scaling is appealing for small workloads but is costlier, as we have to provision huge machines. Even if you could rent high-end VMs in the cloud, the pricing is not too friendly. Horizontal scaling is cheaper on your wallet, provides more throughput, and allows for more dynamism.

Auto-Scaling

In our Shopping application, we saw that the Sales and Inventory services are stateless, so we can horizontally scale them individually. For example, we could launch three new instances of the SalesService to handle holiday traffic while keeping a single machine for the InventoryService. This kind of flexibility would not have been possible with our earlier monolithic design. However, note that the StockService is stateful, so it cannot be horizontally scaled. This is the drawback of having stateful components in your architecture.

Once we know that the systems can be horizontally scaled, the next logical progression is to make the scaling automatic. There are systems like AWS Elastic Beanstalk and Google App Engine (to a certain extent, with vendor lock-in) that allow your application code to scale horizontally by automatically launching new instances whenever demand is higher. The new instances are automatically shut down when the burst of traffic subsides. This removes a huge IT administration overhead. We can have such nice features only because our application architecture is composed of stateless services.

Serverless Systems

The next step in the evolution of auto-scaling is to have code that automatically decides the number of servers on which it should run, instead of having to provision anything. To quote Dr. Werner Vogels, CTO of Amazon: "No server is easier to manage than no server". We are clearly moving in this direction with serverless webapps. Amazon Lambda brings this functional programming dream to life. Google is not far behind and has recently launched Cloud Functions (though not as rich as AWS Lambda yet, imho). There are frameworks to build entire suites of applications without servers, using these services.

Polyglot Development

As we deploy each service independently, we can use different programming languages, frameworks and technologies for each of the services. For example, a CPU-intensive service could be written in a performant language like Go, while the front-end code could be written in parallel in React or Node.js.

Mobile First

Since we have developed proper HTTP APIs for our application, any mobile client can use our webservices in addition to the webclient. These days most companies start with a mobile-first or mobile-only strategy and may not require a webclient at all. Some pro-monolith engineers argue that the first iteration of development should be monolithic and that we can re-engineer for SOA at a later stage, since development speed is faster in a monolithic design. Personally, I disagree. If we start with SOA in mind from scratch, with our modern-day development stack, we can easily plumb existing pieces together instead of reinventing the wheel and finish projects faster. There are frameworks and techniques to auto-generate a lot of code once we have finalized the APIs. I have had experience building web applications both as a monolith and as SOA from scratch, and I have been happier with the SOA code every time; YMMV.

Auxiliary Parts

If we are building an SOA-based system, we need a lot more auxiliary support systems. Without these auxiliary parts in place, it will be very difficult to measure, debug or optimize our services. Different companies implement different subsets of the parts below, based on their business needs and deadlines.

Performance Metrics

The most important auxiliary aspect of SOA is to have precise performance metrics for each of the services. If we have SOA without performance/metrics measurement, it will be as ineffective as trying to do bodybuilding or weight loss without watching what we eat. Without measurement we will not be able to rate-limit requests, prevent DoS attacks, or understand the health of the service. Performance measurement can be done in two ways: (1) measure performance and show metrics via realtime event monitoring; (2) log various events, errors, response times, etc., aggregate these logs and batch-process them later to understand the health of various components. We will need a combination of both approaches for any large-scale system.

Luckily there are plenty of tools, services and libraries available for this. AWS API Gateway is perhaps the easiest way to register your APIs and monitor the endpoints. However, we may need more fine-grained measurements too (such as how long the calls to the database take, which user is causing more load, at what times the load is high, etc.). There are various tools that we could use, such as statsd, ganglia, nagios, etc., and various companies offer hosted solutions too, such as Sematext, SignalFx, New Relic, etc.
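
As a concrete illustration of approach (1), this is roughly all it takes to push a counter into a statsd daemon from one of our Go services; the metric name, host and port are assumptions, and a real service would keep the connection open and add sampling/error handling:

package main

import (
    "fmt"
    "net"
)

// emitCounter sends a single statsd counter increment over UDP.
// The statsd line protocol is simply "<metric>:<value>|c" for counters.
func emitCounter(name string, delta int) error {
    conn, err := net.Dial("udp", "127.0.0.1:8125") // default statsd port
    if err != nil {
        return err
    }
    defer conn.Close()
    _, err = fmt.Fprintf(conn, "%s:%d|c", name, delta)
    return err
}

func main() {
    _ = emitCounter("stockservice.updatestock.calls", 1)
}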

Distributed Tracing

Tracing is a concept that is supplementary to metrics and performance measurement. When a new request comes to a service, it may in turn make use of 3-4 other services to serve the original request, and those services may in turn call 3-4 others. Tracing helps us find out, on a per-request basis, the map of which services were used to serve it, how long it took at each point, where the request got stuck if it could not be serviced, and so on.

We can achieve tracing by assigning a unique id / context object to each incoming request in the outermost API that receives it, and passing it along with every further API request until the final response is delivered. This context could be passed as a parameter in the webservice calls. The monitoring of tracing events could again be realtime or deduced from log aggregation.
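
A minimal sketch of that idea in Go, to make it concrete: the outermost handler assigns an id if the request does not carry one, logs it, and every downstream call forwards the same header. The header name X-Request-ID and the id format are illustrative choices, not something mandated by Dapper or Zipkin:

package main

import (
    "fmt"
    "math/rand"
    "net/http"
)

const traceHeader = "X-Request-ID"

// withTrace guarantees that every request carries a trace id and logs it,
// so log aggregation can stitch together the path of a single request.
func withTrace(next http.HandlerFunc) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        id := r.Header.Get(traceHeader)
        if id == "" {
            id = fmt.Sprintf("%016x", rand.Int63()) // new request: mint an id
        }
        w.Header().Set(traceHeader, id)
        fmt.Printf("trace=%s path=%s\n", id, r.URL.Path)
        next(w, r)
    }
}

// callDownstream forwards the id when this service calls another one.
func callDownstream(id, url string) (*http.Response, error) {
    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set(traceHeader, id)
    return http.DefaultClient.Do(req)
}

func main() {
    http.HandleFunc("/stock", withTrace(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "ok")
    }))
    http.ListenAndServe(":8080", nil)
}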

Dapper is a paper released by Google summarizing how tracing is done at Google. Twitter has released Zipkin, a FOSS implementation of the Dapper paper, which is used in production.

Pagination

Assume that we are exposing an API in our StockService to list all the items that we have, along with their RetailPrice. If we have, say, a billion products, the response to the API will be huge. And not just the response: the system resources needed to build that response on the server side will be tremendous. If we fetch the billion items from the database, the caches will be thrashed, the network will be clogged, and so on. To avoid all these issues, any API that could potentially list a lot of items should paginate its response by page number, i.e., the API call should take a page number as a parameter and return only M items per page. The value of M could be decided based on the size of each item in the response. We can optionally take the number of results that the user wants as an HTTP parameter as well.

For example:

  • http://127.0.0.1:8080/posts/label/tech  - Returns the first 10 blog posts with label "tech"
  • http://127.0.0.1:8080/posts/label/tech/1 - Same as above
  • http://127.0.0.1:8080/posts/label/tech/2 - Returns blog posts 11 -> 20 with label "tech"
  • http://127.0.0.1:8080/posts/label/tech/?limit=5 - Returns the first 5 blog posts with label "tech"
  • http://127.0.0.1:8080/posts/label/tech/?start=15&limit=5 - Returns 5 blog posts with label "tech", starting at post 15
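
To make the last two URLs concrete, here is what the server side of start/limit pagination might look like in one of our Go services; the parameter names, the default page size of 10 and the cap of 100 are illustrative assumptions:

package main

import (
    "fmt"
    "net/http"
    "strconv"
)

var items = make([]string, 1000) // stand-in for the real item store

// listItems returns a single page of items instead of the whole collection.
func listItems(w http.ResponseWriter, r *http.Request) {
    start, _ := strconv.Atoi(r.URL.Query().Get("start"))
    limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
    if err != nil || limit <= 0 || limit > 100 {
        limit = 10 // default page size, capped so one call stays cheap
    }
    if start < 0 || start > len(items) {
        start = 0
    }
    end := start + limit
    if end > len(items) {
        end = len(items)
    }
    fmt.Fprintf(w, "items %d..%d of %d\n", start, end, len(items))
}

func main() {
    http.HandleFunc("/posts/label/tech/", listItems)
    http.ListenAndServe(":8080", nil)
}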

API Versioning

If software never changed, we software engineers would be out of jobs, so it is good that software evolves. However, we need contracts/APIs so that the changes are smooth and do not bring down the entire ecosystem when a change happens. Once we have exposed an API outside our developer team, it is wiser never to change its request/response parameters.

In our StockService example (that we discussed a few paragraphs ago), we could have the following API:

http://stockservice/items/ - Returns all the items.

Later, someone figures out that it is not wise to always return all the items and decides to change the behavior to return only the first 10 items. This change will break all the existing clients, which will now assume that there are only 10 items in total while in reality there may be a billion more items waiting to be paginated.

The easiest way to regulate API changes is by adding a version to the APIs. For example, if the original API to return all the items had a version param, we could just increment it:

http://stockservice/V1/items/ - Returns all the items
http://stockservice/V2/items - Returns the top 10 items

The version need not always be part of the URL. We could also take the version as an extra HTTP header instead of creating a new URL endpoint. It is a matter of taste, and each approach has its own pros and cons.
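
In Go terms, keeping both behaviours alive is as simple as leaving both routes registered; the handlers below are placeholders standing in for the real listing code:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // V1 keeps the original contract so existing clients never break.
    http.HandleFunc("/V1/items/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "all items")
    })
    // V2 introduces the new, paginated behaviour behind a new version.
    http.HandleFunc("/V2/items", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "top 10 items")
    })
    http.ListenAndServe(":8080", nil)
}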

Circuit Breaking

Once we have multiple components in a system, there is a high chance that some part of it will be down for updates at any given time. When that happens, a service could choose to wait for some time before making any retry attempts if it knows that the calls will keep failing. Martin Fowler has written about this in detail, and it is a good read.
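
A toy version of the idea (not Fowler's exact formulation, and the thresholds are arbitrary): after a few consecutive failures the breaker "opens" and callers fail fast instead of hammering a service we already believe is down; after a cooldown it lets calls through again:

package main

import (
    "errors"
    "fmt"
    "time"
)

var ErrOpen = errors.New("circuit open: dependency assumed down")

type Breaker struct {
    failures  int
    openUntil time.Time
}

// Call runs f unless the breaker is open; three consecutive failures
// open it for thirty seconds.
func (b *Breaker) Call(f func() error) error {
    if time.Now().Before(b.openUntil) {
        return ErrOpen // fail fast, do not even attempt the remote call
    }
    if err := f(); err != nil {
        b.failures++
        if b.failures >= 3 {
            b.openUntil = time.Now().Add(30 * time.Second)
            b.failures = 0
        }
        return err
    }
    b.failures = 0 // a success closes the breaker again
    return nil
}

func main() {
    b := &Breaker{}
    for i := 0; i < 5; i++ {
        err := b.Call(func() error { return errors.New("backend unreachable") })
        fmt.Println(err)
    }
}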

Service Discovery

In a large-scale system architected with a microservices-based design, you will have plenty of services. Each of these services may want to know the location (URL, IP address + port, etc.) of the services on which it depends. So we need some kind of centralized service registry where all this information is stored and maintained.

The easiest and probably most used way to identify these services is through DNS. However, there are plenty of other tools available for this purpose too. ZooKeeper from Apache and etcd from CoreOS are strongly consistent, distributed datastores which could be used for service discovery. Consul from HashiCorp and Eureka from Netflix are dedicated service discovery software. All of the above are FOSS projects as well. If your application has fewer than a dozen services, it probably makes sense to just read from a shared file across these services instead of deploying a complex suite of software. But keep in mind that this won't scale as you grow, so it is better to start with good practices as a habit from the beginning.
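
Since DNS is the easiest option, here is what looking up a service via an SRV record could look like from Go; the record name _stockservice._tcp.example.internal is purely an assumption for illustration:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Resolves the SRV record _stockservice._tcp.example.internal,
    // which maps a logical service name to host:port pairs.
    _, addrs, err := net.LookupSRV("stockservice", "tcp", "example.internal")
    if err != nil {
        fmt.Println("lookup failed:", err)
        return
    }
    for _, a := range addrs {
        fmt.Printf("stockservice instance at %s:%d (priority %d)\n", a.Target, a.Port, a.Priority)
    }
}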

SDKs

A new TCP connection takes time to establish because of the initial handshake delay, so it would be foolish not to reuse connections. There is also an inherent need to retry failed HTTP requests before giving up, and some programmers simply do not like writing HTTP client code. It is therefore often recommended to release SDKs for the APIs we publish, to make it easy for programmers to consume them. For example, a Python programmer can merely import our SDK's classes to add an item to our StockService, instead of having to write HTTP retry code.
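
A sketch of what such an SDK hides from its users, namely connection reuse (one shared http.Client) and retry with backoff; the StockService URL, the JSON shape and the retry policy are all illustrative assumptions:

package main

import (
    "fmt"
    "net/http"
    "strings"
    "time"
)

type StockClient struct {
    base string
    http *http.Client // reused across calls, so TCP connections are pooled
}

func NewStockClient(base string) *StockClient {
    return &StockClient{base: base, http: &http.Client{Timeout: 5 * time.Second}}
}

// AddToStock retries a few times before giving up, so callers never have to.
func (c *StockClient) AddToStock(n int) error {
    body := fmt.Sprintf(`{"type":"Add","value":%d}`, n)
    var lastErr error
    for attempt := 0; attempt < 3; attempt++ {
        resp, err := c.http.Post(c.base+"/items", "application/json", strings.NewReader(body))
        if err == nil && resp.StatusCode < 500 {
            resp.Body.Close()
            return nil
        }
        if err == nil {
            resp.Body.Close()
            lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
        } else {
            lastErr = err
        }
        time.Sleep(time.Duration(attempt+1) * time.Second) // crude backoff
    }
    return lastErr
}

func main() {
    client := NewStockClient("http://stockservice")
    if err := client.AddToStock(5); err != nil {
        fmt.Println("giving up:", err)
    }
}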

In the past we have had technologies like DCOM, CORBA and RMI that aimed at doing distributed computing within walled gardens of technology. They lost out in market share due to the simplicity of REST services, where HTTP verbs (GET, PUT, POST, DELETE) can perform remote operations without the need for complex and mostly platform-specific stubs/skeletons.

There is a common middle ground where the best of both worlds can be had. The most notable framework for this is gRPC. It is an open source project, started by Google and adopted by many companies (most recently CoreOS), that helps in providing a web API where client SDK generation is also made easy. It supports HTTP/2 as well. If I were starting a new project today, I would give it serious thought.

Further Information


  • A very good read on the need for SOA is Steve Yegge's Platforms rant.
  • Read the techblog of companies who are moving to SOA (not just those who have moved already)
  • Talk to engineers from Netflix or Amazon Web Services, if you know someone. Sadly, neither company has a presence in India (as of 2016), even though both of their services are available there.
  • Follow Netflix techblog http://techblog.netflix.com/
  • Watch AWS re:Invent videos and, if you have a chance, attend that event (instead of events like Google I/O, which are more business driven)

Other Notes:
  • If you like this post, share it with your friends. 
  • Please send any comments / feedback regarding the language or content. I am planning to use this as teaching material for a one-hour talk at a college shortly. Should some other topics be covered?
  • All opinions expressed are purely personal.

Bruno Friedmann

TOSprint or not to sprint?

TOSprint Paris 2016

Report of a week of sprint

TOSprint in Paris has just ended (wiki page).

What a week!

First of all I want to warmly thank the sponsors, especially Olivier Courtin from Oslandia for the organization, and Mozilla France for hosting us.

What is TOSprint?

Once a year a bunch of hackers from projects under the OSGeo umbrella meet in a face-to-face sprint.
This year it happened in Paris, with a great number of participants (52).

There were roughly five big groups, and while each community was running its own schedule,
there was a lot of cross exchange too.

TOSprint at Mozilla (Mateusz Łoskot)

Personal objectives

My main objective, besides being lucky enough to be a sponsor, was to go there and be in direct contact with upstream.

This can help a lot with improving packages and creating new ones.

Moreover, as Angelos Kalkas, one of my peer maintainers of openSUSE’s Application:Geo repository, was also participating, we decided to make some changes and improve the situation of the repository.

openSUSE packaging

We were using a Staging repository to test the global changes to minimize the breakage on the main repo, kinda à la Factory 😉

Let’s talk about what you will get once the rebuild is finished:
* gdal goes to 2.0.2, which is a big jump since version 1.11

* postgis got upgraded to 2.2.1 with sfcgal as a dependency, so 3D operations are available.

I added two interesting extensions for the postgresql/postgis database:
– pgRouting: a long-missing extension in our repository. See pgrouting.org
– pointcloud: lets you store and work with point clouds in postgresql and also contains a postgis extension. See https://github.com/pgpointcloud/pointcloud
Both packages respect the postgresql/postgis naming scheme, so to install pointcloud on a postgresql94 server you will install the postgresql94-pointcloud package (an example follows below).
They are available at least for 13.2, Leap 42.1 and Tumbleweed.
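
For example (assuming the Application:Geo repository is already configured on the system):

zypper in postgresql94-pointcloud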

A big thanks to Paul Ramsey for his help resolving the issues raised, especially the advice to stick to -j1 during compilation of postgis 🙂

* PDAL (pdal.io, with libght) is a point cloud abstraction layer which is under active development and should
in the future replace libLAS once the compatible C interface is written.
A big thanks to Howard Butler for helping to get all the packaging issues resolved.

* Mapserver: mapserver.org
During the week, the mapserver team made impressive changes:
– First, by closing numerous github issues which hadn’t received updates for a long time.
They ran a bot script which automatically closes such github issues, and users get a nice message about it.
Perhaps it could inspire us on how we write closing messages in bug triage.

  This is an automated comment

  This issue has been closed due to lack of activity. This doesn't mean the issue is invalid,
  it simply got no attention within the last year. Please reopen with missing/relevant information
  if still valid.

  Typically, issues fall in this state for one of the following reasons:

      Hard, impossible or not enough information to reproduce
      Missing test case
      Lack of a champion with interest and/or funding to address the issue

– Part of the team took on the challenge of updating all the tutorial material.
– A number of questions about the future of mapscript: it lacks maintenance resources (human and/or funding).
– Bugfix releases on Thursday night: 6.4.3 and 7.0.1.

For openSUSE, I’ve been discussing a lot with Thomas Bonfort.
The idea would be to propose at least two or more versions that receive security and bug fixes, currently the 6.4 and 7.0 branches.
This would allow people to smoothly upgrade their map files when there’s breakage or adaptation needed.

I classify this request as a good idea and have started analyzing what we can do, so it is actually a work in progress.

Conclusion

There’s nothing more exciting (for me) than participating in a FLOSS event. If some days are more frustrating than others, the rest serve to build this free world we all need.
So I would like to draw your attention to this: all FLOSS software and communities need your contribution. If you’re using one of them, become interested in how it is built and organized, start learning today how to contribute, and enjoy your journey.

There’s more to come, especially on the mapserver side, with more and more packages.
Stay tuned!


Of gases, Qt, and Wayland

Ever since the launch of Argon and Krypton, the openSUSE community KDE team hasn’t really stood still: a number of changes (and potentially nice additions) have been brewing this week. This post recapitulates the most important ones.

I’d like the most recent Qt, please

As pre-announced by a G+ post, the openSUSE repositories bringing KDE software directly from KDE git (KDE:Unstable:Frameworks and KDE:Unstable:Applications) have switched their Qt libraries from Qt 5.5 to the recently released Qt 5.6. This move was made possible by the heroic work of Christophe “krop” Giboudeaux, who was able to beat QWebEngine into submission and make it build in the OBS.

Qt 5.6 comes with much improved multiscreen handling and a lot of fixes contributed by people from KDE. A certain number of issues that were plaguing Plasma users should be gone with this version. Be wary, however, that it is not yet released in stable form, and there may be plenty of bugs. Also, the interaction of KDE software with Qt 5.6 is not completely tested: be sure to try it, and report those bugs!

For these reasons, Qt 5.6 is not yet in Tumbleweed (the stable release will be submitted once it is out). Therefore you will need an additional repository, KDE:Qt56, to be able to update. If you have KDE:Qt55 in your repository list, you must remove it and replace it with KDE:Qt56.

You can add the repository easily like this:

zypper ar -f obs://KDE:Qt56 KDE_Qt56 # Tumbleweed
zypper ar -f obs://KDE:Qt56/openSUSE_Leap_42.1 KDE_Qt56 # for Leap

Then force an update to this repository

zypper ref
zypper dup --from KDE_Qt56

Then update from KDE:Unstable:Frameworks first, then KDE:Unstable:Applications.
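
For example, assuming the repositories were added with the aliases below (check zypper lr for the names you actually used):

zypper dup --from KDE_Unstable_Frameworks
zypper dup --from KDE_Unstable_Applications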

A bit of Wayland in my gases, too!

Of course, this change has also trickled down to the Argon and Krypton media, which have been updated to reflect this change. But that’s not all. The KDE team is now proud to offer a Wayland-based Krypton image, which accompanies the standard one. Thanks to KIWI, making this was faster than shouting Ohakonyapachininko! Well, perhaps not, but still quite easy.

If you want to try out the Wayland-based image, be aware that it is nowhere near even alpha level. There is a lot going on and development is heavy, so it may be broken in many interesting ways before you even notice it. You have been warned!

Where do you find all this goodness? The KDE:Medias directory on download.opensuse.org has all the interesting bits. The three kinds of images residing there are:

  • openSUSE_Argon (x86_64 only): Leap based, X11 based KDE git live image;
  • openSUSE_Krypton (i586 and x86_64): Tumbleweed based, X11 based KDE git live image;
  • openSUSE_Krypton_Wayland (i586 and x86_64): Tumbleweed based, Wayland based KDE git live image.

Let’s not forget about bugs!

Borrowing from my previous entry:

Chances are that you might run into one or more bugs. This is software from git, and code review/continuous integration can only do so much. What should you do in such a case?

  1. (NOT recommended) Kick and scream and demand a refund ;)
  2. If your bug is in the distribution stack, like drivers or packaging, file a bug on openSUSE’s bugzilla with enough details so that it can be looked at;
  3. If your bug is instead in KDE software, turn to KDE’s bugzilla.

As always, “have a lot of fun!”


Where are my noble gases? I need MORE noble gases!

As KDE software (be it the Frameworks libraries, the Plasma 5 workspace, or the Applications) develops during a normal release cycle, a lot of things happen. New and exciting features emerge, bugs get fixed, and the software becomes better and more useful than it was before. Thanks to code review and continuous integration, the code quality of KDE software has also tremendously improved. Given how things are improving, it is tempting to follow development as it happens. Sounds exciting?

Except that there are some roadblocks that can be problematic:

  • If you want to build the full stack from source, there are often many problems if you’re not properly prepared;
  • Builds take time (yes, there are tricks to reduce that, but not all people know about them);
  • If things break… well, you get to keep the pieces.

But aside from personal enjoyment, KDE would really benefit from more people tracking development. It would mean faster bug reporting, uncovering bugs in setups that the developers don’t have, and so on.

What about noble gases?

Recently, an announcement about a gas used in fluorescent lamps generated quite a buzz in the FOSS community. Indeed, such an effort would solve many of the problems highlighted above, because part of the burden would land on integrators and packagers, who are much better suited for this task.

But what, am I telling a story you already know? Not quite.

A little flashback

For those who don’t know, openSUSE has a certain number of additional repositories with KDE software. Some of these have, for many years, been providing the current state of KDE software as found in git for those who wanted to experiment. This hasn’t been done just for the sake of being on the bleeding edge: it has also been used by the openSUSE KDE team itself to identify and fix packaging and dependency issues in advance, and occasionally to help test patches (or submit their own to KDE).

So, in a way, there were already means to test KDE software during development. However, there was a major drawback to adoption: these packages replace the current ones on the system. For technical reasons, it is not possible to do co-installation (for example, in a separate prefix) in a way that is maintainable long term.

So, what now?

After hearing about the announcement, we (the openSUSE KDE team) realized that we already had the foundation to provide this software to our users. Of course, if you got too much neon, you’d asphyxiate ;), so we had to look at alternative solutions. And the solutions were, like the repositories, already there, provided by openSUSE: the Open Build Service and the KIWI image system, which can create images from distribution packages.

But wait, there’s more (TM)!

openSUSE ships two main flavors: the ever-changing (but battle-tested) Tumbleweed, and the rock-solid Leap. Now, one user might want to experience the latest state of many applications, while another might prefer to focus on KDE software while running on a stable base. So, if we could create images using KIWI, why not create two, one for Leap and one for Tumbleweed? And you know what…

Lo and behold, in particular thanks to the heroic efforts of Raymond Wooninck, we had working images! We also like noble gases, so Argon and Krypton were born!

The nitty gritty details

These images work in two ways:

  • They work as live images, meaning you can test the latest KDE software without touching your existing system and without worrying about something breaking;
  • You can also install them, and have a fully updated Leap or Tumbleweed system with the KDE:Unstable repositories active. Use this if you know what you’re doing, and want to test and report issues.

Bugs, bugs everywhere!

Chances are that you might run into one or more bugs. This is software from git, and code review/continuous integration can only do so much. What should you do in such a case?

  1. (NOT recommended) Kick and scream and demand a refund ;)
  2. If your bug is in the distribution stack, like drivers or packaging, file a bug on openSUSE’s bugzilla with enough details so that it can be looked at;
  3. If your bug is instead in KDE software, turn to KDE’s bugzilla.

And of course, like the openSUSE login text says, “have a lot of fun!”