Timo's openSUSE Posts

Super Resolution Video Enhancing with AMD GPU

I've had a reasonably recent AMD Radeon RX 560 graphics card in my Mini-ITX PC for over a year now, and one long-term interest of mine is enhancing old videos. Thanks to Hackweek I could finally spend time on it. In recent years there have been approaches that use e.g. neural networks for super resolution of photos and videos, producing more actual detail and shape than what would be possible via normal image manipulation. The only feasible way to run those is on GPUs, but unfortunately the way GPUs are utilized for compute is a bit of a mess, most of all because the proprietary, single-vendor CUDA is the most widely used option.

On the open source side, there is OpenCL in Mesa by default and Vulkan Compute, and there's AMD's separate open source ROCm, which offers a very big compute platform including, among other things, OpenCL 2.x support and HIP, a source-level translator of CUDA to portable code (with AMD support). I won't go into HIP, but I'm happy there's at least the idea of portable code being thrown around. OpenCL standardization has been struggling a bit: the newest 3.0 tries to fix why the industry didn't adopt 2.x properly, and OpenCL 1.2 remains the actual (but a bit lacking) baseline. I think the same happened a bit with OpenGL 1.x -> 2.x -> 3.x a long time ago, by the way… Regardless of whether the portable code is OpenCL or Vulkan Compute, the open standards are now making good progress forward.

I first looked at ROCm's SLE15SP2 installation guide - interestingly, it also installed on Tumbleweed if I ignored one dependency problem of openmp-extras. My Tumbleweed machine does not have a Radeon, however, so this was just an install test. I was still surprised that even the kernel module compiled against TW's 5.11 kernel - and had it not, there would have been the possibility of using the upstream kernel's module instead.

Over on the Radeon machine side, I installed ROCm without problems. Regardless of that SDK, many real-world applications for video enhancing are still CUDA-specific, so they would at minimum require porting to portable code with HIP. These include projects like DAIN, DeOldify etc. However, there are also projects using OpenCL and Vulkan Compute - I'll take video2x as an example here, as it supports multiple backends, some of which use OpenCL (like waifu2x-converter-cpp) or Vulkan (like realsr-ncnn-vulkan). I gave the latter a try on my Radeon, as it won the CVPR NTIRE 2020 challenge on super-resolution.

I searched my video collection far enough back in time to find some 720p videos, and selected one with a crowd and details in the distance. I selected 4x scaling, even though that was probably a time-costly mistake: even downscaling the resulting 5120x2880 video probably wouldn't be any better than 2x scaling to 2560x1440 and e.g. downscaling that to 1920x1080. I set up the realsr-ncnn-vulkan and ffmpeg paths to point at my compiled versions, gave the 720p video as input and fired away!
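
The pipeline that video2x drives boils down to three steps: extract frames with ffmpeg, upscale them with realsr-ncnn-vulkan, and reassemble the result. A minimal sketch of doing it by hand is below - the file names, frame rate and encoder settings are assumptions, and the model/tile options of realsr-ncnn-vulkan are left at their defaults:

# extract all frames from the source clip into a working directory
mkdir -p frames upscaled
ffmpeg -i input.mp4 frames/frame_%08d.png

# upscale every frame 4x on the GPU (input and output can be directories)
realsr-ncnn-vulkan -i frames -o upscaled -s 4

# reassemble the upscaled frames and copy the audio stream from the original clip
ffmpeg -framerate 25 -i upscaled/frame_%08d.png -i input.mp4 \
       -map 0:v -map 1:a -c:v libx264 -crf 18 -pix_fmt yuv420p -c:a copy output_4x.mp4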

And waited. For 3.5 hours :) I noted that I should probably buy an actual high-end AMD GPU, but on the one hand my PC case is smallish, and on the other hand the market is unbearable due to crypto mining. The end result was fairly good, even if not mind-blowing. There are certainly features it handles very well, and it seems to clean up noise and improve contrast a bit as well. It was clearly something that would not be possible by traditional image manipulation alone. Especially in motion, the video generally looks higher resolution. Maybe the samples I've seen from some other projects out there have been better, but then again my video was a bit noisy and included motion blur, so it was not as high quality as, e.g., a static photo or a video clip tailored to show off an algorithm.

For showing the results, I upscaled a frame from the original 1280x720 clip to the 4x 5120x2880 resolution in GIMP, and then cropped a few pieces of both it and the matching frame in the enhanced clip, downscaling the crops to a 1440p-equivalent size. Finally, I also upscaled the original frame to FullHD and downscaled the enhanced one to FullHD as well, to be more in line with what could be an actual desired realistic end result - a 720p to 1080p conversion that increases detail.
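
(If you prefer the command line over GIMP for this kind of comparison, ffmpeg can also pull the matching frame out of both clips; the timestamp and file names here are just placeholders.)

# grab the same frame from the original and the enhanced clip for comparison
ffmpeg -ss 00:00:42 -i input.mp4     -frames:v 1 original_frame.png
ffmpeg -ss 00:00:42 -i output_4x.mp4 -frames:v 1 enhanced_frame.png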

The “1440p equivalent” comparison shots: standard upscale vs. realsr-ncnn-vulkan.

crop 1: original vs. enhanced

crop 2: original vs. enhanced

The FullHD scaling comparison - open both full-screen in a new window and compare.

original scaled to FullHD vs. enhanced scaled to FullHD

Henne Vogelsang

Measure twice, cut once: App Performance Monitoring with influxdb-rails

Now that we learned how truly magnificent ActiveSupport::Notifications is in the previous post, let's explore a RubyGem that Chris, others, and I have built around it: influxdb-rails. Together with even more awesome Software Libre, it will help you deep-dive into your Ruby on Rails application performance.

This post is part two of a three-part series about monitoring your Ruby on Rails app with influxdb-rails. Make sure to check them all out!

Application Performance Monitoring (APM)

I'm not sure we have to discuss this at all anymore, but here is why I think I need application performance monitoring for my Ruby on Rails apps: I'm an expert in using the software I hack on. Hence I always travel the happy path, cleverly and unconsciously avoid the pitfalls, and use it with forethought about the underlying architecture. Because of that, I usually think my app is fast: perceived performance.

It's not what you look at that matters, it's what you see. – Henry David Thoreau

That is why I need someone to correct that bias for me, in black and white.

Free Software APM

The good folks at Rails bring the instrumentation framework. InfluxData deliver a time series database. Grafana Labs make a dashboard builder. All we need to do, as so often, is to plug Software Libre together: SUCCE$$!

InfluxDB + Grafana == 🧨

I assume your Rails development environment is already running on 127.0.0.1:3000. Getting InfluxDB and Grafana is a matter of pulling containers these days. You can use this simple docker-compose configuration to run them locally.

# docker-compose.yml
version: "3.7"
services:
  influx:
    image: influxdb:1.7
    environment:
      - INFLUXDB_DB=rails
      - INFLUXDB_USER=root
      - INFLUXDB_USER_PASSWORD=root
      - INFLUXDB_READ_USER=grafana
      - INFLUXDB_READ_USER_PASSWORD=grafana
    volumes:
      - influxdata:/var/lib/influxdb
    ports:
      - 8086:8086
  grafana:
    image: grafana/grafana:7.4.5
    ports:
      - 4000:3000
    depends_on:
      - influx
    volumes:
      - grafanadata:/var/lib/grafana
volumes:
  influxdata:
  grafanadata:

A courageous docker-compose up will boot everything, and you can access Grafana at http://127.0.0.1:4000 (user: admin / password: admin). To read data from the InfluxDB container in Grafana, leave the InfluxDB defaults under /datasources alone and configure:

URL: http://influx:8086
Database: rails
User: grafana
Password: grafana

influxdb-rails: 🪢 things together

The influxdb-rails RubyGem is the missing glue code for making your app report metrics into the InfluxDB. Plug it into your Rails app by adding it to your bundle.

bundle add influxdb-rails
bundle exec rails generate influxdb
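
The generator drops an initializer into your app; it needs to point at the "rails" database we created in docker-compose above. The option names below follow the gem's README at the time of writing - treat this as a sketch and double-check it against your installed version:

# config/initializers/influxdb_rails.rb
InfluxDB::Rails.configure do |config|
  # write into the "rails" database defined via INFLUXDB_DB in docker-compose.yml
  config.client.database = "rails"
  config.client.username = "root"
  config.client.password = "root"
  config.client.hosts    = ["localhost"]
end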

The next time you boot your Rails dev-env it will start to measure the performance of your app. Now comes the interesting part, interpreting this data with Grafana.

Understanding Ruby on Rails Performance

Every time you use your dev-env, influxdb-rails will write a plethora of performance data into InfluxDB. Let's look at one of the measurements so you get an idea of what's going on. Remember the ActiveSupport::Notifications message called process_action.action_controller from the previous post? Rails sends this message every time an action in your controller has finished. It includes performance data for this action.

You should know this from somewhere: development.log! It contains the same information.

Started GET "/things" for 127.0.0.1 at 2021-03-25 15:20:14 +0100
...
Completed 200 OK in 5ms (Views: 4.1ms | ActiveRecord: 0.1ms | Allocations: 3514)

ThingsController#index took 5ms to finish overall, 4.1ms of those 5 in rendering, 0.1ms in querying the database. You find the same data for every request you make in your InfluxDB. Head over to http://127.0.0.1:4000/explore and let Grafana plot it for you.
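
In the Explore view you can also query that data directly with InfluxQL. A sketch of such a query, assuming the gem's default measurement name "rails" (the controller field holds the total action runtime in milliseconds):

SELECT mean("controller") FROM "rails" WHERE time > now() - 1h GROUP BY time(1m) fill(null)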

A Grafana Panel plot

Only want to see how your views are performing? Change the field from controller to view. Magic 🪄 But this is only one out of many different ways to look at this measurement. All of the panels below use this one measurement and the data it brings to visualize controller actions.

Many different Grafana panel plots

And this is only one measurement; influxdb-rails reports around a dozen. Now I could send you on your way to learn about ALL. THE. SOFTWARE. involved, but that would be mean, wouldn't it?

Ruby on Rails Dashboards

We, the Free Software community, are in this together! We collaborate on Ruby on Rails. We work together to make InfluxDB better. We cooperate to improve Grafana. Why not do the same for the dashboards that visualize Rails performance data? Let's collaborate! That is why we have built a couple of dashboards you can import. Just copy and paste the URLs into your Grafana.

Play a little; I will tell you about all the nitty-gritty details in the last post of this series: Let's build Software Libre APM together

Henne Vogelsang

ActiveSupport::Notifications is Rad!

One of the lesser known parts of Rails core is the ActiveSupport instrumentation framework. ActiveSupport::Notifications includes all the things you need to implement pub-sub in Rails. Pub-Sub is a software architecture where you publish (send) a message without being specific about who should receive it. Fire and forget.

Receiving a message, and doing something with it, "just" requires you to subscribe to it. Because the publisher doesn't need to know about the subscribers (as they are decoupled), this provides nice opportunities for organization and scale.

Let's explore the joyful shenanigans of this.

This post is the first part of a three part series about instrumenting your Ruby on Rails app with influxdb-rails. Make sure to check them all out!

Publish & Subscribe

There is an instrumentation message emitted from ActionController that includes interesting data about the things happening in your controller action. Let's explore this.

If you don't have a Ruby on Rails app at hand, just set up a minimal one with rails new --minimal

Add this code into an initializer

# config/initializers/instrumentation_subscriber.rb

# What happens in ActionController?
module ActionController
  class InstrumentationSubscriber < ActiveSupport::Subscriber
    attach_to :action_controller

    def process_action(message)
      Rails.logger.debug "Instrumentation #{message.name} Received!"
    end
  end
end

Boot the app (rails server), visit http://127.0.0.1:3000 and you'll see the new log lines in your development.log. So what? How is this different from calling Rails.logger in an action or callback inside your controller? Why is ActiveSupport::Notifications fabulous?

ActiveSupport::Notifications Scales

First, as explained in the intro, the main advantage is that the publisher is decoupled from the subscriber. For instance, you can have more than one subscriber listening to process_action.action_controller.

# config/initializers/slowlog_subscriber.rb

module ActionController
  class SlowlogSubscriber < ActiveSupport::Subscriber
    attach_to :action_controller

    def process_action(message)
      return if message.duration <= 10

      controller_location = [message.payload[:controller], message.payload[:action]].join("#") # -> ThingsController#something
      Rails.logger.debug "#{controller_location} was slow (#{message.duration}ms)" 
    end
  end
end

You are free to organize this however you want. Decouple publisher and subscriber into different files, run subscribers one after the other or even in different threads.

# config/initializers/poor_mans_background_job_subscriber.rb

module ActionController
  class PMBGJSubscriber < ActiveSupport::Subscriber
    include Concurrent::Async

    attach_to :action_controller

    def process_action(message)
      async.background_job(message)
    end

    def background_job(message)
      # ...do something expensive with the message in a thread
      sleep(60)
    end
  end
end

ActiveSupport::Notifications Is Everywhere Today

Second, and you probably already guessed this from the example: what makes this most awesome are the ready-made messages that are already being published today.

Rails for instance uses ActiveSupport::Notifications to publish close to 50(!) different instrumentation events that include data about your application. Data ranging from the HTTP status code of your requests, through which partials were rendered, to more esoteric measurements like the byte range attempted to be read from your ActiveStorage service. Check the instrumentation guide for all the dirty secrets.
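
sql.active_record is one of those ready-made messages - every query ActiveRecord runs publishes one. A minimal block-style subscriber for it could look like this (the payload keys are documented in the instrumentation guide):

# config/initializers/sql_subscriber.rb
ActiveSupport::Notifications.subscribe("sql.active_record") do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  Rails.logger.debug "#{event.payload[:name]} (#{event.duration.round(1)}ms): #{event.payload[:sql]}"
end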

ActiveSupport::Notifications Is Easily Extendible

Last but not least, you can not only listen to messages others publish, you can also publish messages yourself.

ActiveSupport::Notifications.instrument "cool_thing_happened.my_app", {some: :data} do
  MyApp.do_cool_thing(true)
end
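
Anyone interested in that custom message can subscribe to it exactly like to the built-in ones - a small sketch:

ActiveSupport::Notifications.subscribe("cool_thing_happened.my_app") do |name, started, finished, unique_id, payload|
  Rails.logger.info "#{name}: #{payload.inspect} (took #{(finished - started).round(3)}s)"
end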

Okay, you're convinced I hope! Now what do people do with ActiveSupport::Notifications out in the world?

Application Health/Performance Monitoring

Ever wondered how your metrics get to Sentry, New Relic, Datadog or Skylight? You guessed it, via ActiveSupport::Notifications.

Now if the main work, publishing tons of messages through a pub-sub architecture, is already done by Rails, surely there are non-SaaS (a.k.a. Software Libre) options to display this data, right? Turns out there are not. While evaluating options for some of my projects (especially Open Build Service) we came across this gap and started to fill it.

How, why and where? Read on in the next part of this series: Measure twice, cut once: App Performance Monitoring with influxdb-rails

the avatar of Nathan Wolf

Cubicle Chat | 20 Mar 2021

In the absence of a Linux centered live stream called BDLL, I was encouraged by another community member, Vash, to have a kind of “low rent” replacement of the show until it is picked back up. This was my first run at such an activity and I ultimately think it went fairly well. I made […]


Thinking in Questions with SQL

I love SQL, despite its many flaws.

Much is argued about functional programming vs. object-oriented programming - different ways of instructing computers.

SQL is different. SQL is a language where I can ask the computer a question and it will figure out how to answer it for me.

Fluency in SQL is a very practical skill. It will make your life easier day to day. It's not perfect - it has many flaws (like null) - but it is in widespread use (unlike, say, Prolog or D).

Useful in lots of contexts

As an engineer, SQL databases often save me from writing lots of code to transform data. They save me from worrying about the best way to manage finite resources like memory. I write the question and the database (usually) figures out the most efficient algorithm to use, given the shape of the data right now and the resources available to process it. Like magic.

SQL helps me think about data in different ways, lets me focus on the questions I want to ask of the data; independent of the best way to store and structure data.

As a manager, I often want to measure things, to know the answer to questions. SQL lets me ask lots of questions of computers directly without having to bother people. I can explore my ideas with a shorter feedback loop than if I could only pose questions to my team.

SQL is a language for expressing our questions in a way that machines can help answer them; useful in so many contexts.

It would be grand if even more things spoke SQL. Imagine if you could ask questions in a shell instead of having to teach it how to transform data.

Why do we avoid it?

SQL is terrific. So why is there so much effort expended in avoiding it? We learn ORM abstractions on top of it. We treat SQL databases as glorified buckets of data: chuck data in, pull data out.

Transforming data in application code gives a comforting amount of control over the process, but is often harder and slower than asking the right question of the database in the first place.

Do you see SQL as a language for storing and retrieving bits of data, or as a language for expressing questions?

Let go of control 

The database can often figure out the best way of answering the question better than you.

Let’s take an identical query with three different states of data.

Here are two simple relations, a and b, with one attribute each and a single tuple in each relation.

create table a(id int);
create table b(id int);
insert into a values(1);
insert into b values(1);
explain analyze select * from a natural join b;

“explain analyze” tells us how Postgres is going to answer our question: the operations it will take, and how expensive they are. We haven't told it to use quicksort; it has elected to do so.

Looking at how the database is doing things is interesting, but let’s make it more interesting by changing the data. Let’s add in a boatload more values and re-run the same query.

insert into a select * from generate_series(1,10000000);
explain analyze select * from a natural join b;

We’ve used generate_series to generate ten million tuples in relation ‘a’. Note the “Sort method” has changed to use disk because the data set is larger compared to the resources the database has available. I haven’t had to tell it to do this. I just asked the same question and it has figured out that it needs to use a different method to answer the question now that the data has changed.

But actually we’ve done the database a disservice here by running the query immediately after inserting our data. It’s not had a chance to catch up yet. Let’s give it a chance by running analyze on our relations to force an update to its knowledge of the shape of our data. 

analyze a;
analyze b;
explain analyze select * from a natural join b;

Now re-running the same query is a lot faster, and the approach has significantly changed. It’s now using a Hash Join not a Merge Join. It has also introduced parallelism to the query execution plan. It’s an order of magnitude faster. Again I haven’t had to tell the database to do this, it has figured out an easier way of answering the question now that it knows more about the data.

Asking Questions

Let’s look at some of the building blocks SQL gives us for expressing questions. The simplest building block we have is asking for literal values.

SELECT 'Eddard';
SELECT 'Catelyn';

A value without a name is not very useful. Let’s rename them.

SELECT 'Eddard' AS forename;
SELECT 'Catelyn' AS forename;

What if we wanted to ask a question of multiple Starks: Eddard OR Catelyn OR Bran? That’s where UNION comes in. 

select 'Eddard' as forename 
UNION select 'Catelyn' AS forename 
UNION SELECT 'Bran' AS forename;

We can also express things like someone leaving the family. With EXCEPT.

select 'Eddard' as forename 
UNION select 'Catelyn' AS forename 
UNION select 'Bran' AS forename 
EXCEPT select 'Eddard' as forename;

What about people joining the family? How can we see who's in both families? That's where INTERSECT comes in.

(
  SELECT 'Jamie' AS forename 
  UNION select 'Cersei' AS forename 
  UNION select 'Sansa' AS forename
) 
INTERSECT 
(
  select 'Sansa' AS forename
);

It’s getting quite tedious having to type out every value in every query already. 

SQL uses the metaphor “table”. We have tables of data. To me that gives connotations of spreadsheets. Postgres uses the term “relation” which I think is more helpful. Each “relation” is a collection of data which have some relation to each other. Data for which a predicate is true. 

Let’s store the starks together. They are related to each other. 

create table stark as 
SELECT 'Sansa' as forename  
UNION select 'Eddard' AS forename  
UNION select 'Catelyn' AS forename  
UNION select 'Bran' AS forename ;

create table lannister as 
SELECT 'Jamie' AS forename 
UNION select 'Cersei' AS forename 
UNION select 'Sansa' AS forename;

Now we have stored relations of related data that we can ask questions of. We've stored the facts for which "is a member of house Stark" and "is a member of house Lannister" are true. What if we want the people who are in both houses? A relational AND. That's where NATURAL JOIN comes in.
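
With the two relations we just stored, that question looks like this (it should come back with Sansa, the only forename present in both houses):

select * from stark natural join lannister;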

NATURAL JOIN is not quite the same as the set-based AND (INTERSECT above). NATURAL JOIN will work even if the tuples in the two relations we are comparing have different arity.

Let’s illustrate this by creating a relation pet with two attributes.

CREATE TABLE pet as 
SELECT 'Sansa' as forename, 'Lady' as pet
UNION select 'Bran' AS forename, 'Summer' as pet;
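
Joining stark with pet still works even though the tuples have different arity - the join matches on the shared forename attribute and the pet attribute simply comes along:

select * from stark natural join pet;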

Now that we have an AND, what about OR? We have a set OR above (UNION). I think the closest thing to a relational OR is a full outer join.

create table animal as select 'Lady' as forename, 'Wolf' as species UNION select 'Summer' as forename, 'Wolf' as species;
select * from stark full outer join animal using(forename);

OK, so we can ask simple questions with ANDs and ORs. There are also equivalents of most of the relational algebra operations.

What if I want to invade King’s Landing?

What about more interesting questions? We can do those too. Let’s jump ahead a bit.

What if we want to plan an attack on King's Landing and need to consider the routes we could take to get there? Starting from just some facts about the travel options between locations, let's ask the database to figure out routes for us.

First the data. 

create table move (place text, method text, newplace text);
insert into move(place,method,newplace) values
('Winterfell','Horse','Castle Black'),
('Winterfell','Horse','White Harbour'),
('Winterfell','Horse','Moat Cailin'),
('White Harbour','Ship','King''s Landing'),
('Moat Cailin','Horse','Crossroads Inn'),
('Crossroads Inn','Horse','King''s Landing');

Now let's figure out a query that will let us plan routes between an origin and a destination, as below.

We don't need to store any intermediate data; we can ask the question all in one go. Here, "route_planner" is a view (a saved question).

create view route_planner as
with recursive route(place, newplace, method, length, path) as (
	select place, newplace, method, 1 as length, place as path from move --starting point
		union -- or 
	select -- next step on journey
		route.place, 
		move.newplace, 
		move.method, 
		route.length + 1, -- extra step on the found route 
		path || '-[' || route.method || ']->' || move.place as path -- describe the route
	from move 
	join route ON route.newplace = move.place -- restrict to only reachable destinations from existing route
) 
SELECT 
	place as origin, 
	newplace as destination, 
	length, 
	path || '-[' || method ||  ']->' || newplace as instructions 
FROM route;

I know this is a bit “rest of the owl” compared to what we were doing above. I hope it at least illustrates the extent of what is possible. (It’s based on the prolog tutorial). We have started from some facts about adjacent places and asked the database to figure out routes for us.

Let’s talk it through…

create view route_planner as

This saves the relation that results from the given query under a name. We did this above with:

create table lannister as 
SELECT 'Jamie' AS forename 
UNION select 'Cersei' AS forename 
UNION select 'Sansa' AS forename;

While create table will store a static dataset, a view will re-execute the query each time we interrogate it. It’s always fresh even if the underlying facts change.

with recursive route(place, newplace, method, length, path) as (...);

This creates a named portion of the query, called a “common table expression“. You could think of it like an extract-method refactoring.  We’re giving part of the query a name to make it easier to understand. This also allows us to make it recursive, so we can build answers on top of partial answers, in order to build up our route.

select place, newplace, method, 1 as length, place as path from move

This gives us all the possible starting points on our journeys. Every place we know we can make a move from. 

We can think of two steps of a journey as the first step OR the second step, so we represent this OR with a UNION.

join route ON route.newplace = move.place

Once we’ve found our first and second steps, the third step is just the same—treating the second step as the starting point. “route” here is the partial journey so far, and we look for feasible connected steps. 

path || '-[' || route.method || ']->' || move.place as path;

Here we concatenate the instructions built up so far through the journey: take the path travelled so far, and append the next mode of transport and the next destination.

Finally we select the completed journey from our complete route

SELECT 
	place as origin, 
	newplace as destination, 
	length, 
	path || '-[' || method ||  ']->' || newplace as instructions 
FROM route;

Then we can ask the question

select instructions from route_planner 
where origin = 'Winterfell' 
and destination = 'King''s Landing';

and get the answer

                                 instructions                                   
-------------------------------------------------------------------------------
Winterfell-[Horse]->White Harbour-[Ship]->King's Landing
Winterfell-[Horse]->Moat Cailin-[Horse]->Crossroads Inn-[Horse]->King's Landing
(2 rows)

Thinking in Questions

Learning SQL well can be a worthwhile investment of time. It’s a language in widespread use, across many underlying technologies. 

Get the most out of it by shifting your thinking from “how can I get at my data so I can answer questions” to “How can I express my question in this language?”. 

Let the database figure out how to best answer the question. It knows most about the data available and the resources at hand.

The post Thinking in Questions with SQL appeared first on Benji's Blog.


openSUSE Tumbleweed – Review of the week 2021/11

Dear Tumbleweed users and hackers,

The biggest trouble of the week was the mirror infrastructure having a hard time catching up to the full rebuild. Tumbleweed itself was, as usual, solid and has been steadily rolling. In total, there were 4 snapshots (0312, 0315, 0316, and 0317) released last week.

The main changes in those snapshots included:

  • Mozilla Thunderbird 78.8.1
  • Mozilla Firefox 86.0.1
  • KDE Frameworks 5.80.0
  • Bison 3.7.6
  • grub2: boothole v2 fixes: the first iteration was blocked, as dual boot was broken. New signing certs and revocation of old certs will follow.
  • PipeWire 0.3.23
  • Linux kernel 5.11.6
  • SQLite 3.35.0
  • Systemd 246.11

The staging projects are largely unchanged, the main topics there are still:

  • KDE Plasma 5.21.3
  • Perl 5.32.1
  • SELinux 3.2
  • Python 3.9 modules: besides python36-FOO and python38-FOO, we are testing to also ship python39-FOO modules; we already have the interpreter after all. Python 3.8 will remain the default for now.
  • UsrMerge is gaining some traction again, thanks to Ludwig for pushing for it
  • GCC 11 as the default compiler

Nathan Wolf

Noodlings 25 | Getting Feedback

The 25th value-menu-sized podcast. This is just a Nate echo chamber of ideas, but if you are interested in more thoughts and opinions in discussion with other Linux and open source enthusiasts, subscribe to DLN Xtend, a podcast with the Destination Linux Network where I chat with my co-hosts Matt and Wendy on […]

openSUSE News

Entire Rebuild of Tumbleweed Brings Enormous Update

There were few packages untouched in openSUSE’s rolling release distribution Tumbleweed this week as updates poured out of five new snapshots.

The 20210311 snapshot provided an entire rebuild of the distribution, which is something that occasionally happens.

The most recent 20210317 snapshot updated more than a half dozen packages, which included the data plotting package kplotting as the lone KDE Frameworks 5.80.0 package to update in the snapshot. A memory leak fix was made in the update of flatpak 1.10.2 and a security update in the package fixed a potential attack where a flatpak application could use custom formatted .desktop files to gain access to files on the host system. An update of systemd 246.11 fixed a void pointer arithmetic warning and moved Secure Boot logic to a new file. Other updates in the snapshot included spacenavd 0.8, python-packaging 20.9, python-scipy 1.6.1 and rtkit 0.13.

Snapshot 20210316 delivered most of the 5.80.0 Frameworks packages. Kirigami, which offers application framework components for mobile, had multiple improvements and fixes; it changed and improved the PlaceholderMessage for new Application Programming Interfaces. The Plasma Framework package ported a Plasma Style Kirigami Theme plugin to the new Kirigami API. A Flatpak manifest was also added to the Kirigami template. The snapshot brought an update of ImageMagick 7.0.11.3, which decodes HEIC images in sRGB instead of YCbCr. Mozilla Firefox 86.0.1 fixed a frequent Linux crash on browser launch. The 5.11.6 Linux kernel was updated in the snapshot and had some Btrfs fixes; the kernel also enabled the headset microphone of the Acer Swift line. There was a fix for the maximum length of a password entered through a terminal with cryptsetup 2.3.5. Various fixes were made in the update of xfsprogs 5.11.0, and the Open Chinese Convert library opencc 1.1.2 added a Hong Kong Traditional Chinese conversion. A major version update of Python-hyperlink to 21.0.0 was included in the snapshot and bumped some long overdue dependencies. Other packages to update in the snapshot were gnutls 3.7.1, vim 8.2.2607 and sqlite3 3.35.0, which enhanced the .stats command to accept the new arguments stmt and vmstep, showing the prepared statement statistics and only the virtual-machine step count, respectively.

Updated KDE packages appeared to be a common theme throughout the week as snapshot 20210315 provided an update of KDE Plasma 5.21.2; the Plasma Desktop update avoids using non-integer numbers as spacing, and Plasma Workspace fixed a bug for more graceful handling of the escape key in the history view. KWin had some updates for Wayland, like fixing the PrepareForSleep dbus connection and a commit that honors the NoPlugin option. A major version of rubygem-rspec-rails 5.0.0 added new fixture test support code and dropped support for Rails below 5.2. Multimedia package pipewire 0.3.23 has some critical fixes from the previous release, makes improvements in JSON parsing and encoding, and Bluetooth now supports delay adjustments. Wireshark 3.4.4 fixed one Common Vulnerabilities and Exposures entry that could open unsafe URLs.

Snapshot 20210312 provided some OpenPGP and calendar fixes with the Mozilla Thunderbird 78.8.1 update. Data transfer package curl 7.75.0 added Hyper as a new optional HTTP backend and introduced AWS HTTP v4 Signature support. An update of btrfsprogs 5.11, the userspace utilities to manage btrfs filesystems, brought in a new subcommand create-control-device. The new version of bison 3.7.6 fixed reused push parsers and table generation. The updated git 2.30.2 took care of a CVE through which git could be fooled into running remote code during a clone. Other packages updated in the snapshot were text editor nano 5.6.1, sssd 2.4.2 and yast2 4.3.59.

Snapshot 20210311 was an entire rebuild of the Tumbleweed distribution. This snapshot also included package updates from KDE Plasma 5.21.2, but KDE Applications 20.12.3 had the most updates in the snapshot. Video editor Kdenlive had the most enhancements; these included improved handling when switching to fullscreen, and there was a crash fix for moving grouped keyframes. Another 20.12.3 Applications update was made to the storage package Akonadi, which makes use of the loose_ option prefix for the MySQL server settings. Most of the changes in the 7.1.1.2 update of LibreOffice involved translations. Updates to brltty, php, vim, fwupd and redis were among the several packages updated in this rebuild snapshot.