Mindtrek 2025
The yearly Mindtrek 2025 was arranged again in Tampere, Finland, with some nice changes to the format. The full program was divided into three tracks – in addition to the former business and public sector tracks, there was also a track for developers! I spent most of my time in the new Developers track thanks to the interesting talks.
SUSE also had a presentation about digital sovereignty, covering operational sovereignty and business continuity, and overall making a great case for why open source is not a plus, it’s a must.
The keynote of the conference was by Peer Heinlein about OpenCloud, how it relates to NextCloud, what happened to OwnCloud – and how all of that ties into how Europe can move forward to become self-sustaining in the digital world. So, the same theme there as well, which is not a surprise given the main theme for the conference was OpenTech: Towards Digital Sovereignty.
As for the developers track, there was an interesting talk about how open source was key in enabling Opinsys/Puavo to enter the German education market. Another talk was about studies on using AI, notable for its neutrality and lack of hype, even when the findings were negative. There was a talk about HPC federation in Europe’s compute ecosystem, one on what running the Meson project is like in practice, a low-level presentation about TPM2, and lastly a presentation from an open source company working with GIS spatial information.

All in all, the conference gave me lots of good conversations with many of the presenters and other attendees, plus a nice, comfortable evening program where the conversations continued for hours and hours.

Thanks to COSS for once again organizing an excellent conference, and I hope to see you all next year!

GSoC 2025, Building a Semantic Search Engine for Any Video
Hello, openSUSE community!
My name is Akash Kumar, and I was a Google Summer of Code (GSoC) 2025 mentee with the openSUSE organization. This blog post highlights the project I developed during this mentorship program, which openSUSE and its mentors helped make possible. This summer, I had the incredible opportunity to contribute to the project titled “Create open source sample microservice workload deployments and interfaces.” The goal was to build a functional, open-source workload that could provide relevant analytics for a specific use case.
For my project, I chose to tackle a common but complex problem: searching for content inside a video. This blog post details the outcome of my GSoC project: a full, end-to-end semantic video search engine.
The Problem: Beyond Keywords
Ever tried to find a specific moment in a long video? You might remember the scene vividly - a character gives a crucial speech, or there’s a beautiful, silent shot of a landscape - but you can’t remember the exact timestamp. You end up scrubbing back and forth, wasting minutes, or even hours.
Traditional video search relies on titles, descriptions, and manual tags. It’s limited. It can’t tell you what’s inside the video.
As part of my GSoC deliverable, I set out to solve this. I wanted to build a system that lets you search through a video’s content using natural language. I wanted to be able to ask, “find the scene where they discuss the secret plan in the warehouse,” and get an instant result.
The Big Picture: A Two-Act Play
The entire system is divided into two main parts:
- The Ingestion Pipeline (The Heavy Lifting): An offline process that takes a raw video file and uses a suite of AI models to analyze it, understand it, and store that understanding in a specialized database.
- The Search Application (The Payoff): A real-time web application with a backend API and a frontend UI that lets users perform searches and interact with the results.
Let’s walk through how it all works, step by step.
Part 1: The Ingestion Pipeline - Teaching the Machine to Watch TV
This is where the magic begins. We take a single .mp4 file and deconstruct it into a rich, multi-modal dataset.
Step 1: Deconstructing the Video (Extraction)
First, we break the video down into its fundamental atoms: shots, sounds, and words. I used a series of specialized AI models for this:
- Shot Detection (TransNetV2): The video is scanned to identify every single camera cut, creating a “skeleton” of the video’s structure.
- Transcription & Diarization (WhisperX): The audio is extracted, and WhisperX transcribes all spoken dialogue into text. Crucially, it also performs diarization—identifying who spoke and when, assigning generic labels like SPEAKER_00 and SPEAKER_01.
- Visual Captioning (BLIP): For every single shot, we extract a keyframe and ask the BLIP model to generate a one-sentence description of what it sees (e.g., “a man in a suit is standing in front of a car”).
- Action & Audio Recognition (VideoMAE, AST): We go even deeper, analyzing the video clips to detect actions (“talking,” “running”) and the audio to identify non-speech events (“music,” “applause,” “engine sounds”).
At the end of this step, we have a mountain of raw, timestamped data.
Step 1.5: The Human in the Loop (Speaker ID)
The AI knows that different people are speaking, but it doesn’t know their names. This is where a little human intelligence goes a long way. The pipeline automatically pauses and launches a simple web tool. In this tool, I can see all the dialogue for SPEAKER_00, play a few clips to hear their voice, and map them to their real name, like “John Wick.” This simple, one-time step makes the final data infinitely more useful.
Step 2: Finding the Narrative (Intelligent Segmentation)
Searching through hundreds of tiny, 2-second shots isn’t a great user experience. We need to group related shots into coherent scenes or segments. A single conversation might involve 20 shots, but it’s one single event.
To solve this, I developed a “Boundary Scoring” algorithm. It iterates through every shot and calculates a “change score” to the next one, based on a weighted combination of factors:
- Has the topic of conversation changed? (semantic text similarity)
- Have the visuals changed significantly?
- Did the person speaking change?
- Did the background sounds or actions change?
If the total change score is high, we declare a “hard boundary” and start a new segment. This transforms a chaotic list of shots into a clean list of meaningful scenes.
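To make the idea concrete, here is a minimal sketch of such a boundary-scoring pass in Python. The weights, threshold and field names are hypothetical, not the project’s actual values:

```python
def change_score(prev_shot, next_shot, weights=None):
    """Weighted change score between two adjacent shots.

    Each shot is a dict where 'text_sim' and 'visual_sim' hold the 0..1
    similarity to the *next* shot, plus 'speaker' and 'audio_label' fields.
    (Illustrative schema, not the project's real one.)
    """
    w = weights or {"text": 0.4, "visual": 0.3, "speaker": 0.2, "audio": 0.1}
    score = 0.0
    score += w["text"] * (1.0 - prev_shot["text_sim"])      # topic changed?
    score += w["visual"] * (1.0 - prev_shot["visual_sim"])  # visuals changed?
    score += w["speaker"] * (prev_shot["speaker"] != next_shot["speaker"])
    score += w["audio"] * (prev_shot["audio_label"] != next_shot["audio_label"])
    return score


def segment(shots, threshold=0.5):
    """Group shots into segments, cutting where the change score is high."""
    segments, current = [], [shots[0]]
    for prev, nxt in zip(shots, shots[1:]):
        if change_score(prev, nxt) >= threshold:  # "hard boundary"
            segments.append(current)
            current = []
        current.append(nxt)
    segments.append(current)
    return segments
```

With weights like these, a simultaneous change of topic, visuals and speaker easily crosses the threshold, while a mere camera cut within a conversation does not.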
Step 3: Adding a Layer of Genius (LLM Enrichment)
With our coherent segments defined, we bring in a Large Language Model (like Google’s Gemini) to act as an expert video analyst. For each segment, we feed the LLM all the context we’ve gathered—the transcript, the speakers, the visual descriptions, the actions—and ask it to generate:
- A short, descriptive Title.
- A concise 2-3 sentence Summary.
- A list of 5-7 relevant Keywords.
This adds a layer of human-like understanding, making the data even richer and more searchable.
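As a rough illustration of this enrichment step, the per-segment prompt could be assembled along these lines (field names and wording are hypothetical; the real pipeline may structure the LLM call differently):

```python
def build_enrichment_prompt(segment):
    """Assemble the gathered context for one segment into a single LLM prompt.

    'segment' is assumed to carry 'transcript', 'speakers', 'captions' and
    'actions' keys -- an illustrative schema, not the project's actual one.
    """
    lines = [
        "You are an expert video analyst. Based on the context below, return:",
        "1. A short, descriptive title",
        "2. A concise 2-3 sentence summary",
        "3. A list of 5-7 relevant keywords",
        "",
        f"Transcript: {segment['transcript']}",
        f"Speakers: {', '.join(segment['speakers'])}",
        f"Visual descriptions: {'; '.join(segment['captions'])}",
        f"Detected actions: {', '.join(segment['actions'])}",
    ]
    return "\n".join(lines)
```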
Step 4: Preparing for Search (Indexing)
The final step is to prepare this data for lightning-fast search. We use a vector database (ChromaDB). The core idea is to convert text into numerical representations called embeddings.
The key innovation here is our hybrid embedding strategy. For each segment, we create two distinct embeddings:
- Text Embedding: Based on the transcript and summary. This represents what was said.
- Visual Embedding: Based on the visual captions and actions. This represents what was shown.
These embeddings are stored in ChromaDB. Now, the video is fully processed and ready to be searched.
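A toy stand-in for this hybrid index can be sketched in pure Python, with a bag-of-words “embedding” in place of a real model and plain dictionaries in place of ChromaDB (everything here is illustrative, not the project’s implementation):

```python
import math
from collections import Counter


def toy_embed(text):
    """Stand-in for a real sentence-embedding model: bag-of-words counts."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class HybridIndex:
    """Two embeddings per segment: one for what was said, one for what was shown."""

    def __init__(self):
        self.text_index, self.visual_index = {}, {}

    def add(self, seg_id, transcript, captions):
        self.text_index[seg_id] = toy_embed(transcript)
        self.visual_index[seg_id] = toy_embed(captions)

    def query(self, text, index):
        q = toy_embed(text)
        ranked = sorted(index.items(), key=lambda kv: cosine(q, kv[1]), reverse=True)
        return [seg_id for seg_id, _ in ranked]
```

The real system stores dense model embeddings in two ChromaDB collections, but the shape of the data (one text vector and one visual vector per segment ID) is the same.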
Part 2: The Search Application - Reaping the Rewards
This is where all the offline work pays off. The application consists of a backend “brain” and a frontend “face.”
The Brains: The FastAPI Backend
The backend API is the engine of our search. When it receives a query, it follows a precise, high-speed process:
- Vectorize Query: The user’s query is converted into the same type of numerical vector using the same model from the indexing step.
- Hybrid Search: It queries ChromaDB twice in parallel—once against the text embeddings and once against the visual embeddings.
- Re-Rank & Fuse: It takes both sets of results and merges them using an algorithm called Reciprocal Rank Fusion (RRF). This is incredibly powerful. A segment that ranks highly on both the text and visual search (e.g., a character says “Look at the helicopter” while a helicopter is on screen) gets a massive score boost and shoots to the top of the list.
- Respond: The backend fetches the full metadata for the top-ranked results and sends it back to the frontend as a clean JSON response.
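The fusion step in particular is simple to write down. Reciprocal Rank Fusion scores each document as the sum of 1/(k + rank) over every ranking it appears in; a minimal sketch:

```python
def rrf_fuse(text_ranking, visual_ranking, k=60):
    """Merge two ranked lists of segment IDs with Reciprocal Rank Fusion.

    score(d) = sum over rankings of 1 / (k + rank_of_d), with rank starting
    at 1. k=60 is the commonly used constant from the original RRF paper.
    """
    scores = {}
    for ranking in (text_ranking, visual_ranking):
        for rank, seg_id in enumerate(ranking, start=1):
            scores[seg_id] = scores.get(seg_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first
    return sorted(scores, key=scores.get, reverse=True)
```

A segment that appears near the top of both rankings accumulates two large terms and outranks a segment that tops only one list, which is exactly the “helicopter” effect described above.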
The Face: The Streamlit UI
The frontend is a simple, clean web interface built with Streamlit. It features a search bar, a video player, and a results area. When you click “Play” on a search result, it instantly jumps the video player to the exact start time of that segment. It’s fast, intuitive, and incredibly satisfying to use.
The Final Result & GSoC Experience
Imagine searching for “a tense negotiation in a warehouse.” The system finds it in seconds because:
- The Text Search matches the dialogue about “the deal,” “the money,” and “the terms.”
- The Visual Search matches the AI captions like “two men sitting at a table” and “a dimly lit, large room.”
- The RRF algorithm sees that both signals point to the same segment and ranks it as the #1 result.
This project was a fascinating journey into the world of multi-modal AI. It demonstrates that by combining the strengths of different models, we can deconstruct unstructured data like video and reassemble it into a smart, searchable, and genuinely useful asset.
I want to extend a huge thank you to my mentor, @bwgartner, and the entire openSUSE community for their support and guidance throughout the summer. Participating in GSoC with openSUSE has been an invaluable learning experience.
The days of aimless scrubbing may soon be behind us. If you’re interested in trying it out or contributing, you can find the entire project on GitHub: https://github.com/AkashKumar7902/video-seach-engine.
Budapest Audio Expo 2025
Last year’s Budapest Audio Expo was the first hifi event I had truly enjoyed in years. Needless to say, I spent a day this weekend at the Audio Expo again :-) Building on last year’s experience, I chose to visit the expo on Sunday. There were fewer people and better-sounding systems.
TL;DR:
If I had to sum up the expo in one statement: Made in Hungary audio rivals the rest of the world in quality, while often being available at a much more affordable price. My top three favorite sounds came from systems with components mostly made in Hungary: Dorn / Heed, NCS Audio / Qualiton, and Popori / 72 Audio (in alphabetical order :-) ). I listened to quite a few systems costing a lot more. However, these were the systems that I enjoyed listening to the most.
Dorn / Heed
Dorn / Heed Audio holds a special place in my heart. And not just in my heart: I listen to a full Heed system at home. I was very happy to see them at the event. You could listen to their latest speaker there, connected to an Elixir (amplifier) and an Abacus (DAC). I listened to the exact same setup just a few weeks ago in their showroom. Here they sounded even better. As you can see in the photos, they are “just” a pair of bookshelf speakers, yet they could fill the room with clean sound. Despite their size, the bass was also detailed and loud enough (disclaimer: there was no Metallica in the playlist, which might have changed this opinion ;-) ). Probably one of the cheapest systems on display at the Expo (not counting DIY), but still one of the best sounding. Natural, life-like sound, a joy to listen to. I went back there to rest whenever I was tired of all the artificial-sounding systems.
- Dorn (speakers): https://www.facebook.com/profile.php?id=100093087444699
- Heed (everything else): https://heedaudio.com/

Audio Expo 2025: Heed / Dorn

Audio Expo 2025: Heed / Dorn
NCS Audio / PRAUDIO / Qualiton
I first wrote about NCS Audio three years ago. Last year I called the Reference One Premium the best value speaker at the expo, as it sounded equally good or sometimes even better than speakers costing an order of magnitude more. Well, nothing has changed from this point of view.
This year NCS Audio shared a room with PRAUDIO and Qualiton. I had a chance to participate in a quick demo, where we learned more about the various digital and analog sources and the speakers, and then listened to them. It was the kind of stereotypical hifi event experience: songs I had listened to many times in various rooms during the day. Still, it was good, as it was a wide selection of music, and they sounded just as good as I expected :-)
- NCS Audio: https://www.ncsaudio.eu/
- PRAUDIO: https://praudio.hu/
- Qualiton: https://qualiton.eu/

Audio Expo 2025: NCS Audio

Audio Expo 2025: NCS Audio
Popori / 72 Audio
While most rooms featured devices that are in production and available on the market, the room of 72 Audio was different. Everything we listened to, except for the electrostatic speakers from Popori Acoustics, was hand-built a long time ago. I was rude and responded to a text message while sitting in the back row, listening to the music in the room. While I was typing, the music stopped. A new song started, and suddenly I looked up, confused: for a moment I was looking for the lady singing. Of course, she was not in the room, just in the recording :-) Well, my ears are very difficult to trick, and it only works when my mind is somewhere else. This was only the third time it has ever happened to me.
- Popori Acoustics: https://poporiacoustics.com/
- 72 Audio: https://www.72audio.com/

Audio Expo 2025: Popori Acoustics / 72 Audio
Disappointments
Of course, not everything was perfect. I do not want to name names here, just share a few experiences.
I have seen ads for a pair of streaming speakers multiple times a day for the past few months. Finally I had a chance to listen to them. Well. An extreme amount of detail. An extreme amount of bass, my weakness. Still, everything sounded too processed, too artificial. Not my world.
Recently I learned about a speaker brand developed and manufactured in the city of a close friend. Of course I became curious how it sounds. Well, practically it is a high-end home theater speaker. My immediate reaction was to look around for where I could watch the film. The exact same song, which was perfectly life-like on the NCS Reference One speakers, sounded like background music from a movie on this far more expensive system.
The hosts in most rooms were really kind, helpful, smiling. But not everywhere. When someone blocks the exit and tries to push a catalog into my hands without much communication, that is a guarantee that I do not want to return there. Luckily this mentality was not typical at all at this event.
Others
Of course I cannot describe everything from a large expo in a single blog post. But other than my top 3 favorites, there were a few more I definitely have to mention.
- Allegro Audio was good, as always. They also had a Made in Hungary component, the Flow amplifier.
- Hobby Audio was playing music from tape through a self-built amplifier and a pair of speakers. They looked DIY, and they were actually DIY, but had a much more natural sound than some of the much more expensive systems at the expo.

Audio Expo 2025: Audio Hobby
- Natural Distortion demonstrated a prototype DAC and amplifier. Some of the features are still under development; nonetheless, what already worked sounded really nice and natural. A story definitely worth following!
- Sound Mania had a nice-sounding pair of speakers from Odeon Audio. Well, they look kind of strange, but sound surprisingly good :-)

Audio Expo 2025: Sound Mania

Audio Expo 2025: Sound Mania
Disclaimers
I did not listen to everything. I skipped the rooms with headphones, and probably two rooms at the end of a corridor that were always full when I tried to get in. Nobody asked me to keep quiet about the things I did not like; I just do not like being negative about things that are mostly subjective. Nor did anybody promise me money or any kind of audio equipment to write nice things, even if I would not mind receiving a new pair of speakers, an amplifier and a DAC :-)
Closing words
I borrow my closing words from my blog last year: I really hope that next year we will have a similarly good Audio Expo in Budapest!
openSUSE Leap Ready for Liftoff
Users are stepping forward to share how Linux distributions like openSUSE power their projects and interests as the community prepares for the next enduring release of openSUSE Leap.
Releases like Leap 16 can be used for aviation tracking, which is one of several use cases for the distribution.
“I’ve been feeding data since 2018 to FlightRadar24, and a few years ago I started sending to OpenSky Network and Plane Finder,” wrote one openSUSE user on the project’s mailing list. “My average distance is around 170 nautical miles.”
In the Mississippi Delta, the user, Malcolm, uses openSUSE as the backbone of these high-tech air traffic monitoring systems.
FlightRadar24, OpenSky Network and Plane Finder collect and share real-time aircraft data from ADS-B receivers worldwide, which allows users to track flights on interactive maps.
Using a stick PC with an Intel Celeron J4125 processor, a 26-inch ADS-B antenna, filters and amplifiers, and openSUSE’s reliability, Malcolm tracks more than 2,000 aircraft a day.
The setup runs on x86_64 hardware, while a Raspberry Pi 3 doubles as a GPS time server, keeping the system and local network synchronized. Systemd services manage the various software packages provided by the tracking networks. Leap’s stability and enduring maintenance and security support make it an ideal distribution to deploy and enjoy.
Stories like this highlight the use cases that will shape Leap 16’s rollout. The distribution, which bridges community-driven development and enterprise-grade readiness, is expected to serve an even wider range of scenarios. From IoT devices and lab environments to production servers and specialized hobbyist setups, Leap 16 marks the start of a new lifecycle plan for the distribution.
Members of the openSUSE Project are trying to showcase how people use openSUSE. If you have a use case for Leap 16 that you want to share, comment on the project’s mailing list.
People can leave feedback on survey.opensuse.org regarding the release of Leap 16.
Tumbleweed – Review of the week 2025/40
Dear Tumbleweed users and hackers,
The last trimester of 2025 has begun, and the northern hemisphere of the planet is getting colder; days are grey and short. What could be better than updating your Tumbleweed system? Only one thing: contributing to the package pool! Remind yourself: openSUSE is maintained by a vibrant community, and everybody is invited to help make the distributions better. Of course, Tumbleweed does not want to steal the spotlight from Leap 16.0 this week.
Tumbleweed has seen a total of five snapshots (0925, 0929, 0930, 1001, and 1002) this week, bringing you these changes:
- GNOME 49.0: GDM has switched to dynamic users to support multi-seat setups. We have seen issues with machines that had an /etc/nsswitch.conf without ‘systemd’ registered as a lookup for ‘passwd, shadow, and groups’. Have a look at https://bugzilla.opensuse.org/show_bug.cgi?id=1250513 for details.
- Coreutils 9.8
- Poppler 25.09.1
- Boost 1.89
- Meson 1.9.1
- strace 6.17
- AppStream 1.1.0
- Ghostscript 10
- LLVM 21.1.2
- PHP 8.4.13
- Linux kernel 6.16.9 & 6.17.0
- linux-glibc-devel 6.17
- Postgresql 18.0
- Mesa 25.2.3 & 25.2.4
- cURL 8.16.0
- Mozilla Firefox 143.0.3
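For reference, the nsswitch.conf lookups mentioned in the GNOME 49.0 entry above would look roughly like this (exact lines may differ per system; see the linked bug report for authoritative guidance):

```
passwd: compat systemd
group:  compat systemd
shadow: compat systemd
```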
The staging areas are currently testing these areas of development:
- KDE Plasma 6.5 beta
- Systemd 258
- Rust 1.90
- ffmpeg-8 as default ffmpeg-implementation
Tomato and burrata pasta with basil Parmesan

End of summer means home-grown tomatoes. Credit for this luscious Italian recipe goes to food blogger Notorious Foodie. For the full recipe see below. Their presentation beats mine, of course (make sure to turn the music on as well!).




- Slice in half ~400g of sweet, plump tomatoes – any small variety you like, datterini, piccolo, cherry – I used piccolo here
- In a med-heat pan, add 4tbsp EV olive oil, followed by 3 cloves garlic, half a shallot, 1 peperoncino / red chilli – fry ~4 mins till oil is infused and fragrant
- Add in tomatoes and season with sea salt + black pepp and basil – allow to cook for 15-20 mins, lid on – careful heat isn’t too high, you want a gentle simmer
- Remove lid and deglaze with 200ml of white wine – I used this Soave Classico from Inama Vigneti
- Allow to reduce for further 10 mins – you should see the tomato skins loosen and fall off – at this point, remove as many skins as poss + the garlic, basil, shallot and peperoncino, they’ve done their work
- Remove from heat and add sauce to pot along with 1 whole burrata
- Add 2tbsp olive oil and blitz together till smooth and creamy – taste and adjust for seasoning
- Add spaghetti to salted boiling water and add sauce back into the pan to heat up
- Throw your al dente pasta into the sauce along with some pasta water and continue melding together till perfectly cooked – once almost done, kill heat, add in some grated parmigiano and combine
- Basil Parmesan: add 75g cold parmigiano + a handful of basil leaves to a food processor and blitz till crumbly and bright green – this stuff is amazing
- Plate up, spooning over more of the tomato burrata sauce and sprinkling over your basil parm
Version 4.10.1 of syslog-ng now available
Version 4.10.1 is a bugfix release, not needed by most users. It fixes the syslog-ng container and platform support in some less common situations.
Before you begin
I assume that most people are lazy and/or overbooked, just like me. So, if you already have syslog-ng 4.10.0 up and running, and packaged for your platform, just skip this bugfix release.
What is fixed?
- You can now compile syslog-ng on FreeBSD 15 again.
- The syslog-ng container has Python support working again.
- Stackdump support compiles only on glibc-based Linux systems, but it also used to be enabled on other systems when libunwind was present. This problem affected embedded Linux systems using alternative libc implementations, like OpenWRT and Gentoo in some cases. It is now turned off by default and needs to be enabled explicitly.
What is next?
If you are switching to 4.10.X now, using 4.10.1 might spare you some extra debugging. Especially if you are developing for an embedded system.

syslog-ng logo
Next Chapter Opens with Leap 16 Release
CA / CS / JA / LT / SV / ES / ZH-TW
Members of the openSUSE Project are thrilled to announce the release of openSUSE Leap 16.
This major version update of our fixed-release community Linux distribution has a fresh software stack and introduces an unmatched maintenance and security support cycle, a new installer and simplified migration options.
“Vendors and developers should give Leap and Leap Micro a serious look and consider it as the target platform for their solutions,” said release manager Lubos Kocman. “You get 24 months of free maintenance and security updates. No other community distro offers that at no cost.”
Leap 16 as a community-supported platform will shape open source development breakthroughs and real-world solutions in the years ahead. The release is 2038-safe and comes with 32-bit (ia32) support disabled by default; users have the option to enable it manually and enjoy gaming with Steam, which still relies on 32-bit libraries. The hardware requirements have also changed: Leap 16 now requires x86-64-v2 as a minimum CPU architecture level, which generally means CPUs bought in 2008 or later. Users with older hardware can migrate to Slowroll or Tumbleweed.
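For the curious, whether a CPU meets the x86-64-v2 baseline can be estimated by checking the flags in /proc/cpuinfo, as in this rough Python sketch (the flag list follows the published x86-64 microarchitecture levels; treat it as illustrative rather than authoritative):

```python
# Flags corresponding to the x86-64-v2 feature set: CMPXCHG16B, LAHF/SAHF,
# POPCNT, SSE3 (reported as "pni"), SSE4.1, SSE4.2 and SSSE3.
X86_64_V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "pni", "sse4_1", "sse4_2", "ssse3"}


def supports_x86_64_v2(cpu_flags):
    """True if the given set of CPU flags covers the v2 feature set."""
    return X86_64_V2_FLAGS <= set(cpu_flags)


def read_cpu_flags(path="/proc/cpuinfo"):
    """Return the flag set of the first CPU listed in /proc/cpuinfo."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

On systems with a sufficiently new glibc, running /lib64/ld-linux-x86-64.so.2 --help should also print the supported microarchitecture levels directly.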
Leap 16 combines community and enterprise distribution code by building on the foundation of SUSE Linux Enterprise Server (SLES), bringing source and binary compatibility with it. Users have the option to seamlessly migrate from openSUSE Leap 16 to SLES 16. Developers can use openSUSE Leap to create, test and run workloads for later deployment on SLES.
Leap 16 ships with the new Agama installer, which offers a more modern setup experience over the deprecated YaST-based installer. Leap 16 further supports parallel downloads in the package manager Zypper to speed up software installations and updates.
Migration also gets easier with this major version update. The new openSUSE Migration tool allows users to seamlessly upgrade from Leap 15 to Leap 16 as well as to migrate to Slowroll, Tumbleweed or SLES.
Leap 16 marks the start of a new life-cycle plan. Unless the project makes strategic changes, annual minor releases are expected to continue until 2031 with the release of Leap 16.6. A successor to Leap 16 is expected in 2032. Leap Micro, the project’s immutable server distribution, is adopting the same schedule.
The release comes with SELinux as the Linux Security Module (LSM). AppArmor remains an option that can be selected post-installation. Changes in Leap related to AppArmor and 32-bit support offer a transition period for users.
More advancements will come as Leap 16 evolves toward its final minor release next decade, as automation, containerization, system tooling and hardware encryption mature.
Those who wish to develop for Leap 16 are encouraged to participate in the weekly feature review meeting on Mondays.
Known bugs for Leap 16 can be found on the project’s wiki.
People can leave feedback about the release of Leap 16 at survey.opensuse.org.
Migrating to openSUSE Leap 16.0 with opensuse-migration-tool
Over the years, I have noticed that the biggest challenges during upgrades usually involve 3rd-party repositories, mostly due to their unavailability for the new release or delays in catching up.
Another challenge has been constant changes to distribution repositories. For example, in Leap 15.3 we removed the ports repositories as part of the Closing the Leap Gap initiative and also introduced SLE Update repositories.
Now, with Leap 16.0, update repositories are being removed entirely. Leap Micro 6.X also no longer has dedicated update repositories.
In the past, users had to manually modify distribution repositories. Fortunately, openSUSE-repos automates this process and puts distribution repositories under RPM management. This is now the default behavior for both Leap Micro 6 and Leap 16, and it dramatically simplifies the whole upgrade and distribution migration process.
Why use opensuse-migration-tool
Upgrading your system doesn’t have to be scary or complicated. The opensuse-migration-tool is designed to make the process simple, safe, and predictable. I got inspired by our jeos-firstboot, which uses dialog for smooth interactions. The tool also greets you with a nice green dialog, thanks to a customized dialogrc—giving it that familiar openSUSE look and feel right from the start.
Here’s what it can do for you:
- Install updated distribution repository definitions automatically
- Disable non-distribution repositories to avoid conflicts
- Run zypper dup for a smooth, safe upgrade
- Offer post-upgrade scripts to adopt new defaults, or keep your preferred settings (for example AppArmor vs SELinux)
- Perform pre-migration checks to make sure your system is ready, including verifying x86_64-v2 support
- Reboot
- Optionally use snapper rollback, or simply boot into an older snapshot from GRUB
The tool isn’t limited to just Leap n → Leap n+1. You can also upgrade to SUSE Linux Enterprise, Slowroll, or Tumbleweed. Slowroll → Tumbleweed upgrades work too, and recent requests include Leap Micro → Slowroll Micro. As long as it’s an upgrade, it will simply work.
Want to see it in action? Check out LowTechLinux’s opensuse-migration-tool review on YouTube for a hands-on demo and external validation.
Getting started
In case the tool is not yet installed on your system, install it with sudo zypper in opensuse-migration-tool
If you are using the tool for the first time or just want to check it out, run it in test mode:
/usr/sbin/opensuse-migration-tool --dry-run # no need to use sudo in dry-run mode
This will not show exactly which packages will be upgraded, but it will give you a good idea of what the tool can do, and it will not make any changes to your system.
Once you feel confident, run:
sudo opensuse-migration-tool
The tool will offer to disable non-distribution repositories, which is strongly recommended. It will then trigger zypper dup --r and automatically rerun zypper if any issues occur.
The tool also performs pre-migration system checks. If you are affected by any issues, you might want to run the latest version directly from git. Contributions are welcome.
git clone https://github.com/openSUSE/opensuse-migration-tool.git
cd opensuse-migration-tool
./opensuse-migration-tool --dry-run
Further documentation
More information can be found at openSUSE System Upgrade. This document also suggests how to perform a manual upgrade to 16.0, although I would not recommend it, especially given the positive feedback we have received for the tool.
Make sure to read Leap 16.0 release notes as well as Known bugs wiki prior to upgrading.
Future plans
I plan to provide an optional GTK4 interface that preserves the shared migration logic and power of Bash. This will use custom GTK4 dialogs to keep the openSUSE look and feel, but it will be invoked similarly to zenity. Here’s a sneak peek from the first two dialogs:
People can leave feedback regarding the release of Leap 16 on survey.opensuse.org; the survey will be published after the release reaches general availability today at 12:00 UTC.