

Planet News Roundup

This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.

The community blog feed aggregator lists the featured highlights below from March 13 to March 19.

Blogs this week highlight Agama 19’s major architectural overhaul and new installation modes, the simultaneous release of Krita 5.3 and Krita 6.0, and Hyprland arriving on Tumbleweed with an official installation pattern. Blogs also cover Peter Czánik’s first steps running hardware-accelerated AI on Linux, animation smoothness improvements coming in Plasma 6.7, Mozilla’s new official RPM repository for Firefox Beta on openSUSE, the Himmelblau Workshop for Linux and Entra ID integration in Germany, an offline AI-powered child protection system for Linux using PAM, and more.

Here is a summary and links for each post:

My New Toy: OpenWebUI First Steps

Peter Czánik’s Blog continues his AI mini workstation series by documenting his first steps with Open WebUI on Fedora. He settled on running Ollama directly from the Fedora package repository after upgrading to Fedora 44 beta.

Install Firefox Beta on openSUSE

Victorhck explains how to add Mozilla’s new official RPM repository to install Firefox Beta on openSUSE alongside the stable and Nightly versions. Installing from the official Mozilla repository offers advantages including advanced compiler optimizations, faster updates, and hardened security binaries. The post provides the exact zypper commands needed to import the GPG key and install the package.

The New Features of Plasma 6.6

The KDE Blog takes a detailed look at the new features introduced in the Plasma 6.6 desktop release. The blog highlights a new global theme that automatically switches between light and dark mode by time of day, easier emoji skin tone selection via Meta+., and quick Wi-Fi connection by scanning a QR code with the device’s camera.

Trying Hyprland for the First Time on openSUSE Tumbleweed

Victorhck shares his first hands-on experience with the Hyprland tiling window manager on openSUSE Tumbleweed, which was made much easier by a new official installation pattern contributed by Lubos Kocman. The pattern bundles a minimal but functional setup including waybar, greetd, hyprpaper with an openSUSE wallpaper, and sensible keyboard defaults.

Compiling syslog-ng on an Old Mac

Peter Czánik’s Blog addresses the problem of Homebrew dropping full support for older Intel-based Macs and explains how to compile the latest syslog-ng release on these aging but still functional machines.

My New Toy: First Steps with AI on Linux

Peter Czánik’s Blog documents his first attempts at running hardware-accelerated AI workloads on his HP Z2 Mini under Linux, covering both Ubuntu 25.10 and Fedora 43. While Ubuntu proved difficult due to ROCm packaging limitations, Fedora’s Heterogeneous Computing SIG wiki provided a clear path to getting AMD ROCm working, with both llama-cpp and PyTorch successfully detecting and using the GPU.

Krita 5.3 and Krita 6.0 Released

The KDE Blog announces the simultaneous stable releases of Krita 5.3 and Krita 6.0. Krita 5.3 introduces a fully rewritten text tool with direct canvas editing and advanced OpenType support. Krita 6.0 builds on all of 5.3’s additions while completing the migration to Qt6.

Animation Improvements Coming in Plasma 6.7

The KDE Blog reports on work by KWin developer Vlad Zahorodnii to smooth out animation in the upcoming Plasma 6.7. The fix addresses the “jump” effect that occurs when a brief system stall causes an animation to skip several frames to catch up. The change affects compositor-managed animations such as window open/close effects and desktop transitions.

Himmelblau Workshop – Hands-On Integration on April 22 in Germany

Just Another Tech Blog announces the first official Himmelblau Workshop taking place on April 22 in Göttingen, Germany, which is the day after sambaXP 2026. The hands-on session targets Linux system administrators and IT professionals managing hybrid environments, covering Entra ID authentication, multi-factor authentication, Intune-based device management, and policy enforcement using the current stable Himmelblau release.

Agama 19 – A New Start for the SUSE and openSUSE Installer

Victorhck provides a thorough Spanish-language overview of the Agama 19 release and its significance for SUSE and openSUSE users. The post walks through the architectural renovation, the redesigned web interface with dynamic network configuration, the rewritten user and software management subsystems, and newly added features such as LVM volume group installation and SSH key authentication.

3 Top Features of Plasma 6.6

The KDE Blog spotlights three standout features from the Plasma 6.6 release. The completely redesigned “Plasma Keyboard” on-screen keyboard offers instant appearance, automatic window repositioning, and a mobile-style layout with emoji support and cursor control via the spacebar.

3 Sports Games for Linux

The KDE Blog continues its native Linux games series with three free and open-source sports titles. Freetennis is a realistic tennis simulator featuring advanced AI and LAN/internet multiplayer; Tux Football is a fast-paced 2D arcade football game inspired by Sensible Soccer; and Foobillard++ is a 3D OpenGL billiards simulator supporting 8-ball, 9-ball, snooker, and carom modes. All three games are natively available on Linux at no cost.

VLM + CNN + Agents: Solving Digital Child Protection on Linux Without the Cloud

Alessandro’s Blog presents a technical proposal for implementing Brazil’s “Digital Statute for Children and Adolescents” (ECA Digital) on Linux using a fully offline AI pipeline. The system combines Vision-Language Models, convolutional neural networks for facial age estimation, and intelligent agents integrated directly into Linux’s PAM authentication layer to block privilege escalation by minors.

Linux Saloon 192 – Storm OS Distribution Exploration

The CubicleNate Blog recaps a Linux Saloon podcast episode focused on Storm OS, a new Arch-based Linux distribution created by contributor Ben. Participants discussed what productivity applications the distro would need to attract intermediate users and shared their own experiences testing distributions including openSUSE Tumbleweed.

Time Zone Offsets and Type-Ahead on the Desktop – This Week in Plasma

The KDE Blog translates and covers the latest “This Week in Plasma” development report. Plasma 6.7 gains time zone offset display in the Digital Clock widget, type-ahead file selection on the desktop when KRunner is disabled, and the ability to reverse the system tray item order. Performance improvements include reduced OpenGL context creation per application (saving 10–15 MB RAM each) and optimized direct scanout on fullscreen windows.

I Installed Linux on an Apple Silicon MacBook – No Going Back!

The KDE Blog highlights a video by content creator Guillem Cortés documenting his experience running Fedora Asahi Remix natively on a MacBook Pro with an M1 Pro chip. Battery life, audio, and display brightness perform comparably to macOS, though the screen is currently limited to 60 Hz instead of the original 120 Hz.

openSUSE Tumbleweed Weekly Review – Week 12 of 2026

Victorhck and dimstar report on a very active week for Tumbleweed with seven consecutive snapshots (0312 through 0318) delivered without any issues reaching users. Major updates include Mesa 26.0.2, cURL 8.19.0, Linux kernel 6.19.7 and 6.19.8, KDE Frameworks 6.24.0, GIMP 3.2.0, systemd 259.5, Ruby 4.0.2, and pipewire 1.6.2. Upcoming changes include switching the default UEFI bootloader to systemd-boot, GCC 16 as the default compiler, GNOME 50, glibc 2.43, and LLVM 22.

Agama 19 Released – A New Beginning

The Agama Installer Blog announces Agama 19. The release features a major architectural overhaul that establishes a clean, stable API as the foundation for the web UI, command-line tools, and unattended installs alike. Internal components for user and software management have been rewritten from scratch to replace aging YaST modules, and the web UI has been reorganized around a new overview page.

Passing of bear454

The openSUSE project mourns the passing of long-time community member James Mason. James, also known in the community as bear454, had a connection with the project stretching back to its beginnings. A member since 2009 and an openSUSE Ambassador, he dedicated much of his life’s work to open source. He was often at LinuxFest Northwest helping attendees. He will be deeply missed.

James is pictured second in from the right side.

James pictured at LinuxFest Northwest 2014. Left to right: Peter Linnell, Bryan Lunduke, Jon Hall (with the SUSE Chameleon), James Mason, and Michael Miller.

View more blogs or learn to publish your own on planet.opensuse.org.


My Plasma Desktop for March 2026 #viernesdeescritorio

I continue with the #viernesdeescritorio initiative. Welcome to my Plasma desktop for March 2026, which this time I am presenting from my Slimbook Pro ultrabook, bringing us to 70 installments of sharing “My desktop” every month.


This is installment number 70 in which I show my Plasma 6 desktop in public, a far-from-negligible number of posts that keeps growing steadily.

As I said in the introduction, I am once again doing it on my Slimbook Pro, which runs KDE neon with Plasma 6.6.3, on KDE Frameworks 6.24.0 and Qt 6.10.2. The display server is Wayland and the kernel is 6.17.0-19-generic (64-bit).

On this machine I keep using a dark look, with the Breeze Dark global theme (the one Plasma ships by default) and the Breeze icons.

I keep the panel on the left side, since I don’t have two monitors that would make me move back and forth, and it contains, from top to bottom:

  • Virtual desktop pager
  • Digital clock
  • Weather Plus widget
  • Icons-only task manager
  • System tray
  • A small notepad for quick access to the things that cross my mind and that I need to jot down to remember later.
  • Flex Hub, to control the look of my Plasma.
  • Application launcher.

For the wallpaper I also have the Wallhaven plugin enabled, which connects to the https://wallhaven.cc/ website and changes the wallpaper according to the parameters you define.

My Plasma desktop for February 2026 #viernesdeescritorio

This time I have chosen an iconic Storm wallpaper, one of my favorite X-Men characters, especially from the Claremont era. I think that contrast of dark and white makes it look very elegant.

The wallpaper also holds the ClearClock widget by Modern Clear Clock, since this plasmoid picks up my system’s language settings correctly, which is why the day of the week appears in Valencian.

The result of my March 2026 Plasma desktop is a dark and, as always, functional working environment, which you can see in the image below (click on it to view it a bit larger).


The post My Plasma Desktop for March 2026 #viernesdeescritorio appeared first on KDE Blog.


My new toy: Open WebUI first steps

Once I got hardware-accelerated AI working under Linux on my AI mini workstation from HP, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with Open WebUI on Fedora Linux.

Open WebUI talking about central log collection :-)

Everything in containers

As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers. I found a Docker compose setup which was tested on Fedora Linux 43 according to its documentation: https://github.com/jesuswasrasta/ollama-rocm-webui-docker. As I (also) use Fedora 43, it sounded like a good choice.
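For context, a ROCm-enabled ollama + Open WebUI compose file typically looks roughly like the sketch below. This is an illustrative outline under my own assumptions (image tags, port mapping, and the example override value), not the linked repository’s actual file:

```yaml
# Illustrative sketch only — not the file from the linked repository.
services:
  ollama:
    image: ollama/ollama:rocm        # ROCm-enabled ollama build
    devices:
      - /dev/kfd                     # AMD GPU compute device
      - /dev/dri                     # render nodes
    environment:
      # Must match your GPU; with a wrong value, inference may fall back to the CPU.
      - HSA_OVERRIDE_GFX_VERSION=11.0.2
    volumes:
      - ollama:/root/.ollama

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                  # web UI on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```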

It worked; however, I quickly realized that hardware acceleration for AI was not working. Instead, most CPU cores were running close to 100%. It was a good test for cooling: I could hear the miniature box from the next room through closed doors :-)

ollama eating CPU :-)

As it turned out, the content of the HSA_OVERRIDE_GFX_VERSION environment variable was incorrect. When I set it according to the docs, hardware acceleration still did not work. With the environment variable removed, ollama found the hardware but never answered a prompt.
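As an aside, HSA_OVERRIDE_GFX_VERSION expects a “major.minor.stepping” string derived from the GPU’s ISA name (as printed by `rocminfo`). The helper below sketches that commonly used mapping; the function name is mine, and the mapping is a community convention rather than anything from the post.

```shell
#!/usr/bin/env bash
# Hypothetical helper: convert an ISA name such as "gfx1030" into the
# "major.minor.stepping" form that HSA_OVERRIDE_GFX_VERSION expects.
# The last character is the (possibly hex) stepping, the one before it
# the minor version, and everything else the major version.
gfx_to_override() {
    local isa="${1#gfx}"               # e.g. "1030", "90a"
    local step="${isa: -1}"            # stepping, possibly hex ("a")
    local minor="${isa: -2:1}"
    local major="${isa%??}"
    printf '%s.%s.%s\n' "$major" "$minor" "$((16#$step))"
}

# Example: export the override for an RX 6000-series card (gfx1030).
export HSA_OVERRIDE_GFX_VERSION="$(gfx_to_override gfx1030)"  # 10.3.0
```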

Ollama from the system

For my next experiment I kept using Open WebUI from the container, but installed ollama directly on the system from the Fedora package repository. The good news? Some smaller models ran really fast, using hardware acceleration. The bad news: most models failed to load with an error message saying the model format is unknown.

Update to Fedora 44 beta

I guessed that ollama was too old in Fedora 43. Solution? Update the whole system to Fedora 44 beta. It seems to have helped. A lot more models work now, including the largest freely available Granite models from IBM.

Why Granite?

First of all: I’m an IBM Champion, so using IBM technologies comes naturally. But it is also a personal choice, because I learned some of the background stories from a friend working at IBM on LSF.

What I’ve been showing here is AI inferencing on my HP AI system. But before a model can be used for inferencing, it needs to be trained. These models are trained on large, GPU-rich compute clusters. To get an idea of the scale of such clusters, see this research paper (https://arxiv.org/abs/2407.05467). It discusses the IBM Blue Vela system, which supports IBM’s GenAI mission. What’s interesting is that Blue Vela uses a more traditional HPC software stack, including IBM LSF for workload management and Storage Scale (GPFS) for rapid access to large data sets.

AI in a miniature box :-)

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.



Install Firefox Beta on #openSUSE

Let’s look at how to add the official Firefox repository to install the Beta version of the Firefox browser.

Image with the different Firefox browser logos for the final, Beta, and Nightly versions

In a previous article I already covered how to install the Firefox Nightly version on openSUSE via the official repository that Mozilla maintains to make that version of the browser available to users.

Now they have also created an official repository from which we can install Firefox Beta. These pre-release versions of the Firefox browser can be installed alongside the stable version and coexist without problems. This lets us try out the browser’s upcoming features.

Let’s see how to add the Mozilla repository to install Firefox Beta on openSUSE. If you already have Mozilla’s RPM repository configured, you can install Firefox Beta directly with zypper; if not, follow these steps.

Installing Firefox Beta from Mozilla’s official repositories gives us:

  • Better performance thanks to advanced, official compiler-based optimizations
  • Faster updates, because RPM packaging is integrated into Firefox’s release process
  • Hardened binaries, with all security settings enabled at build time
  • No need to create your own .desktop file

Let’s add the signing key and the corresponding repository. Open a terminal and run the following commands:

sudo rpm --import https://packages.mozilla.org/rpm/firefox/signing-key.gpg
sudo zypper ar --gpgcheck-allow-unsigned-repo https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-beta

The language pack should be selected automatically as a dependency.

Remember that you can have all three versions of the Firefox browser installed on your machine without problems and use whichever one you like, to try new features, report issues, or whatever you want.

Links of interest


The new features of Plasma 6.6

On February 17, Plasma 6.6 was released, the best desktop in the known universe (according to us). Quite some time has passed, so it’s time to talk about the new Plasma 6.6 features that the promo team chose to highlight at the top of the announcement.


This new version of Plasma 6 remains focused on improving usability and accessibility, which is why it offers new features that, combining those two priorities, make life easier in our work environments.

3 standout features of Plasma 6.6

Plasma 6.6 brings a new global theme that switches between light and dark mode depending on the time of day, as well as the ability to save themes (could that already be done in Plasma 6.5? I don’t remember, and I’ll skip looking it up).

Also, in themes such as Breeze you can now configure the contrast of the frames, to make them more or less visible to the user’s taste.

It is now also easier to pick skin tones for emoji, using Meta+.

Connecting to a Wi-Fi network has also been streamlined: if your machine has a camera, you can now connect quickly just by scanning the network’s QR code.

Another interesting feature, which must be enabled in the icons-only task manager settings, is that you can adjust the volume of any application playing sound by hovering over its icon and scrolling.

You can also be more efficient and save yourself a click by enabling “Open on hover” in the Window List widget. As a bonus, you can also filter out windows that are not on the current desktop or activity.

Finally, note that if you hold down the Alt key and double-click a file or folder on the desktop, its properties appear.

The post The new features of Plasma 6.6 appeared first on KDE Blog.


Trying #Hyprland for the first time on #openSUSE Tumbleweed

I try the Hyprland window manager for the first time on my openSUSE Tumbleweed.

Screenshot of Hyprland on openSUSE; two terminals are shown displaying system information

For some time now, a new tiling window manager called Hyprland has been standing out in the GNU/Linux world. What’s new is that it runs on Wayland and offers eye-catching animations and window compositing, since every aspect of the desktop can be configured.

I had already used i3wm as a tiling window manager for a while, but Hyprland promised a qualitative, up-to-date leap over that option. So I wanted to try it and see what it looks like…

I tried it a while ago, installing Hyprland on an openSUSE test machine from its repositories, and the experience was… dreadful. Hyprland needs a lot of configuration and add-ons before the system starts to become usable.

But something made me change my mind and try Hyprland again as a newcomer…

This past February 2026, Lubos Kocman posted a message on the openSUSE mailing list announcing that a Hyprland installation “pattern” had been created, to offer that alternative to openSUSE users. You can find the email at this link:

In short, he explained that a Hyprland installation “pattern” would be created, including certain configurations and add-ons needed to start using the system right after installation.

That installation pattern would include:

  • greetd with gtkgreet + cage as a simple login manager (sddm was avoided because it drags in some 150 additional dependencies).
  • Hyprland with welcome “splash quotes” by Gertjan.
  • waybar, well integrated with the system.
  • opensuse-welcome-launcher and the static opensuse-welcome binary.
  • hyprland-qtutils, to keep Hyprland from complaining about its absence and to offer a dialog when a Hyprland update is available.
  • hyprpaper with an openSUSE wallpaper based on a Kraith wallpaper for Hyprland (I haven’t seen it myself).

Default keyboard shortcuts:

  • kitty terminal with Super+q.
  • Thunar file manager with Super+e.
  • Screenshots with grim using the Print Screen key.
  • nwg-drawer launcher with Super+r.
  • Super+m logs out of the session.

Configuration philosophy

  • The configuration is meant to be minimal, without imposing too much on the user.
  • There is even some debate about whether qtutils could be dropped.
  • The “branding” package installs default configurations under /etc/xdg, always respecting the user’s own configuration.

So now I did want to try it again. I opened Myrlyn, openSUSE’s graphical package manager, went to the Patterns section and selected the Hyprland compositor and Hyprland plugins patterns, which selects a lot of additional software for installation.

Once all the software was installed, I logged out of my current session and logged into my brand-new Hyprland session. Well, this was another story. Now there was something much closer to a complete system.

Just arrived in Hyprland? This is what you have to do

The first thing to know is that all these tiling window managers have a configuration file from which all, or most, aspects and behaviors of the system are managed: keyboard shortcuts, window behavior, workspaces, and so on.

On openSUSE Tumbleweed, the first thing I did was edit the file /home/<my_user>/.config/hypr/hyprland.conf.

If it doesn’t exist, you can create the path and copy the file found at /usr/share/hypr/hyprland.conf, and then edit whatever you want in that file in your home directory.
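Translated into commands, that amounts to something like the sketch below. The helper function is hypothetical (my own naming), using the paths from the post, and it refuses to overwrite an existing config:

```shell
#!/usr/bin/env bash
# Copy the packaged default Hyprland config into the user's home,
# creating ~/.config/hypr/ if needed and keeping any existing file.
seed_hypr_config() {
    local src="${1:-/usr/share/hypr/hyprland.conf}"
    local dst="${2:-$HOME/.config/hypr/hyprland.conf}"
    mkdir -p "$(dirname "$dst")"          # create the path if missing
    [ -e "$dst" ] || cp "$src" "$dst"     # never clobber user edits
}

# Usage: seed_hypr_config    # then edit ~/.config/hypr/hyprland.conf
```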

This is the first thing I changed:

  • The keyboard layout, set to Spanish: the variable kb_layout = es
  • I want to keep using Dolphin as my file manager, konsole as my terminal, and wofi as my program launcher. So:
    • $terminal = konsole
    • $fileManager = dolphin
    • $menu = wofi --show drun --insensitive
  • I don’t like Hyprland’s cursor at all, so I configured a well-known one, Adwaita (if you have it installed on your machine; use whichever you prefer):
    • env = XCURSOR_THEME,Adwaita
    • env = XCURSOR_SIZE,16
    • env = HYPRCURSOR_THEME,Adwaita
    • env = HYPRCURSOR_SIZE,16
  • I reduced the “gaps”, i.e. the spacing between windows and between them and the screen edge, as well as the window border size:
    • gaps_in = 3
    • gaps_out = 10
    • border_size = 1

Keyboard shortcuts

Just as in i3wm, this desktop is designed to be driven from the keyboard. Pressing the “Super” key (the one with the Windows logo, if you have it) together with another key launches the default applications. In my case:

  • Super + Q → Opens the terminal. (Try opening several to watch them make room for each other.)
  • Super + C → Closes the current window. Take the chance to close all the terminals you opened before.
  • Super + M → Runs a command that logs out of the session.
  • Super + E → Opens the file manager, Dolphin in my case.
  • Super + V → Makes the active window float so you can move it around the desktop.
  • Super + R → Opens the application launcher menu, wofi in my case.
  • Super + Space → Same as the previous one.

Of course, you can change any of these shortcuts to your liking.

To move focus from one window to another, use the Super key plus the arrow keys.

To switch workspaces, use the Super key plus the numbers 1 to 0, giving you 10 workspaces in which to open your windows.

If a window is open on workspace 1 and you want to move it to workspace 2, Super + Shift + 2 sends it there, and likewise for any of the available workspaces.

And with that you can more or less start working. Save your changes and, if you have made no errors, they take effect immediately. Here is my initial configuration file:

################
### MONITORS ###
################

monitor=,preferred,auto,auto

###################
### MY PROGRAMS ###
###################

$terminal = konsole
$fileManager = dolphin
$menu = wofi --show drun --insensitive

#################
### AUTOSTART ###
#################

exec-once = nm-applet
exec-once = waybar
exec-once = hyprpaper

#############################
### ENVIRONMENT VARIABLES ###
#############################

env = XCURSOR_THEME,Adwaita
env = XCURSOR_SIZE,16
env = HYPRCURSOR_THEME,Adwaita
env = HYPRCURSOR_SIZE,16

#####################
### LOOK AND FEEL ###
#####################

general {
    gaps_in = 3
    gaps_out = 10
    border_size = 1

    col.active_border = rgba(33ccffee) rgba(00ff99ee) 45deg
    col.inactive_border = rgba(595959aa)

    resize_on_border = true
    allow_tearing = false

    layout = dwindle
}

decoration {
    rounding = 5
    rounding_power = 2

    active_opacity = 1.0
    inactive_opacity = 0.95

    shadow {
        enabled = true
        range = 4
        render_power = 3
        color = rgba(1a1a1aee)
    }

    blur {
        enabled = true
        size = 3
        passes = 1
        vibrancy = 0.1696
    }
}

#################
### ANIMATIONS ##
#################

animations {
    enabled = yes

    bezier = easeOutQuint,   0.23, 1,    0.32, 1
    bezier = easeInOutCubic, 0.65, 0.05, 0.36, 1
    bezier = linear,         0,    0,    1,    1
    bezier = almostLinear,   0.5,  0.5,  0.75, 1
    bezier = quick,          0.15, 0,    0.1,  1

    animation = global,        1,     6,    default
    animation = border,        1,     5,    easeOutQuint
    animation = windows,       1,     4,    easeOutQuint
    animation = fade,          1,     2,    quick
    animation = workspaces,    1,     2,    almostLinear
}

#################
### LAYOUTS #####
#################

dwindle {
    pseudotile = true
    preserve_split = true
}

master {
    new_status = master
}

misc {
    force_default_wallpaper = -1
    disable_hyprland_logo = false
}

#############
### INPUT ###
#############

input {
    kb_layout = es
    follow_mouse = 2
    sensitivity = 0

    touchpad {
        natural_scroll = false
        tap-to-click = true
    }
}

gesture = 3, horizontal, workspace

device {
    name = epic-mouse-v1
    sensitivity = -0.5
}

###################
### KEYBINDINGS ###
###################

$mainMod = SUPER

bind = $mainMod, Q, exec, $terminal
bind = $mainMod, C, killactive
bind = $mainMod, M, exec, command -v hyprshutdown >/dev/null 2>&1 && hyprshutdown || hyprctl dispatch exit
bind = $mainMod, E, exec, $fileManager
bind = $mainMod, V, togglefloating
bind = $mainMod, R, exec, $menu
bind = $mainMod, SPACE, exec, $menu
bind = $mainMod, P, pseudo
bind = $mainMod, J, togglesplit

# Focus
bind = $mainMod, left, movefocus, l
bind = $mainMod, right, movefocus, r
bind = $mainMod, up, movefocus, u
bind = $mainMod, down, movefocus, d

# Workspaces
bind = $mainMod, 1, workspace, 1
bind = $mainMod, 2, workspace, 2
bind = $mainMod, 3, workspace, 3
bind = $mainMod, 4, workspace, 4
bind = $mainMod, 5, workspace, 5
bind = $mainMod, 6, workspace, 6
bind = $mainMod, 7, workspace, 7
bind = $mainMod, 8, workspace, 8
bind = $mainMod, 9, workspace, 9
bind = $mainMod, 0, workspace, 10

bind = $mainMod SHIFT, 1, movetoworkspace, 1
bind = $mainMod SHIFT, 2, movetoworkspace, 2
bind = $mainMod SHIFT, 3, movetoworkspace, 3
bind = $mainMod SHIFT, 4, movetoworkspace, 4
bind = $mainMod SHIFT, 5, movetoworkspace, 5
bind = $mainMod SHIFT, 6, movetoworkspace, 6
bind = $mainMod SHIFT, 7, movetoworkspace, 7
bind = $mainMod SHIFT, 8, movetoworkspace, 8
bind = $mainMod SHIFT, 9, movetoworkspace, 9
bind = $mainMod SHIFT, 0, movetoworkspace, 10


# Scratchpad
bind = $mainMod, S, togglespecialworkspace, magic
bind = $mainMod SHIFT, S, movetoworkspace, special:magic

# Scroll workspaces
bind = $mainMod, mouse_down, workspace, e+1
bind = $mainMod, mouse_up, workspace, e-1

# Mouse move/resize
bindm = $mainMod, mouse:272, movewindow
bindm = $mainMod, mouse:273, resizewindow

# Volume / brightness
bindel = ,XF86AudioRaiseVolume, exec, wpctl set-volume -l 1 @DEFAULT_AUDIO_SINK@ 5%+
bindel = ,XF86AudioLowerVolume, exec, wpctl set-volume @DEFAULT_AUDIO_SINK@ 5%-
bindel = ,XF86AudioMute, exec, wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle
bindel = ,XF86AudioMicMute, exec, wpctl set-mute @DEFAULT_AUDIO_SOURCE@ toggle
bindel = ,XF86MonBrightnessUp, exec, brightnessctl -e4 -n2 set 5%+
bindel = ,XF86MonBrightnessDown, exec, brightnessctl -e4 -n2 set 5%-

# Media
bindl = , XF86AudioNext, exec, playerctl next
bindl = , XF86AudioPause, exec, playerctl play-pause
bindl = , XF86AudioPlay, exec, playerctl play-pause
bindl = , XF86AudioPrev, exec, playerctl previous
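The volume, brightness, and media bindings above call external helper tools that are not part of Hyprland itself. On Tumbleweed they can be pulled in roughly like this (the package names are my assumption; `wpctl` ships as part of WirePlumber, PipeWire's session manager):

```shell
# wpctl comes with wireplumber; brightnessctl and playerctl
# provide the backlight and media-player controls used above
sudo zypper install wireplumber brightnessctl playerctl
```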

####################
### WINDOW RULES ###
####################

windowrule {
    name = suppress-maximize-events
    match:class = .*
    suppress_event = maximize
}

windowrule {
    name = fix-xwayland-drags
    match:class = ^$
    match:title = ^$
    match:xwayland = true
    match:float = true
    match:fullscreen = false
    match:pin = false
    no_focus = true
}

windowrule {
    name = move-hyprland-run
    match:class = hyprland-run
    move = 20 monitor_h-120
    float = yes
}

And with this you can start taking your first steps in Hyprland, just as I have. If I see that the article is well received, I will write another one about how I modified the waybar configuration to make it look the way it does in the screenshot that opens this article.

Configuration on systems like these can become an endless, time-consuming affair: polishing this detail, adding that touch, and so on. It can be a good hobby for learning, or it can turn into a time sink.

My advice is to take it easy and learn about Hyprland little by little. openSUSE has now made it easy for us to take our first steps with Hyprland.


My new toy: first steps with AI on Linux

Ever since I bought my AI mini workstation from HP, my goal has been to run hardware-accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!

I have been using various AI tools for a while now: generating pictures of impossible situations, like a dinosaur climbing the Hungarian Parliament building, finding information where a simple web search is useless, or having syslog-ng code explained to me. All of these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want to get a bottom-up view of the various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.

AI in a miniature box :-)

Testing AI on Ubuntu

As mentioned in my blog about installing Ubuntu, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about AI on the Ryzen AI Max+ 395 which mentioned using 25.10, so I installed that version instead of the LTS. It installed without any trouble, and 3D graphics worked out of the box.

However, AI is a different story. ROCm, AMD's hardware acceleration stack for AI workloads, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described there did not work for me: containerization brought in various problems with permissions, software availability, and so on. An experienced distrobox user could most likely resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.

Getting started with hardware accelerated AI on Fedora

Next, I turned to Fedora Linux 43. The wiki page of the Fedora Heterogeneous Computing Special Interest Group proved to be a good starting point. Fedora has ROCm packaged as part of the distro, and the wiki page gives clear instructions on how to get started.

Once I had set up user rights and installed the necessary packages, I was able to get some information about my hardware. You can see the output of rocminfo and rocm-clinfo at the bottom of this blog. I did not want to shorten it, but given the many lines of output, I was not sure anyone would read the rest of my blog :-)
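Based on the Fedora SIG wiki, the setup roughly looks like the sketch below. The group and package names are my assumptions from memory; check the wiki for the authoritative list:

```shell
# Allow the current user to access the GPU compute devices
# (log out and back in afterwards for the groups to take effect)
sudo usermod -aG render,video "$USER"

# Install the ROCm diagnostic tools from the Fedora repositories
sudo dnf install rocminfo rocm-clinfo
```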

Testing with llama

Of course, seeing information about the hardware is nice, but it is even better to see it in action. The Ubuntu ROCm tutorial mentioned llama.cpp, so I started with that. Luckily, Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I also installed huggingface-hub, likewise from a package:

dnf install python3-huggingface-hub llama-cpp

This allowed me to download the model mentioned in the tutorial and ask the downloaded LLM a few questions. For now I just reused the sample command line, but based on the output, llama.cpp found the hardware and used it. Next up: learning more about the available models.
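The download itself can be done with the CLI that ships in the huggingface-hub package; a sketch of what that looks like (the repository name here is my assumption of a common GGUF mirror, not taken from the original post):

```shell
# Fetch the quantized GGUF model file into ~/models
# (repo name is illustrative; substitute the one from your tutorial)
huggingface-cli download TheBloke/Llama-2-7B-GGUF \
    llama-2-7b.Q4_K_M.gguf --local-dir ~/models
```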

You can find the output of the following command at the end of this blog:

llama-cli -m ~/models/llama-2-7b.Q4_K_M.gguf --no-mmap -ngl 99 -p "Explain quantum computing in simple terms:" -n 256

Testing with pytorch

When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested trying it with PyTorch. Luckily, this was also available as a ready-to-install package for Fedora:

dnf install python3-torch

I was quite surprised when the above command installed 8 GB worth of RPM packages (texlive accounting for a good part of it). I do not know much about PyTorch, but I did a quick test anyway. Here is the really complex Python code I built based on the documentation:

import torch

# Create a random 5x3 tensor
x = torch.rand(5, 3)
print(x)

# On ROCm builds, torch.cuda.is_available() also reports AMD GPUs
print('Is hw AI accel available')
print(torch.cuda.is_available())

And here is the output from the above code:

tensor([[0.1034, 0.0183, 0.1233],
        [0.1787, 0.0097, 0.8426],
        [0.2872, 0.6351, 0.8468],
        [0.8226, 0.2991, 0.8539],
        [0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True

It’s simple, but looks promising :-)
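Going one step further than the availability check, a small sketch like the following (my own addition, not from the original test) runs an actual computation on the GPU when PyTorch can see one, and falls back to the CPU otherwise:

```python
import torch

# Pick the accelerator if PyTorch can see one (on ROCm builds the
# AMD GPU shows up through the same torch.cuda interface), else CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.rand(1024, 1024, device=device)
b = torch.rand(1024, 1024, device=device)
c = a @ b  # matrix multiply runs on the chosen device

print(device, c.shape)
```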

Outputs

Output of rocminfo and rocm-clinfo

czanik@fedora:~$ rocminfo 
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.7
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
XNACK enabled:           NO
DMAbuf Support:          YES
VMM Support:             YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      49152(0xc000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   5187                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            32                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 4                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1151                            
  Uuid:                    GPU-XX                             
  Marketing Name:          Radeon 8060S Graphics              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 5510(0x1586)                       
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   50432                              
  Internal Node ID:        1                                  
  Compute Unit:            40                                 
  SIMDs per CU:            2                                  
  Shader Engines:          2                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 34                                 
  SDMA engine uCode::      18                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1151         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
    ISA 2                    
      Name:                    amdgcn-amd-amdhsa--gfx11-generic   
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    aie2                               
  Uuid:                    AIE-XX                             
  Marketing Name:          AIE-ML                             
  Vendor Name:             AMD                                
  Feature:                 AGENT_DISPATCH                     
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        1(0x1)                             
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          64(0x40)                           
  Queue Type:              SINGLE                             
  Node:                    0                                  
  Device Type:             DSP                                
  Cache Info:              
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          0(0x0)                             
  Max Clock Freq. (MHz):   0                                  
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            0                                  
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:0                                  
  Memory Properties:       
  Features:                AGENT_DISPATCH
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: KERNARG, COARSE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65536(0x10000) KB                  
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*** Done ***             

and

czanik@fedora:~$ rocm-clinfo 
Number of platforms:				 1
  Platform Profile:				 FULL_PROFILE
  Platform Version:				 OpenCL 2.1 AMD-APP (3649.0)
  Platform Name:				 AMD Accelerated Parallel Processing
  Platform Vendor:				 Advanced Micro Devices, Inc.
  Platform Extensions:				 cl_khr_icd cl_amd_event_callback 


  Platform Name:				 AMD Accelerated Parallel Processing
Number of devices:				 1
  Device Type:					 CL_DEVICE_TYPE_GPU
  Vendor ID:					 1002h
  Board name:					 Radeon 8060S Graphics
  Device Topology:				 PCI[ B#197, D#0, F#0 ]
  Max compute units:				 20
  Max work items dimensions:			 3
    Max work items[0]:				 1024
    Max work items[1]:				 1024
    Max work items[2]:				 1024
  Max work group size:				 256
  Preferred vector width char:			 4
  Preferred vector width short:			 2
  Preferred vector width int:			 1
  Preferred vector width long:			 1
  Preferred vector width float:			 1
  Preferred vector width double:		 1
  Native vector width char:			 4
  Native vector width short:			 2
  Native vector width int:			 1
  Native vector width long:			 1
  Native vector width float:			 1
  Native vector width double:			 1
  Max clock frequency:				 2900Mhz
  Address bits:					 64
  Max memory allocation:			 57070749280
  Image support:				 Yes
  Max number of images read arguments:		 128
  Max number of images write arguments:		 8
  Max image 2D width:				 16384
  Max image 2D height:				 16384
  Max image 3D width:				 16384
  Max image 3D height:				 16384
  Max image 3D depth:				 8192
  Max samplers within kernel:			 16
  Max size of kernel argument:			 1024
  Alignment (bits) of base address:		 2048
  Minimum alignment (bytes) for any datatype:	 128
  Single precision floating point capability
    Denorms:					 Yes
    Quiet NaNs:					 Yes
    Round to nearest even:			 Yes
    Round to zero:				 Yes
    Round to +ve and infinity:			 Yes
    IEEE754-2008 fused multiply-add:		 Yes
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 32768
  Global memory size:				 67142057984
  Constant buffer size:				 57070749280
  Max number of constant args:			 8
  Local memory type:				 Local
  Local memory size:				 65536
  Max pipe arguments:				 16
  Max pipe active reservations:			 16
  Max pipe packet size:				 1236174432
  Max global variable size:			 57070749280
  Max global variable preferred total size:	 67142057984
  Max read/write image args:			 64
  Max on device events:				 1024
  Queue on device max size:			 8388608
  Max on device queues:				 1
  Queue on device preferred size:		 262144
  SVM capabilities:				 
    Coarse grain buffer:			 Yes
    Fine grain buffer:				 Yes
    Fine grain system:				 No
    Atomics:					 No
  Preferred platform atomic alignment:		 0
  Preferred global atomic alignment:		 0
  Preferred local atomic alignment:		 0
  Kernel Preferred work group size multiple:	 32
  Error correction support:			 0
  Unified memory for Host and Device:		 1
  Profiling timer resolution:			 1
  Device endianess:				 Little
  Available:					 Yes
  Compiler available:				 Yes
  Execution capabilities:				 
    Execute OpenCL kernels:			 Yes
    Execute native function:			 No
  Queue on Host properties:				 
    Out-of-Order:				 No
    Profiling :					 Yes
  Queue on Device properties:				 
    Out-of-Order:				 Yes
    Profiling :					 Yes
  Platform ID:					 0x7ffb97d11d80
  Name:						 gfx1151
  Vendor:					 Advanced Micro Devices, Inc.
  Device OpenCL C version:			 OpenCL C 2.0 
  Driver version:				 3649.0 (HSA1.1,LC)
  Profile:					 FULL_PROFILE
  Version:					 OpenCL 2.0 
  Extensions:					 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

Output from llama

root@fedora:~# llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V2
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.80 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 4096
print_info: n_embd_v_gqa     = 4096
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 11008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 7B
print_info: model params     = 6.74 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  3820.94 MiB
load_tensors:          CPU model buffer size =    70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context:  ROCm_Host  output buffer size =     0.12 MiB
llama_kv_cache_unified:      ROCm0 KV buffer size =  2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      ROCm0 compute buffer size =   288.00 MiB
llama_context:  ROCm_Host compute buffer size =    16.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 | 

sampler seed: 2232334333
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1

 Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter

llama_perf_sampler_print:    sampling time =       4.27 ms /   265 runs   (    0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print:        load time =     631.46 ms
llama_perf_context_print: prompt eval time =      63.57 ms /     9 tokens (    7.06 ms per token,   141.57 tokens per second)
llama_perf_context_print:        eval time =    7110.09 ms /   255 runs   (   27.88 ms per token,    35.86 tokens per second)
llama_perf_context_print:       total time =    7184.25 ms /   264 tokens
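The `sampler chain` line in the log shows the order in which llama.cpp filters candidate tokens before drawing one. As a rough illustration of how a few of those stages work, here is a simplified, self-contained sketch of top-k filtering, temperature scaling, and top-p (nucleus) sampling, with defaults taken from the sampler params above. This is not llama.cpp's actual implementation, and the stage order is simplified relative to the full chain shown in the log.

```python
import math
import random

def sample_next_token(logits, top_k=40, top_p=0.95, temp=0.8):
    """Pick a token id from raw logits via a simplified
    top-k -> temperature -> softmax -> top-p chain."""
    # Keep only the top_k highest-scoring candidates.
    candidates = sorted(enumerate(logits), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Temperature scaling, then a numerically stable softmax.
    scaled = [(tok, logit / temp) for tok, logit in candidates]
    m = max(s for _, s in scaled)
    exps = [(tok, math.exp(s - m)) for tok, s in scaled]
    total = sum(e for _, e in exps)
    probs = [(tok, e / total) for tok, e in exps]

    # top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break

    # Renormalize over the survivors and draw one token.
    norm = sum(p for _, p in kept)
    r = random.random() * norm
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With a strongly peaked distribution the nucleus collapses to a single candidate, so the sketch behaves deterministically; with flat logits it draws uniformly from the top-k survivors.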

Closing words

These are just my first steps. Most of the time I was not even fully aware of what I was doing; I just reused some sample command lines and code. Still, these experiments were good enough to show that AI works on Linux as well, not just on Windows.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this post via one of the contacts listed in the upper right corner, and you can read the rest of the series under the toy tag.

Krita 5.3 and Krita 6.0 Released

Development of the KDE Community's flagship applications never stops. And long may it continue! Today I am pleased to share with you that Krita 5.3 and Krita 6.0 have been released, very happy news that demonstrates the healthy state of development of the best application for digital artists (in our opinion, and that of many others).

Krita has nothing to envy other image-editing applications; it is the perfect complement to GIMP and Inkscape if we want a complete digital image-editing suite.

The previous paragraph could be summed up in a small table:

So nothing could make me prouder than seeing that development of all three applications remains active. The latest stable version of GIMP was published on January 24 of this year, while the latest stable version of Inkscape (5.2.15) was published on January 28.

Well then, today it is time to talk about the evolution of Krita, which last Tuesday released stable versions in two of its branches: 5.x and 6.x.

In the developers' own words:

Krita 5.3/6.0 is the result of many years of work by the Krita developers.

Some features have been rewritten from scratch, while others make their first appearance.
Enjoy the completely revamped text feature: editing directly on the canvas, full OpenType support, and text flow inside shapes. It is now easier than ever to create vector-based panels for comic pages.
The tools have been extended: for example, the fill tool can now close gaps. The transform tool's liquify mode is much faster.

There are new filters: a color-propagation filter and a reset-transparency filter. Support for HDR painting has been improved.

The recorder panel can now work in real time. File-format compatibility has also been improved, such as support for text objects in PSD files.

And much, much, much more!
For a full description of everything new, see our release notes.

Video courtesy of David Revoy.

Among the most notable new features of Krita 5.3, taken from the release notes, we can mention:

  • Completely rewritten text tool, allowing direct on-canvas editing, fitting text into vector shapes, and advanced OpenType support.
  • New knife tool for splitting or merging vector objects, ideal for comic panels.

As for Krita 6.0, which also inherits all of Krita 5.3's features, there are big changes under the hood, such as:

  • Full migration to Qt6 for better performance and modern compatibility, although with some bugs still pending.
  • Native Wayland support on Linux with color management, fractional scaling, and HDR (10-bit).
  • Stabilizer improvements for pixel art in the freehand brush tool, plus optimized liquify performance.
  • New soft-texture mode in the pattern brush engines, and more filters such as color propagation.

More information: Krita

The post Krita 5.3 and Krita 6.0 Released was first published on KDE Blog.