#openSUSE Tumbleweed review of week 38 of 2020
Tumbleweed is a continuously updated “rolling release” distribution. Here you can keep up with its latest news.

openSUSE Tumbleweed is the “rolling release”, continuously updated version of the openSUSE GNU/Linux distribution.
Let's go over the new features that have reached the repositories these past weeks.
You can read the original announcement, published under a CC-BY-SA license, on Dominique Leuenberger's blog at this link:
One more week at the average we are used to, with 5 new “snapshots” published (0910, 0914, 0915, 0916 and 0917). Among other things, these snapshots included updates such as:
- KDE Applications 20.08.1
- Qt 5.15.1
- PackageKit 1.2.1
- Systemd 246.4
- Virt-Manager 3.0.0
And soon we will be able to enjoy these updates in our repositories:
- KDE Frameworks 5.74.0
- GNOME 3.36.6
- Linux kernel 5.8.9
- glibc 2.32
- binutils 2.35
- gettext 0.21
- bison 3.7.1
- SELinux 3.1
- openssl 3.0
If you want to stay on the cutting edge with up-to-date, tested software, use openSUSE Tumbleweed, the rolling release option of the openSUSE GNU/Linux distribution.
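By the way, keeping a Tumbleweed installation up to date usually comes down to a full distribution upgrade with zypper; a minimal sketch, to be run with superuser privileges:
# refresh the repository metadata and apply the full distribution upgrade,
# the recommended way to update a Tumbleweed system
zypper refresh
zypper dup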
Stay up to date and, as you know: Have a lot of fun!!
Links of interest
- Why should you use openSUSE Tumbleweed?
- zypper dup on Tumbleweed does all the work when updating
- What is the best command to update Tumbleweed?
- Check the rating of Tumbleweed “snapshots”
- What is openQA testing?
- http://download.opensuse.org/tumbleweed/iso/
- https://es.opensuse.org/Portal:Tumbleweed

——————————–
openSUSE Tumbleweed – Review of the week 2020/38
Dear Tumbleweed users and hackers,
An average week, with an average number of 5 snapshots (0910, 0914, 0915, 0916, and 0917 – with 0917 just being synced out). The content of these snapshots included:
- KDE Applications 20.08.1
- Qt 5.15.1
- PackageKit 1.2.1
- Systemd 246.4
- Virt-Manager 3.0.0
The staging projects around glibc are barely moving, and as such, the stagings still include:
- KDE Frameworks 5.74.0
- GNOME 3.36.6
- Linux kernel 5.8.9
- glibc 2.32
- binutils 2.35
- gettext 0.21
- bison 3.7.1
- SELinux 3.1
- openssl 3.0
Boston, a minimalist icon theme for Plasma
It has been quite a while since icon themes appeared on the blog. That does not mean they have stopped showing up in the KDE Store, though. So today I have the pleasure of presenting an icon theme to customize our desktop, called Boston, which stands out for being minimalist and functional.
Boston, a minimalist icon theme for Plasma
For the KDE Community's Plasma desktop there are hundreds of themes of all kinds available to users: icons, cursors, emoticons, etc. And since I like to change things from time to time, I have devoted many blog articles to these packs.
However, as I usually say, changing the icon theme is one of the most personal ways to customize your PC, since it completely changes how the desktop looks when you interact with your applications, documents and services.
Today I present Boston, a nice icon theme by DChris that offers a minimalist and functional set of icons for folders, applications and other system symbols, thanks in part to its use of basic shapes, a reduced color palette and a very well-defined visual hierarchy.

And as I always say, if you like the icon pack you can “pay” for it in many ways on the new KDE Store page, which I am sure the developer will appreciate: rate it positively, leave a comment on its page or make a donation. Helping Free Software development can also be as simple as saying thank you; it helps much more than you might imagine. Remember the I love Free Software Day 2017 campaign by the Free Software Foundation, which reminded us of this very simple way of contributing to the great Free Software project and to which the blog devoted an article.
More information: KDE Store
Fast Kernel Builds - 2020
A number of months ago I did an “Ask Me Anything” interview on r/linux on Reddit. As part of that, a discussion of the hardware I used came up, and someone said, “I know someone that can get you a new machine”, “get that person a new machine!”, or something like that.
Fast forward a few months, and a “beefy” AMD Threadripper 3970X shows up on my doorstep thanks to the amazing work of Wendell Wilson at Level One Techs.
Ever since I started doing Linux kernel development, the hardware I use has been a mix of things donated to me for development (workstations from Intel and IBM, laptops from Dell), machines my employer has bought for me (various laptops over the years), and machines I've bought on my own because I “needed” them (workstations built from scratch, Apple Mac Minis, laptops from Apple and Dell and ASUS and Panasonic). I know I am extremely lucky in this position, and anything that has been donated to me has been done so only to ensure that the hardware works well on Linux. “Will code for hardware” was an early mantra of many kernel developers, myself included, and hardware companies are usually willing to donate machines and peripherals to ensure kernel support.
This new AMD machine is just another in a long line of good workstations that help me read email really well. Oops, I mean, “do kernel builds really fast”…
For full details on the system, see this forum description, and this video that Wendell did in building the machine, and then this video of us talking about it before it was sent out. We need to do a follow-on one now that I’ve had it for a few months and have gotten used to it.
Benchmark tools
Below I post the results of some benchmarks that I have done to try to show the speed of different systems. I’ve used the tool Fio version fio-3.23-28-g7064, kcbench version v0.9.0 (from git), and perf version 5.7.g3d77e6a8804a. All of these are great for doing real-world tests of I/O systems (fio), kernel build tests (kcbench), and “what is my system doing at the moment” queries (perf). I recommend trying all of these out yourself if you haven’t done so already.
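As a rough sketch of how two of them are driven for the numbers below (kcbench's -s and -j options select the kernel sources to build and the number of build jobs; the values here are just examples):
# kernel-build benchmark: build Linux 5.7 with 64 parallel jobs
kcbench -s 5.7 -j 64
# wrap any command with perf to see what the system is doing while it runs
perf stat make -j64 vmlinux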
Fast Builds
I’ve been using a laptop for my primary development system for a number of years now, due to travel and moving around a bit, and because it was just “good enough” at the time. I do some local builds and testing, but have a “build machine” in a data center somewhere, that I do all of my normal stable kernel builds on, as it is much much faster than any laptop. It is set up to do kernel builds directly off of a RAM disk, ensuring that I/O isn’t an issue. Given that is has 128Gb of RAM, carving out a 40Gb ramdisk for kernel builds to run on (room for 4-5 at once), this has worked really well, with kernel builds of a full kernel tree in a few minutes.
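A minimal sketch of that kind of setup, assuming a tmpfs mounted at /mnt/ramdisk (the path, size, source tree location and job count are just placeholders):
# carve a 40GB ramdisk out of RAM so I/O never becomes the bottleneck
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=40G tmpfs /mnt/ramdisk
# put a kernel tree on it and build from there
git clone ~/linux /mnt/ramdisk/linux
cd /mnt/ramdisk/linux && make defconfig && make -j40 vmlinux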
Here’s the output of kcbench on my data center build box which is running Fedora 32:
Processor: Intel Core Processor (Broadwell) [40 CPUs]
Cpufreq; Memory: Unknown; 120757 MiB
Linux running: 5.8.7-200.fc32.x86_64 [x86_64]
Compiler: gcc (GCC) 10.2.1 20200723 (Red Hat 10.2.1-1)
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 40): 81.92 seconds / 43.95 kernels/hour [P:3033%]
Run 2 (-j 40): 83.38 seconds / 43.18 kernels/hour [P:2980%]
Run 3 (-j 46): 82.11 seconds / 43.84 kernels/hour [P:3064%]
Run 4 (-j 46): 81.43 seconds / 44.21 kernels/hour [P:3098%]
Contrast that with my current laptop:
Processor: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz [8 CPUs]
Cpufreq; Memory: powersave [intel_pstate]; 15678 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 8): 392.69 seconds / 9.17 kernels/hour [P:768%]
Run 2 (-j 8): 393.37 seconds / 9.15 kernels/hour [P:768%]
Run 3 (-j 10): 394.14 seconds / 9.13 kernels/hour [P:767%]
Run 4 (-j 10): 392.94 seconds / 9.16 kernels/hour [P:769%]
Run 5 (-j 4): 441.86 seconds / 8.15 kernels/hour [P:392%]
Run 6 (-j 4): 440.31 seconds / 8.18 kernels/hour [P:392%]
Run 7 (-j 6): 413.48 seconds / 8.71 kernels/hour [P:586%]
Run 8 (-j 6): 412.95 seconds / 8.72 kernels/hour [P:587%]
Then the new workstation:
Processor: AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory: schedutil [acpi-cpufreq]; 257693 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 64): 37.15 seconds / 96.90 kernels/hour [P:4223%]
Run 2 (-j 64): 37.14 seconds / 96.93 kernels/hour [P:4223%]
Run 3 (-j 71): 37.16 seconds / 96.88 kernels/hour [P:4240%]
Run 4 (-j 71): 37.12 seconds / 96.98 kernels/hour [P:4251%]
Run 5 (-j 32): 43.12 seconds / 83.49 kernels/hour [P:2470%]
Run 6 (-j 32): 43.81 seconds / 82.17 kernels/hour [P:2435%]
Run 7 (-j 38): 41.57 seconds / 86.60 kernels/hour [P:2850%]
Run 8 (-j 38): 42.53 seconds / 84.65 kernels/hour [P:2787%]
Having a local machine that builds kernels faster than my external build box has been a liberating experience. I can do many more local tests before sending things off to the build systems for “final test builds” there.
Here’s a picture of my local box doing kernel builds, and the remote machine doing builds at the same time, both running bpytop to monitor what is happening (htop doesn’t work well for huge numbers of cpus). It’s not really all that useful, but is fun eye-candy:

SSD vs. NVME
As shipped to me, the machine booted from a RAID array of NVME disks. Outside of laptops, I've not used NVME disks, only SSDs. Given that I didn't really “trust” the Linux install on the disks, I deleted the data on them, installed a trusty SATA SSD disk and got Linux up and running well on it.
After that was all up and running well (btw, I use Arch Linux), I looked into the NVME disks, to see if they really would help my normal workflow or not.
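The invocation itself is trivial; a sketch, assuming the directory the job file writes to has been edited to point at the filesystem of the device under test:
# run the stock SSD test job file (sequential/random read and write jobs)
fio examples/ssd-test.fio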
Firing up fio, here are the summary numbers of the different disk systems using the default “examples/ssd-test.fio” test settings:
SSD:
Run status group 0 (all jobs):
READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=10.0GiB (10.7GB), run=46672-46672msec
Run status group 1 (all jobs):
READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=6855MiB (7188MB), run=60001-60001msec
Run status group 2 (all jobs):
WRITE: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57865-57865msec
Run status group 3 (all jobs):
WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=10.0GiB (10.7GB), run=58539-58539msec
Disk stats (read/write):
sda: ios=4375716/5243124, merge=548/5271, ticks=404842/436889, in_queue=843866, util=99.73%
NVME:
Run status group 0 (all jobs):
READ: bw=810MiB/s (850MB/s), 810MiB/s-810MiB/s (850MB/s-850MB/s), io=10.0GiB (10.7GB), run=12636-12636msec
Run status group 1 (all jobs):
READ: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57875-57875msec
Run status group 2 (all jobs):
WRITE: bw=558MiB/s (585MB/s), 558MiB/s-558MiB/s (585MB/s-585MB/s), io=10.0GiB (10.7GB), run=18355-18355msec
Run status group 3 (all jobs):
WRITE: bw=553MiB/s (580MB/s), 553MiB/s-553MiB/s (580MB/s-580MB/s), io=10.0GiB (10.7GB), run=18516-18516msec
Disk stats (read/write):
md0: ios=5242880/5237386, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1310720/1310738, aggrmerge=0/23, aggrticks=63986/25048, aggrin_queue=89116, aggrutil=97.67%
nvme3n1: ios=1310720/1310729, merge=0/0, ticks=63622/25626, in_queue=89332, util=97.63%
nvme0n1: ios=1310720/1310762, merge=0/92, ticks=63245/25529, in_queue=88858, util=97.67%
nvme1n1: ios=1310720/1310735, merge=0/3, ticks=64009/24018, in_queue=88114, util=97.58%
nvme2n1: ios=1310720/1310729, merge=0/0, ticks=65070/25022, in_queue=90162, util=97.49%
Full logs of both tests can be found here for the SSD, and here for the NVME array.
Basically the NVME array is up to 3 times faster than the SSD, depending on the specific read/write test, and is faster for everything overall.
But does such fast storage actually matter for my normal workload of kernel builds? A kernel build is very I/O intensive, but only up to a point: if the storage system can keep the CPUs “full” of new data to build, and writes do not stall, a kernel build ends up limited by CPU power rather than by storage.
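A quick way to sanity-check which side is the bottleneck is to watch disk utilization while a build runs; a sketch using iostat from the sysstat package:
# terminal 1: a full-throttle build in an already configured kernel tree
make -j$(nproc) vmlinux
# terminal 2: extended per-device stats once a second; if %util stays low
# while the CPUs are pegged, the build is CPU-bound, not storage-bound
iostat -x 1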
So, is an SSD “fast” enough on a huge AMD Threadripper system?
In short, yes, here’s the output of kcbench running on the NVME disk:
Processor: AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory: schedutil [acpi-cpufreq]; 257693 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 64): 36.97 seconds / 97.38 kernels/hour [P:4238%]
Run 2 (-j 64): 37.18 seconds / 96.83 kernels/hour [P:4220%]
Run 3 (-j 71): 37.14 seconds / 96.93 kernels/hour [P:4248%]
Run 4 (-j 71): 37.22 seconds / 96.72 kernels/hour [P:4241%]
Run 5 (-j 32): 44.77 seconds / 80.41 kernels/hour [P:2381%]
Run 6 (-j 32): 42.93 seconds / 83.86 kernels/hour [P:2485%]
Run 7 (-j 38): 42.41 seconds / 84.89 kernels/hour [P:2797%]
Run 8 (-j 38): 42.68 seconds / 84.35 kernels/hour [P:2787%]
Almost the exact same number of kernels built per hour.
So for a kernel developer, right now, an SSD is “good enough”, right?
It’s not just all builds
While kernel builds are the most time-consuming thing that I do on my systems, the other “heavy” thing that I do is lots of git commands on the Linux kernel tree. git is really fast, but it is limited by the speed of the storage medium for lots of different operations (clones, switching branches, and the like).
After I switched to running my kernel trees off of the NVME storage, it “felt” like git was going faster, so I came up with some totally artificial benchmarks to try to see if this was really true or not.
One common thing is cloning a whole kernel tree from a local version in a new directory to do different things with it. Git is great in that you can keep the “metadata” in one place, and only check out the source files in the new location, but dealing with 70 thousand files is not “free”.
$ cat clone_test.sh
#!/bin/bash
git clone -s ../work/torvalds/ test
sync
And, to make sure the data isn’t just coming out of the kernel cache, be sure to flush all caches first.
SSD output:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.
Performance counter stats for './clone_test.sh':
4,971.83 msec task-clock:u # 0.536 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
92,713 page-faults:u # 0.019 M/sec
14,623,046,712 cycles:u # 2.941 GHz (83.18%)
720,522,572 stalled-cycles-frontend:u # 4.93% frontend cycles idle (83.40%)
3,179,466,779 stalled-cycles-backend:u # 21.74% backend cycles idle (83.06%)
21,254,471,305 instructions:u # 1.45 insn per cycle
# 0.15 stalled cycles per insn (83.47%)
2,842,560,124 branches:u # 571.734 M/sec (83.21%)
257,505,571 branch-misses:u # 9.06% of all branches (83.68%)
9.270460632 seconds time elapsed
3.505774000 seconds user
1.435931000 seconds sys
NVME disk:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.
Performance counter stats for './clone_test.sh':
5,183.64 msec task-clock:u # 0.833 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
87,409 page-faults:u # 0.017 M/sec
14,660,739,004 cycles:u # 2.828 GHz (83.46%)
712,429,063 stalled-cycles-frontend:u # 4.86% frontend cycles idle (83.40%)
3,262,636,019 stalled-cycles-backend:u # 22.25% backend cycles idle (83.09%)
21,241,797,894 instructions:u # 1.45 insn per cycle
# 0.15 stalled cycles per insn (83.50%)
2,839,260,818 branches:u # 547.735 M/sec (83.30%)
258,942,077 branch-misses:u # 9.12% of all branches (83.25%)
6.219492326 seconds time elapsed
3.336154000 seconds user
1.593855000 seconds sys
So a “clone” is faster by 3 seconds; nothing earth-shattering, but noticeable.
But clones are rare. What's more common is switching between branches, which checks out a subset of the different files depending on what is contained in each branch. It's a lot of logic to figure out exactly which files need to change.
Here’s the test script:
$ cat branch_switch_test.sh
#!/bin/bash
cd test
git checkout -b old_kernel v4.4
sync
git checkout -b new_kernel v5.8
sync
And the results on the different disks:
SSD:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'
Performance counter stats for './branch_switch_test.sh':
10,500.82 msec task-clock:u # 0.613 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
195,900 page-faults:u # 0.019 M/sec
27,773,264,048 cycles:u # 2.645 GHz (83.35%)
1,386,882,131 stalled-cycles-frontend:u # 4.99% frontend cycles idle (83.54%)
6,448,903,713 stalled-cycles-backend:u # 23.22% backend cycles idle (83.22%)
39,512,908,361 instructions:u # 1.42 insn per cycle
# 0.16 stalled cycles per insn (83.15%)
5,316,543,747 branches:u # 506.298 M/sec (83.55%)
472,900,788 branch-misses:u # 8.89% of all branches (83.18%)
17.143453331 seconds time elapsed
6.589942000 seconds user
3.849337000 seconds sys
NVME:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'
Performance counter stats for './branch_switch_test.sh':
10,945.41 msec task-clock:u # 0.921 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
197,776 page-faults:u # 0.018 M/sec
28,194,940,134 cycles:u # 2.576 GHz (83.37%)
1,380,829,465 stalled-cycles-frontend:u # 4.90% frontend cycles idle (83.14%)
6,657,826,665 stalled-cycles-backend:u # 23.61% backend cycles idle (83.37%)
41,291,161,076 instructions:u # 1.46 insn per cycle
# 0.16 stalled cycles per insn (83.00%)
5,353,402,476 branches:u # 489.100 M/sec (83.25%)
469,257,145 branch-misses:u # 8.77% of all branches (83.87%)
11.885845725 seconds time elapsed
6.741741000 seconds user
4.141722000 seconds sys
Just over 5 seconds faster on an NVME disk array.
Now 5 seconds doesn’t sound like much, but I’ll take it…
Conclusion
If you haven't looked into new hardware in a while, or are stuck doing kernel development on a laptop, please seriously consider upgrading. The power available in a small desktop tower these days (and who is traveling anymore that needs a laptop?) is well worth it if you can manage it.
Again, many thanks to Level1Techs for the hardware, it’s been put to very good use.
How to install GitHub CLI on #openSUSE
Let's see how to install GitHub's new repository management tool on openSUSE

At the end of August of this (ill-fated) year 2020 I already published on the blog a preview of the command-line tool GitHub was developing to manage the repositories hosted on its servers.
The tool is called GitHub CLI and it is also available for GNU/Linux: for Debian, Fedora, Arch and also openSUSE.
Today, 17 September 2020, they announced the release of version 1.0 of this tool under the MIT license, ready to download, install and use on our machines.
Let's see how to install it on our openSUSE.
As the installation instructions for openSUSE in its own GitHub repository explain, the best approach is to add the tool's official repository for openSUSE and install it from there.
To do so, run the following commands on the command line with superuser privileges:
zypper addrepo https://cli.github.com/packages/rpm/gh-cli.repo
zypper ref
zypper in gh
After adding the repository, running zypper ref to refresh the repository metadata will ask you to accept the GPG signing key of the new repository. Accept it, and then you can install the tool.
You can also download the .rpm for your distribution or build it from source. I prefer the first option, though, so I benefit from the updates pushed to the official repository.
Once the tool is installed, we can manage our GitHub repositories without leaving the command line of our machine.
The first step is to connect the newly installed tool to our GitHub account; to do so, run
gh auth login
It will walk you through a few options, and you can log in via the web interface with a one-time code. After choosing whether you prefer HTTPS or SSH to connect to your repositories, the account will be linked.
Now you can manage repository “issues”, open “pull requests” and query many other things about GitHub repositories from the command line.
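As a quick taste, a few everyday commands once the account is linked (the pull request number is just a placeholder):
# list open issues and pull requests of the current repository
gh issue list
gh pr list
# check out pull request 123 locally, or open the repository in the browser
gh pr checkout 123
gh repo view --web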
You can take a look at the GitHub CLI usage manual to get familiar with this new tool, which offers a lot of possibilities. I am just starting to use it myself…
Links of interest
- https://github.blog/2020-09-17-github-cli-1-0-is-now-available/
- https://cli.github.com/
- https://cli.github.com/manual/

Plasma 5.20 beta released, more and better
With the maintenance period of Plasma 5.19 over, it is time to start preparing the next release. That is why I am pleased to share with you that the beta of Plasma 5.20 has been released, the next version of the KDE Community desktop, which arrives with interesting new features, many of which have been covered bit by bit on Nate Graham's blog. Now is the time for this beta to be tested and for any bugs found to be reported. Don't miss the chance to contribute to Plasma's development!
Plasma 5.20 beta released
Today, 17 September, the beta of Plasma 5.20 was released. This third release of 2020, not yet suitable for everyday users, focuses on polishing the KDE Community desktop.
A few brushstrokes of some of the most notable new features:
- The default task bar will be the Icons-Only Task Manager, and the panel will be a bit thicker (one of the first things I usually change when setting up my desktop).
- The on-screen displays (OSD) that appear when changing the volume or the screen brightness (for example) have been redesigned to be less intrusive.
- You are now notified when the system is about to run out of disk space, even if your home directory is on a different partition.
- Windows can now be tiled into screen corners by combining the left/right/up/down tiling shortcuts. For example, pressing Meta+Up and then Meta+Left tiles a window into the top-left corner.
- The Autostart, Bluetooth and User Management pages of System Settings have been redesigned to modern user interface standards and rewritten from scratch.
- S.M.A.R.T. disk monitoring and failure notifications.

And many more small improvements that will delight the users of this desktop environment.
More information: KDE.org
Try it out and report bugs

Every task within the Free Software world is important: developing, translating, packaging, designing, promoting, etc. But there is one that tends to be overlooked and that we only remember when things do not work as they should: finding bugs.
From the blog I encourage you to be one of the people responsible for the success of the new Plasma 5.20 release from the KDE Community. To do so, take part in the task of finding and reporting bugs, something essential for the developers to fix them so that the desktop's launch is well polished. Keep in mind that, in many cases, bugs exist because the developers never ran into them, since the circumstances for them to appear simply never came up.
To help, install this beta and report the bugs you find at bugs.kde.org, just as I explained some time ago in this blog post.
Digest of YaST Development Sprint 108
In our previous post we reported that we were working on some mid-term goals in the areas of AutoYaST and storage management. This time we have more news to share about both, together with some other small YaST improvements.
- Several enhancements in the new MenuBar widget, including better handling and rendering of the hotkey shortcuts and improved keyboard navigation in text mode.
- More steps towards adding a menu bar to the Partitioner. Check this mail thread to learn more about the status and the whole decision-making process.
- New helpers to improve the experience of using Embedded Ruby in an AutoYaST profile (introduced in the previous post). Check the documentation of the new helpers for details.
- A huge speed-up of the AutoYaST “Configuring Software Selections” step by moving some filtering operations from Ruby to libzypp. The process is now almost instant, even when using the OSS repository, which contains more than 60,000 packages!
- A new log of the packages upgraded via the self-update feature of the installer.
The next SLE and Leap releases are starting to take shape and we are already working on new features for them (which you can, of course, preview in Tumbleweed, as usual). So stay tuned for more news in two weeks!
Tumbleweed Snapshots bring updated Inkscape, Node.js, KDE Applications
Four openSUSE Tumbleweed snapshots were released since the last article.
KDE's Applications 20.08.1, Node.js, iproute2 and Inkscape were updated in the snapshots throughout the week.
The 20200915 snapshot is trending stable at a rating of 97, according to the Tumbleweed snapshot reviewer. Many YaST packages were updated in this snapshot. The 4.3.19 yast2-network package forces a read of the current virtualization network configuration in case it’s not present. The Chinese pinyin character input package libpinyin updated to 2.4.91, which improved auto correction.
Inkscape 1.0.1 made its update in snapshot 20200914; the open source vector graphics editor added an experimental Scribus PDF export extension. The Scribus export is available as one of the many export formats in the ‘Save as’ and ‘Save a Copy’ dialogs. The Selectors and CSS dialog is also available in the package under the object menu. Support was added for the MultiPath TCP netlink interface in the 5.8.0 update of iproute2. Several libqt5 packages were updated to 5.15.1. Important behavior changes were pointed out in the libqt5-qtbase changelog, where QSharedPointer objects now call custom deleters even when the pointer being tracked is null. The 14.9.0 nodejs14 package upgraded dependencies and fixed compilation on AArch64 with GNU Compiler Collection 10. A major utilities update for random number generation in the kernel came with rng-tools moving from version 5 to version 6.10; one of the changes was the conversion of all entropy sources to use OpenSSL instead of gcrypt, which eliminates the need for the gcrypt library. Object-oriented programming language vala, updated to version 0.48.10, made improvements and added a TraverseVisitor for traversing the tree with a callback. Other updated packages in the snapshot were redis 6.0.8, rubygem-rails-6.0 6.0.3.3, xlockmore 5.65, which removed some buffer GCC warnings, and virtualbox 6.1.14, which fixed a regression in HDA emulation introduced in 6.1.0. The snapshot is trending at a stable rating of 93.
Applications 20.08.1 arrived in both snapshot 20200910 and snapshot 20200909. Among the changes to the Applications packages was a change to the image viewer Gwenview to sort images properly. Video application Kdenlive fixed some broken configurations and fixed the shift-click multiple selection that was broken in the Bin. Document viewer Okular improved the code against corrupted configurations and stored built-in annotations in a new config key.
Snapshot 20200910 brought an update for secure communications; GnuTLS 3.6.15 enabled TLS 1.3 and explicitly disabled TLS 1.2 with “-VERS-TLS1.2”. Utility rsyslog updated from version 8.39.0 to 8.2008.0. The changes were too many to list. One listed in the project’s changelog of the current version states “systemd service file removed from project. This was done as distros nowadays have very different service files and it no longer is useful to provide a “generic” (sic) example.” Dependency management package yarn 1.22.5 made a change so that headers won’t be printed when calling yarn init with the -2 flag. XFS debugger tool xfsprogs 5.8.0 improved reporting and messages and fixed the -D vs -R handling. The snapshot recorded a 99 rating.
Also recording a stable 99 rating was snapshot 20200909. The snapshot brought Common Vulnerabilities and Exposures fixes with the Mozilla Thunderbird 68.12.0 update. Crashes to gnome-music will be avoided when an online account is unavailable in the 3.36.5 version. Another fix in the music player is the selection of an album no longer randomly deselects other albums. The Linux Kernel was also updated to version 5.8.7 in the snapshot.
Even the CIA promoted the use of the #Vim editor
Wikileaks leaked some of the hacking tools that were circulated within the CIA; among them, the #Vim editor

Today on the blog I am not bringing a Vim tutorial, nor will I write about a plugin or some trick for this text editor. Today we will see that even the CIA listed Vim, along with other software, among the tools to use.
This article is a new installment of the “improVIMsado” course about the Vim editor that I have been publishing on my blog for months, and which you can follow at these links:
- https://victorhckinthefreeworld.com/tag/vim/
- https://victorhck.gitlab.io/comandos_vim/articulos.html
A few years ago, among the documents about the CIA that Wikileaks was leaking, there was one codenamed “Vault 7” that listed a series of hacking tools to use.
Among the tools listed we can find well-known software such as make, Sublime, Git, Docker or the Vim editor.
In that document they shared information, command manuals and so on about this text editor. For some, one more reason not to use this editor. For others, a simple curiosity.
And that is how it struck me, which is why I wanted to share this short, slightly “offtopic” article about Vim…
