
Digest of YaST Development Sprints 133 & 134

October has been a busy month for the YaST Team. We have fixed quite a few bugs and implemented several features. As usual, we want to offer our beloved readers a summary of the most interesting stuff from the latest couple of development sprints, including:

  • Improved handling of users on already installed systems
  • Progress in the refactoring of software management
  • Better selection of the disk in which to install the operating system
  • More robust handling of LUKS encrypted devices
  • Fixes for libYUI in some rare cases
  • Improvements in time zone management (affecting mainly China)

Improved Handling of Users

Let us start by quoting our latest report: “regarding the management of users, we hope to report big improvements in the next blog post”. That time has indeed come, and we can now announce that we have brought the revamped users management described in this monographic blog post to the last parts of YaST that were still not taking advantage of the new approach. The changes are receiving an extra round of testing with the help of the Quality Assurance team at SUSE before we submit them to openSUSE Tumbleweed. When that happens, both the interactive YaST module to manage users and groups and its corresponding command line interface (not to be confused with the ncurses-powered text mode) will start using useradd and friends to manage users, groups and the related configurations.

There should not be big changes in the behavior apart from the improvements already mentioned in the original blog post presenting the overall idea. But a couple of fields were removed from the UI to reflect the current status:

  • The password for groups, which is an archaic and discouraged mechanism that nobody should be using in the era of sudo and other modern solutions.
  • The fields “Secondary Groups” and “Skeleton Directory” from the tab “Default for New Users”, since those settings are either gone or not directly configurable in recent versions of useradd.

There is still a lot of room for improvement in YaST Users, but we will postpone that work to focus on other areas of YaST that need a revamp similar to the one done in users management.

Refactoring Software Management

One of the areas that needs a bit of love and would benefit from some internal restructuring is the management of software, which goes much further than just installing and removing packages. We have just started such a refactoring and we don’t know yet how far we will get in this round, but you can already read about some of the things we are doing in the description of this pull request, although it only shows a very small fraction of the full picture.

New Features in the Storage Area

We also improved the way YaST handles some storage technologies. First of all, we taught YaST about the existence of BOSS (Boot Optimized Storage Solution) drives in some Dell systems. From now on, such devices will be automatically chosen as the first option on which to install the operating system, as described in this pull request with screenshots. As a bonus from that same set of changes, YaST will be more sensible about SD cards.

On the other hand, we adapted the way YaST (or libstorage-ng to be fully precise) references LUKS devices in the fstab file to make it easier for systemd to handle some situations. Check the details in this other pull request (sorry, no screenshots this time).

Fixes for libYUI

As our posts often reveal, YaST is full of relatively unknown features that were developed to cover quite exceptional use cases. Those features remain there, used by a few users… and waiting for a chance to attack us! During the recent sprints we fixed the behavior of libYUI (the toolkit powering the YaST user interface) in a couple of rare scenarios. Check the descriptions of this pull request and this other one for more details.

Fun with Flags… err Time Zones

For reasons everybody knows, being able to work from home and coordinate with people in different geographical locations has become critical lately. That scenario has increased the relevance of properly configured time zones in the operating system, and it made us realize the time zones handled by YaST for China weren’t fully aligned with international standards. This pull request explains what the problem was and how we fixed it, so applications like MS Teams can work on top of (open)SUSE distributions just fine… everywhere on the globe.

That’s All… Until we Meet Again

As you know, YaST development never stops. And, although we only report the most interesting bits in our blog posts, we keep working on many areas… from very visible features and bug fixes to more internal refactoring. In any case, we promise to keep working and to keep you updated in future blog posts. So stay tuned and have a lot of fun!


Team Profile

What makes a great team? One important factor is having a balanced set of skills and personalities in the team. A team consisting only of leaders won't get much work done. A team consisting only of workers will not work in the right direction. So how can you identify the right balance and combination of people?

One answer is the Team Member Profile Test. It's a set of questions which team members answer. The answers are evaluated to give a result indicating which type of team member the person is and where they lie in the spectrum of possible types.

There are two dimensions considered here: how much team members are oriented towards tasks and how much they are oriented towards people. This can be visualized in a Results Chart.

In an example Results Chart you can see five segments:

  • The center (5,5) is the "worker" who has a balanced set of attributes, no extremes. These team members are extremely important because they tend to just get stuff done.
  • The top left (9,1) is the "expert" who is focused on the task and its details but doesn't consider people that much. You need these to get the depth of work which is necessary to create great results.
  • The bottom right (1,9) is the "facilitator" who is something like the soul of the team, focused on social interactions and supports the team in creating great results.
  • The top right (9,9) is the "leader" who is strong on task and people and is giving direction to the team. You need these but you don't want to have more than one or two in a team otherwise there are conflicts of leadership.
  • The bottom left (1,1) is the "submarine" who floats along and tries to stay invisible. Not strong on any account. You don't want these in your team.

The test can provide some insight into the balance of the team. You want to have all but the submarine covered with an emphasis on the workers.
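As a toy illustration of how such a chart could be evaluated (a hypothetical helper, not part of the actual test materials), a score pair can be mapped to the nearest of the five segment prototypes:

```python
# Hypothetical helper: map a (task, people) score pair from the test
# to the closest of the five segments described above. The prototype
# coordinates are the chart positions named in the list.
PROTOTYPES = {
    "worker": (5, 5),
    "expert": (9, 1),
    "facilitator": (1, 9),
    "leader": (9, 9),
    "submarine": (1, 1),
}

def classify(task, people):
    """Return the segment whose prototype is closest to the scores."""
    return min(
        PROTOTYPES,
        key=lambda name: (PROTOTYPES[name][0] - task) ** 2
                         + (PROTOTYPES[name][1] - people) ** 2,
    )
```

For example, someone scoring 8 on task orientation and 2 on people orientation would land in the "expert" segment.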

What does your team look like on this diagram?



Privacy as a right, and Tor as a tool to achieve it on the network

Privacy is a fundamental right, and Tor is one of the tools to achieve it online

Once again this year, the non-profit organization behind the project, which develops the Tor network and the Tor Browser, has launched a donation campaign to raise money to maintain and improve both projects.

For the 2021 campaign, the motto they have chosen is: Privacy is a human right.

Privacy is about protecting what makes us human: our daily behavior, our personality, our fears, our relationships and our vulnerabilities. Everyone has the right to privacy.

In Article 12 of the 1948 Universal Declaration of Human Rights, the United Nations stated that “no one shall be subjected to arbitrary interference with his privacy” and that “everyone has the right to the protection of the law against such interference or attacks”.

However, governments, corporations and other powerful entities prevent us from exercising our right to privacy in many different ways: with commercially available spyware, with covert tracking of our communications, with ads that follow us around the Internet, with pseudonymous data sets bought, sold and used to manipulate sentiment. And despite all this, we still have to fight to defend this right and fight to exercise it.

And that is why Tor is here: to help people exercise the human right to privacy on the network, even when it is not easy.

Every day, the Tor network helps millions of people connect to the Internet privately and without censorship. In the face of declining Internet freedom, the stealthy advance of oppressive and repressive governments, and the dizzying advance of surveillance technology, Tor remains a gold standard in censorship-circumvention and privacy-enhancing technology.

Our decentralized network, our open source approach, our community of volunteers and our commitment to the human right to privacy mean that Tor can offer privacy in a way few other tools can.

In 2022, they have plans to make Tor even faster, stronger and easier to use. Among other things, your support will allow them to:

  • Modernize the Tor network, making it faster, more secure and easier to integrate into other applications. Today's Tor network software is written in the C programming language. C, while venerable and ubiquitous, is notoriously error-prone, and its lack of high-level features also makes many programming tasks more complex than they would be in a more modern language. We are working on a complete rewrite of Tor in Rust, a modern language that will bring speed and security benefits to users and, ultimately, mean that other applications and services can use Tor much more easily. A win for privacy!
  • Implement major speed improvements in Tor. Over the past year, we ran congestion-control experiments to improve Tor network speeds using a network simulator. We are seeing extremely exciting results in speed and reliability thanks to these changes. In 2022, we will begin deploying some of these improvements on the live network, making Tor faster for users, especially on mobile devices.
  • Improve the health of the Tor network and invest in our community of relay operators. To keep growing the Tor network and ensure it is healthy and well defended against attacks, we will implement a series of initiatives to better organize our relay operator community and strengthen the relationship and trust between the Tor Project and relay operators. In addition, we will continue building tools that help us monitor the Tor network for malicious relay activity in order to remove those relays from the network.
  • Automate the circumvention experience for our users. When a user faces censorship of the Tor network (for example, their government has blocked all the public IP addresses of Tor servers), it can be hard for them to understand why they cannot connect. Is it censorship or some other problem? Likewise, it is hard for the user to know exactly how to change their Tor Browser settings to get around this censorship. Our Anti-Censorship, UX and Applications teams have been working on this problem for a long time. In 2022, we will offer a completely new experience that automates the process of detecting and circumventing censorship, making it simpler to connect to Tor for the users who need it most.

For all these reasons, the Tor network and the Tor Browser are important tools for hacktivists, activists, journalists and whistleblowers to safely carry out their work denouncing abuses.

But they are just as important for people who connect to the network and want to keep their privacy safe.

So if you can and want to help, you can contribute to this campaign and donate to the project:

This article is a translation/adaptation of the English article published on the Tor website, which you can read at this link:


Deadline extended to submit your talk for Akademy-es 2021 online

This edition is also going to be very special, although we hope it will be the last held entirely in this format. In a little less than a month, Akademy-es 2021 will take place online: the annual gathering in Spain of KDE users, supporters and developers. To give latecomers (myself included) another chance, Akademy-es 2021 has extended the deadline to submit talks until November 5, 2021. So don't miss the opportunity to show everyone your personal or community project.

Remember that Akademy-es will feature the usual talks presenting what's new both in the applications and in the development tools, without forgetting active projects or those to be launched in the near future.

On this occasion, Akademy-es 2021 will be held online, which means we won't be able to strengthen those much-needed in-person bonds, but the KDE Community believes this should not stop us from sharing a weekend through our screens from November 19 to 21. There will be time to share beers or coffee on another occasion.

This edition is a unique opportunity for the Latin American community, which can now participate directly since the geographical limitations have disappeared, so we hope to receive talks from the other side of the pond.

In previous editions these talks (taken from the program of Akademy-es 2019 in Vigo) have covered mobile devices, convergent applications, privacy and security, the developer's perspective, programming languages, Free Software…

And what will we find in this edition? Well, that depends on you: if you have a topic you think will interest the KDE Community and its supporters, don't hesitate to submit your talk for Akademy-es 2021. We will be delighted to hear your proposal.

If you want more information, visit the official Akademy-es 2021 website, but I can also sum it up here:
To propose activities, send an email to akademy-es-org@kde-espana.es before November 5 with a brief summary of the presentation.

It is important to keep the following points in mind:

  • You can choose between two talk formats:
    • 30-minute talks.
    • 10-minute lightning talks.
  • If a proposed talk does not fit either of these two formats (for example, if more time is needed), this must be clearly stated in the proposal.
  • KDE España must be allowed to distribute, under a free license, all the material used for the activity, as well as its recording.
  • Talks can be aimed at either users or a technical audience.

This is a great opportunity to make yourself known in the KDE world and in the Free Software world in general.

More information: Akademy-es 2021

What is Akademy-es?

Akademy-es (#akademyes is the hashtag for social media) is the most important event for KDE developers and supporters, held since 2006 with growing success.

In general, Akademy-es is the right place to meet the developers, designers, translators, users and companies that drive this great project.

It features talks and software presentations, and raises a bit of money for free software projects (t-shirts, badges, etc.), but above all it is where you meet very interesting people and recharge your batteries for the future.

You can review previous editions in these blog posts (and yes, I really need to get around to updating them!)


KDE 25th Anniversary T-shirt #KDE25years

An event like a silver anniversary deserves something physical to commemorate it. If the book "20 Years of KDE: Past, Present and Future" (in which I had the honor of contributing a chapter) was published to celebrate the 20th anniversary, this time the choice was to design the KDE 25th Anniversary t-shirt, which, as usual, comes with the Freewear guarantee.

As I have been saying in my latest posts, most of the Free Software community has joined the celebration of KDE's 25 years.

Events, conferences, videos and commemorative software releases such as Plasma 5.23 have taken place, with most distributions making an extra effort to update as soon as possible (as in the case of Kubuntu and Debian, which are not usually quick with this kind of thing).

However, I still have one more initiative to mention, one that complements the celebration and leaves a lasting mark of this anniversary.

It is the KDE 25th Anniversary t-shirt, which you can buy in the official FreeWear store for €18.50 (of which €3 goes directly to the KDE association as a donation).

The design is simple: on a white background, Konqi and Kate play with the number 25, which looks like the trunk of a tree growing healthy and green. An incredible drawing by David Revoy, an excellent illustrator who collaborates with the Krita project.

Still, it is best to see the t-shirt for yourselves, since it includes the commemorative anniversary inscriptions. By the way, note that it comes in two cuts, fitted or straight, and is available in sizes from S to XXXL.


Free Software and Cybersecurity, a practical approach: a new talk from GNU/Linux València

I am pleased to promote a new activity from the GNU/Linux València association entitled "Free Software and Cybersecurity, a practical approach", given by Javier Sepúlveda. For more information, read on.

These days it is becoming increasingly clear that Free Software and computer security are closely linked.

That is why I am pleased to share with you a new event from the group of people promoting Free Software in València through their meetings, in person when that was possible and, right now, virtually.

So I invite you to the talk by Javier Sepúlveda, technical director and GNU/Linux systems administrator at VALENCIATECH, volunteer at GNU.org, and recently elected president of the GNU/Linux València association, entitled "Free Software and Cybersecurity, a practical approach".

As an introduction, here is the reflection Javier uses to promote his talk:

Why are the main banks in our country already banning WhatsApp among their employees? Why do city councils opt for Telegram and not WhatsApp? Why did many teachers choose Google Classroom during the pandemic when they should have opted for Aules? Why did teachers use Zoom, Skype, Google Meet or Teams, exposing their students' image and audio to companies whose terms of service state that they collect their users' data in exchange for the use of said software?

In short, the basic information is:

  • Date: Friday, October 29, 2021
  • Time: 18:00 CEST
  • Place: Ca Revolta, C/ Santa Teresa, 10, 46001 València
  • Registration required? Due to capacity limits, please reserve a seat by sending an email to , with the subject "Reserva plaza – Conf.Ciberseguridad", stating your full name and the number of companions.

If you can attend, don't miss it; you surely won't be disappointed.

More information: GNU/Linux València

Join GNU/Linux València!

Let me take this opportunity to remind you that, for a few months now, the folks at GNU/Linux València have had their own menu on this blog, making it easier than ever to follow their events on this humble site and see the high level and variety of the activities they organize.

And on top of that, GNU/Linux València has grown and become an association! So if you are looking for a way to contribute to Free Software, this association may be your place. We look forward to seeing you!


Setting up Let's Encrypt certificates for the 389-ds LDAP server

In the past months I’ve set up LDAP at home, to avoid having different user accounts for the services that I run on my home hardware. Rather than the venerable OpenLDAP I settled for 389 Directory Server, commercially known as Red Hat Directory Server, mainly because I was more familiar with it. Rather than describing how to set that up (Red Hat’s own documentation is excellent on that regard), this post will focus on the steps required to enable encryption using Let’s Encrypt certificates.

The problem

Even though the LDAP server (there are actually two, but for the purpose of this post it does not matter) was only operating in my LAN, I wanted to reduce the amount of information going around in the clear, including LDAP queries. That meant setting up encryption, of course.

The problem was that Red Hat’s docs only cover the “traditional” way of obtaining certificates, that is, obtaining a Certificate Authority (CA) certificate, requesting a server certificate, obtaining it, and setting it up in the server. There is (obviously) no mention of Let’s Encrypt anywhere. I’ve found some guides, but they were either too complicated (lots of fiddling around with certutil) or unclear on some steps. Hence, this post.

NOTE: I’ve focused on 389-ds version 2.0 and above, which has a different set of CLI commands than the venerable 1.3 series. All of the steps shown here can also be carried out via the Cockpit Web interface, if your distribution carries it (spoiler: openSUSE doesn’t).

Importing the CA

This is arguably one of the most important steps of the process. 389-ds also needs to store the CA for your certificates. As you may (or may not) know, there are two Let’s Encrypt CA certificates:

  • The actual “root” certificate, by Internet Security Research Group (ISRG);
  • An intermediate certificate, signed by the above root, called “R3”.

Let’s Encrypt certificates, like the one powering this website, are signed by R3. But since you have to follow a “chain of trust”, to validate the certificate you follow these steps (excuse me, security people; this is probably a bad simplification):

  1. Check the actual certificate (e.g. the one on dennogumi.org)
  2. The certificate is signed by “R3”, so move up the chain and check the R3 certificate
  3. The R3 certificate is signed by the ISRG root certificate, so move up the chain and check the ISRG root
  4. The ISRG certificate is trusted by the OS / application using it, so everything stops there.

If any of the steps fails, the whole validation fails.
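The walk described above can be sketched with plain dictionaries standing in for real X.509 certificates (the names and fields here are purely illustrative):

```python
# Toy model of the chain-of-trust walk: each "certificate" is a dict
# with a subject and an issuer name. We follow issuer links upward
# until we hit a certificate the system already trusts.
def chain_is_trusted(cert, issuers, trusted_roots):
    current = cert
    seen = set()
    while current["subject"] not in trusted_roots:
        if current["subject"] in seen:      # self-signed loop, not trusted
            return False
        seen.add(current["subject"])
        issuer = issuers.get(current["issuer"])
        if issuer is None:                  # broken chain: validation fails
            return False
        current = issuer                    # move up the chain
    return True
```

With R3 present but the ISRG root missing from the store, the walk fails, which is exactly the kind of broken chain that bites 389-ds.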

This long-winded explanation is to tell you that 389-ds needs the whole certificate chain for its CA (so ISRG root + R3) in order to properly validate the Let’s Encrypt certificate you’ll use. If you don’t do that, chances are that some software which uses the system CA will work (for example ldapsearch) but others, like SSSD, will fail with “Unknown CA” errors (buried deep in debug logs, so at the practical level they’ll just fail and you won’t know why).

Let’s get down to business. Access the Chain of Trust page for Let’s Encrypt and download the relevant certificates. I’m not sure if 389-ds supports ECDSA certificates, so I downloaded the RSA ones: ISRG Root X1 and Let’s Encrypt R3 (both in PEM format). Put them somewhere on your server. Then, as root, import the two CA certificates into 389-ds (substitute LDAP_ADDRESS with the LDAP URI of your server):

# ISRG Root

dsconf -v -D "cn=Directory Manager"  LDAP_ADDRESS security ca-certificate \
    add --file /path/to/certificate --name "ISRG"

# Let's Encrypt R3

dsconf -v -D "cn=Directory Manager"  LDAP_ADDRESS security ca-certificate \
    add --file /path/to/certificate --name "R3"

NOTE: This step may not be necessary if you use Let’s Encrypt’s “full chain” certificates, but I did not test that.

Importing the certificates

Then, you have to import a Let’s Encrypt certificate, which means you have to obtain one. There are hundreds of guides and clients that can do the job nicely, so I won’t cover that part. If you use certbot, Let’s Encrypt’s official client, you will have the certificate and the private key for it in /etc/letsencrypt/live/YOURDOMAIN/fullchain.pem and /etc/letsencrypt/live/YOURDOMAIN/privkey.pem.

You need to import the private key first (substitute LDAP_ADDRESS and DOMAIN with the LDAP URI of your server and the Let’s Encrypt domain, respectively):

dsctl LDAP_ADDRESS tls import-server-key-cert \
    /etc/letsencrypt/live/DOMAIN/fullchain.pem \
        /etc/letsencrypt/live/DOMAIN/privkey.pem

Note that you pass the certificate as well as the key to import it (if in doubt, check Red Hat’s documentation).

Once the key is done, it is time to import the actual certificate:

dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS security certificate add \
    --file /etc/letsencrypt/live/DOMAIN/fullchain.pem \
        --primary-cert \
        --name "LE"

--primary-cert sets the certificate as the server’s primary certificate.

Then, we switch on TLS in the server:

dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS config replace \
	nsslapd-securePort=636 nsslapd-security=on

And finally, we restart our instance (replace INSTANCE with your configured instance name):

systemctl restart dirsrv@INSTANCE

Testing everything out

You can use ldapsearch to check whether the SSL connection is OK (I’ve used Directory Manager, but you can use any user you want):


# STARTTLS

ldapsearch -H ldap://your.ldap.hostname -W -x -D "cn=Directory Manager" -ZZ "search filter here"

# TLS

ldapsearch -H ldaps://your.ldap.hostname -W -x -D "cn=Directory Manager" "search filter here"

If everything is OK, you should get a result; otherwise, you’ll get an error like “Can’t contact the LDAP server”.

Alternatively, you can use openssl:

openssl s_client -connect your.ldap.hostname:636
[...]
---
SSL handshake has read 5188 bytes and written 438 bytes
Verification: OK

Don’t forget to adjust your applications to connect via ldaps rather than ldap after everything is done.

Renewing the certificates

To renew the certificates you repeat the steps outlined above for the certificates (without the CA part, of course). Make sure you always import your private key: if there is a mismatch between it and the certificate, 389-ds will refuse to start.

If you use certbot, you can use a post-renewal hook to trigger the import of the certificate into 389-ds. This is what I’ve been using: bear in mind it’s customized to my setup and does a few more things than needed. Also, it only imports the certificate, not the full chain.
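As a sketch of what such a hook boils down to (the server URI, certificate nickname and instance name below are placeholders, not my actual setup), it just replays the import commands shown earlier after every renewal:

```python
# Sketch of a certbot deploy hook for 389-ds. The helper only builds
# the command lines; a real hook would run them with subprocess.run().
def renewal_commands(ldap_uri, lineage, instance):
    fullchain = f"{lineage}/fullchain.pem"
    privkey = f"{lineage}/privkey.pem"
    return [
        # re-import the key together with the certificate
        ["dsctl", ldap_uri, "tls", "import-server-key-cert",
         fullchain, privkey],
        # re-import the certificate itself as the primary certificate
        ["dsconf", "-D", "cn=Directory Manager", ldap_uri,
         "security", "certificate", "add",
         "--file", fullchain, "--primary-cert", "--name", "LE"],
        # restart the instance so the new certificate is picked up
        ["systemctl", "restart", f"dirsrv@{instance}"],
    ]

# e.g. in the hook body:
# import subprocess
# for cmd in renewal_commands("ldap://ldap.example.com",
#                             "/etc/letsencrypt/live/ldap.example.com",
#                             "INSTANCE"):
#     subprocess.run(cmd, check=True)
```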

I messed up and 389-ds won’t start! What do I do?

You can disable encryption by editing /etc/dirsrv/slapd-INSTANCE/dse.ldif and changing nsslapd-security to off, then start 389-ds again. Then you can review everything and see what went wrong. But if you can, I recommend the Cockpit Web UI: it makes the first-time setup a breeze.

Wrap up

Importing the certificates is surprisingly simple, but my Internet searches have been frustrating because at least half of what I found was either not applicable, incomplete, or did not work. I hope this small tutorial can be useful for those who want a bit more security in their LDAP setup.


Optimizing LibreOffice for a larger number of users

Have you ever edited a document in LibreOffice in more than one window? Right, neither have I. Who'd think about LibreOffice and more than one user at the same time, right? Except ... somebody did, and that's how collaborative editing based on LibreOffice works. For whatever strange reason, at some point in the past somebody thought that implementing multiple views of one document in OpenOffice (StarOffice?) was a good idea. Just select Window->New Window in the menu and you can edit your favourite document in 50 views that each show a different part of the document and update in real time. And that, in fact, is how collaborative editing such as with Collabora Online works - open a document, create a new view for every user, and there you go.

But, given that this has never really been used that much, how well did the relevant original code perform and scale with more users? Not well, it turns out. Not a big surprise, considering that presumably back when that code was written nobody thought the same document could be edited by numerous users at the same time. But I've been looking exactly into this recently as part of optimizing Collabora Online performance, and boy, are there gems in there. You thought that showing the same document in more views would just mean more painting in those views? Nah, think again, this is OpenOffice code, the land of programming wonders.

Profiling the code

When running Online's perf-test, which simulates several users typing in the same document, most of the time is actually spent in SwEditShell::EndAllAction(). It's called whenever Writer finishes an action such as adding another typed character, and one of the things it does is tell other views about the change. So here LO spends a little time adding the character and then the rest of the time is spent in various code parts "talking" about it. A good part of that is that whenever an action is finished, that view tells the others about it happening, and then all those views tell all other views about how they reacted to it, making every change O(n^2) in the number of views. That normally does not matter, since on the desktop n generally tends to be 1, but hey, add a few more views, and it can be an order of magnitude slower or more.

Redrawing, for example, is rather peculiar. When a part of the document changes, relevant areas of the view need redrawing. So all views get told about the rectangles that need repainting. In the desktop case those can be cropped by the window area, but for tiled rendering used by Online the entire document is the "window" area, so every view gets told about every change. And each view collects such rectangles, and later on it processes them and tells all other views about the changes. Yes, again. And it seems that in rare cases each view really needs its own repaint changes (even besides the cropping, as said before). So there's a lot of repeated processing of usually the same rectangles over and over again.

One of the functions prominent in the CPU costs is SwRegionRects::Compress(), which ironically is supposed to make things faster by compressing a group of rectangles into a smaller set of rectangles. I guess this is one of the cases in OpenOffice where the developer theoretically heard about optimizations being supposed to make things faster, but somehow the practical side of things just wasn't there. What happens here is that the function compares each rectangle with each other one, checking if they can be merged ... and then, if yes and that changes the set of rectangles, it restarts the entire operation. Which easily makes the entire thing O(n^3). I have not actually found out why the restarting is there. I could eventually think of a rather rare case where restarting makes it possible to compress the rectangles more, but another possibility is that the code dates back to a time when it was not safe to continue after modifying whichever container SwRegionRects was using back then, and it has stayed there even though the class switched to std::vector a long time ago.
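A toy model of that restart pattern (not the actual SwRegionRects code; here two rectangles are "merged" only when one fully contains the other) shows how every successful merge throws away all the work done so far:

```python
# Simplified model of restart-on-change compression. Rectangles are
# (x1, y1, x2, y2) tuples; a rectangle absorbs any rectangle it contains.
def contains(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def compress_with_restart(rects):
    rects = list(rects)
    restart = True
    while restart:              # every change rescans all pairs: O(n^3)
        restart = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if contains(rects[i], rects[j]):
                    del rects[j]        # drop the contained rectangle...
                    restart = True      # ...and restart the whole scan
                    break
            if restart:
                break
    return rects
```

A single pass that keeps merging in place would give the same result here without the restarts.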

Another kind of interesting take on things is the SwRegionRects::operator-= in there. Would you expect that rectangles would be collected by simply, well, collecting them and then merging them together? Maybe you would, but that's not how it's done here. See, somebody apparently thought that it'd be better to use the whole area, and then remove rectangles to paint from it, and then at the end invert the whole thing. The document area is limited, so maybe this was done to "easily" crop everything by the area? It works, except, of course, this is way slower. Just not slow enough to really notice when n is 1.

Other code that works fine with small numbers but fails badly with larger ones is VCLEventListeners, a class for getting notified about changes to VCL objects such as windows. It's simply a list of listener objects, and normally there aren't that many of those. But if LO core gets overloaded, this may grow. And since each listener may remove itself from the list at any point, the loop calling all of them checks, for each listener, whether it is still in the list. So, again, O(n^2). And, of course, it's only rarely that any listener removes itself, so the code spends a lot of time doing checks just in case.

But so that I do not talk only about old code, new code can do equally interesting things. Remote rendering uses LOK (LibreOfficeKit), which uses text-based messages to send notifications about changes. And the intuitive choice for writing text is C++ iostreams, which are flexible, and slow. So there will be a lot of time spent in creating text messages, because as said above, there are many changes happening, repeatedly. And since there are so many repeated messages, it makes sense to have an extra class, CallbackFlushHandler, that collects these messages and drops duplicates. Except ... for many of the checks it first needs to decode text messages back to binary data, using C++ iostreams. And in most cases, it will find out that it can drop some message duplicates, so all these string conversions were done for nothing. Oops.

And there are more ways in which things can get slower rather than faster. CallbackFlushHandler uses an idle timer to first process all data in bulk and flush the data at once only when idle. Except if it gets too busy to keep up, and it can easily get too busy because of all the things pointed out above, it may take a very long time before any data is flushed. To make things even worse, the queue of collected messages will be getting longer and longer, which means searching for duplicates and compressing it will take longer. Which in turn will make everything even slower, which again in turn will make everything even slower. Bummer.

All in all, if unlucky, it may not take that much for everything to slow down very noticeably. Online's perf-test, which simulates only 6 users typing, can easily choke itself for a long time. Admittedly, it simulates them all typing at the same time and rather fast, which is not a very realistic scenario, but hitting the keyboard randomly and quickly is exactly how we all test things, right? So I guess it could be said that Collabora Online's perf-test simulates users testing Collabora Online performance :). Realistic scenarios are not going to be this bad.

Anyway. In this YT video you can see in the top part how perf-test performs without any optimizations. The other 5 simulated users are typing elsewhere in the document, so it's not visible, but it affects performance.

Improved performance

But as you can see in the other two parts, this poor performance is actually already a thing of the past. The middle part shows how big a difference even one change can make. In this specific case, the only difference is adding an extra high-priority timer to CallbackFlushHandler, which tries to flush the message queue before it becomes too big.

The bottom part is all the improvements combined, some of them already in git, some of them I'm still cleaning up. That includes changes like:

  • SwRegionRects::Compress() is now roughly O(n*log(n)), I think. I've fixed the pointless restarts on any change and implemented further optimizations, such as Noel's idea to first sort the rectangles and not compare ones that cannot possibly overlap.
  • I have also changed the doubly-inverted paint rectangles handling to simply collecting them, cropping them at the end and compressing them.
  • One of the things I noticed when views collect their paint rectangles is that often they are adjacent and together form one large rectangle. So I have added a rather simple optimization of detecting this case and simply growing the previous rectangle.
  • Since it seems each Writer view really needs to collect its own paint rectangles, I have at least changed it so that they do not keep telling each other about them all the time in LOK mode. Now they collect them, and only once at the end are they all combined together and compressed; often thousands of rectangles become just tens of them.
  • Another thing Writer views like to announce all the time in LOK mode are cursor and selection positions. Now they just set a flag and compute and send the data only once at the end if needed.
  • VCLEventListeners now performs checks only if it knows that a listener has actually removed itself. Which it knows, because it manages the list.
  • Even though LOK now uses tiled rendering to send the view contents to clients, LO core was still rendering also to windows, even though those windows are never shown. That's now avoided by ignoring window invalidations in LOK mode.
  • Noel has written a JSON writer which is faster than the Boost one, and made non-JSON parts use our OString, which is faster and better suited for the fixed-format messages.
  • I have converted the from-message conversions to also use our strings, but more importantly I have changed internal LOK communications to be binary rather than text based, so that they usually do not even have to be converted. Only at the end those relatively few non-duplicated text messages are created.
  • Noel has optimized some queue handling in CallbackFlushHandler and then I have optimized it some more. Including the high-priority timer mentioned above.
  • There have been various other improvements from others from Collabora as part of the recent focus on improving performance.
  • While working on all of this, I noticed that even though we have support for Link Time Optimization (LTO), we do not use it, probably because it was broken on Windows. I've fixed this and sorted out a few other small problems, and future releases should get a couple percent better performance across the board from this.

This is still work in progress, but it already looks much better, as now most of the time is actually spent doing useful things such as making the actual document changes or drawing and sending document tiles to clients. LOK and Collabora Online performance should improve noticeably: recent Collabora versions 6.4.x already include some improvements, and the upcoming Collabora Online 2021 should have all of them.

And even though this has been an exercise in profiling LibreOffice performance for something nobody thought of back when the original OpenOffice code was written, some of these changes should matter even for desktop LibreOffice and will be included starting with LO 7.3.




#openSUSE Tumbleweed review of week 42 of 2021

Tumbleweed is a continuously updated "rolling release" distribution. Here you can keep up with the latest news.

Tumbleweed

openSUSE Tumbleweed is the "rolling release", continuously updated version of the openSUSE GNU/Linux distribution.

Let's take a look at the news that has arrived in the repositories this week.

You can read the original announcement, published under a CC-BY-SA license, on Dominique Leuenberger's blog at this link:

This week the openQA tests held back snapshots because of various problems: of the 6 snapshots that went through QA, only 2 new ones ended up being published.

That is a good thing. It means the testing systems work and catch errors before they reach users and cause problems on their systems.

As mentioned, 2 snapshots were published (1016 and 1019), which contained, among others, these updates:

  • KDE Gear 21.08.2
  • KDE Plasma 5.23.0
  • KDE Frameworks 5.87.0
  • Apparmor
  • Gimp 2.10.28
  • Linux kernel 5.14.11
  • Mesa 21.2.4

And in upcoming updates you will find:

  • KDE Plasma 5.23.1
  • Systemd 249.5
  • Linux kernel 5.14.14
  • Bison 3.8.2
  • RPM 4.17
  • openSSL 3.0.0

If you want to stay up to date with current, tested software, use openSUSE Tumbleweed, the rolling release option of the openSUSE GNU/Linux distribution.

Stay up to date, and you know: Have a lot of fun!!

Links of interest


——————————–


openSUSE Tumbleweed – Review of the week 2021/42

Dear Tumbleweed users and hackers,

This week has been overshadowed by snapshots that did not pass openQA – in total, we had 6 snapshots in QA, of which only 2 made it through and were published. Of course, all the issues identified have resulted in the relevant bug reports; the maintainers will work on them, fix and resubmit the packages that needed to be reverted, and then we move forward. After all, openQA does exactly what we want it to do: it protects you, the users, from getting broken snapshots.

So, as said, we published two snapshots (1016 and 1019), containing those updates:

  • KDE Gear 21.08.2
  • KDE Plasma 5.23.0
  • KDE Frameworks 5.87.0
  • Apparmor profile update for Samba printing subsystem
  • Gimp 2.10.28
  • Linux kernel 5.14.11 (5.14.12 was responsible for one of the snapshots being blocked)
  • Mesa 21.2.4

The staging projects are currently filled with:

  • KDE Plasma 5.23.1
  • Systemd 249.5
  • Linux kernel 5.14.14
  • Bison 3.8.2: breaks gdb
  • RPM 4.17: one fix for suse-hpc missing (Fix submitted to devel prj)
  • Coreutils 9.0
  • gpg 2.3.x: some openQA issues detected, https://progress.opensuse.org/issues/101358
  • openSSL 3.0.0: no active progress