New openSUSE forum in Spanish
Hello, Geekos.
DiabloRojo, Karlggest and Victorhck, the former moderators of ForoSuSE, openSUSE's Spanish-language forum, which is currently unavailable, are pleased to announce to you, in Latin:
Habemus Forum!
Forosuse has been the meeting place of the Spanish-speaking openSUSE community for many years. Its administrator, Riven, kept the site running as a place to learn, to share, and to bring together those of us who count openSUSE among our passions.
We believe his work deserves thanks, for the time and resources he invested altruistically in keeping forosuse.org running for so long. But things change: new interests come up in life, new projects take up our time, and it is only natural that they do.
For technical reasons our beloved forosuse suffered setbacks that kept it offline for the past several months, caused by updates to the forum software itself and other hosting problems. After many months in which those problems were not resolved, we, the former moderators of forosuse, made a decision.
We decided to create the new forum within the openSUSE project's official infrastructure instead of setting up a new domain. This way we centralize our efforts to help Geeko users, and the maintenance of the forum is taken out of our hands and becomes the job of people wiser than us.
We also want to highlight and thank the role played by the forums administrators, who have always been eager to help us, gave us the tools, and did the system administration work of setting up the forum and making the interface available in Spanish.
We are excited about the new forum and we hope to count on your help to build a community around our beloved Geeko in our own language. You can find us at:
https://forums.opensuse.org/forumdisplay.php/957-Espa%C3%B1ol-(Spanish)
inside the Spanish-language forum itself. Don't hesitate to create a new account. This new place aims to bring the Spanish-speaking openSUSE community back together, to keep offering mutual support, and to share around openSUSE once again.
We hope the site will gradually grow and improve with your help. Feel free to post in the new forum, tell us how things are going with your Geeko, and above all:
Have a lot of fun!
Install VirtualBox Guest Additions for openEuler 20.03 SP1
This article discusses the issues I ran into, and how I solved them, when installing the VirtualBox Guest Additions on openEuler 20.03 SP1.
- After installing openEuler 20.03 SP1, log in to openEuler (I log in as root here; if you are a regular user, you may need to prefix the commands below with sudo), then click Devices -> Insert Guest Additions CD image... to load the latest Guest Additions CD image into the system.
- Mount the CD:
mkdir -p /run/media/openeuler/VBoxAdditions
mount /dev/sr0 /run/media/openeuler/VBoxAdditions
cd /run/media/openeuler/VBoxAdditions

- If you try to run the installer directly at this point, it may fail with an extraction error, because a minimal installation lacks the tar program. Install it first:
dnf update
dnf install tar

- Run the installation:
./VBoxLinuxAdditions.run

You will see that an error occurs. Let's take a look at the details:
cat /var/log/vboxadd-setup.log

It turns out to be a compilation error. According to the Linux kernel documentation, the access_ok function changed from taking three parameters to taking two starting with kernel 5.0. The Guest Additions source decides between the three- and two-parameter forms based on the kernel version number. However, although openEuler 20.03 SP1 reports a 4.x kernel, it already carries the backported patch from kernel 5, so access_ok takes only two parameters there. We therefore only need to change the Guest Additions source so that it calls access_ok with two parameters.
- Modify the VBoxAdditions source code (adjust the path to match your Guest Additions version; a scripted alternative is sketched at the end of this article):
cd /opt/VBoxGuestAdditions-6.1.22/src/vboxguest-6.1.22/vboxguest/r0drv/linux
vi memuserkernel-r0drv-linux.c
Press i to edit, and change RTLNX_VER_MIN(5,0,0) to RTLNX_VER_MIN(4,0,0) in the two places where it guards the access_ok calls.
Then press the Esc key and type :wq to save and exit.
The same issue exists in another file, so continue with:
cd ../../../vboxsf
vi regops.c
Make the same change there, then press the Esc key and type :wq to save and exit.
- Finally continue the installation:
/sbin/rcvboxadd setup

Installation is complete!
After restarting the system, the VirtualBox Guest Additions are ready to use!
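For reference, here is a hedged, scripted equivalent of the two manual edits above. It is only a sketch: it assumes the Guest Additions 6.1.22 paths shown earlier, and it blindly replaces every occurrence of the version check in both files, so compare against the backups before rebuilding.
# Sketch only: patch both files in one go (paths assume Guest Additions 6.1.22).
SRC=/opt/VBoxGuestAdditions-6.1.22/src/vboxguest-6.1.22
# Back up the originals, then lower the version check so access_ok() is
# called with two arguments, matching openEuler's backported kernel.
for f in "$SRC/vboxguest/r0drv/linux/memuserkernel-r0drv-linux.c" \
         "$SRC/vboxsf/regops.c"; do
    cp "$f" "$f.orig"
    sed -i 's/RTLNX_VER_MIN(5,0,0)/RTLNX_VER_MIN(4,0,0)/g' "$f"
done
# Rebuild the kernel modules as in the step above.
/sbin/rcvboxadd setup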
A solution for rEFInd failing to load via shim when Secure Boot is enabled
Background
Ubuntu 21.10 boots on my computer with Secure Boot enabled, using shim 15.4. Following the official tutorial, I installed rEFInd v0.13.2 (the latest version at the time of writing) from the PPA in Ubuntu 21.10. However, whenever I restart the system and try to load rEFInd, it fails with the message Verification failed: (0x1A) Security Violation. I am sure that both refind_local.cer and refind.cer under EFI/refind/keys/ have been enrolled through MokManager (although only refind_local.cer should be needed for a PPA installation).
Cause
From this post, I learned that rEFInd (as of v0.13.2) lacks the .sbat section. For shim 15.3 and later, SBAT is mandatory, so rEFInd fails to start.
The post also indicates that the author of rEFInd is looking into how to solve the problem; I hope a later version fixes it.
Solution
In short, you need to fall back to shim 15 to work around this problem. To do so, perform the following steps (written for amd64; the steps are similar on other architectures):
- Obtain MokManager and the Microsoft-signed shim EFI binary from Ubuntu Launchpad. To do this, download shim_15+1552672080.a4a1fbe-0ubuntu2_amd64.deb and shim-signed_1.45+15+1552672080.a4a1fbe-0ubuntu2_amd64.deb.
- Unpack the downloaded shim_15+1552672080.a4a1fbe-0ubuntu2_amd64.deb and take out the mmx64.efi file (data.tar.xz -> . -> usr/lib/shim/mmx64.efi).
- Unpack the downloaded shim-signed_1.45+15+1552672080.a4a1fbe-0ubuntu2_amd64.deb and take out the shimx64.efi.dualsigned file (data.tar.xz -> . -> usr/lib/shim/shimx64.efi.dualsigned). Rename it to shimx64.efi. (A scripted extraction sketch is included at the end of this article.)
- Download refind-bin-0.13.2.zip. Then create a new folder, and put the two extracted files, together with the downloaded zip file, into it.
- Open terminal in the fore-mentioned folder, then execute the following commands:
unzip refind-bin-0.13.2.zip
cd refind-bin-0.13.2
sudo ./refind-install --shim ../shimx64.efi
If you encounter any confirmation during the installation process, just enter y to confirm.
- After restarting, if it prompts Verification failed, refer to step 9 of the official tutorial. Select Enroll key from disk, then select the ESP partition where you installed rEFInd, and finally choose the file at EFI/refind/keys/refind.cer to import.
- If you use a non-Ubuntu Linux distribution on your computer, also import the .cer files corresponding to your distributions from EFI/refind/keys in the same way. Failing to do so may leave your Linux distribution unable to boot via rEFInd.
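For reference, here is a hedged sketch of the extraction described in the steps above, using dpkg-deb (available on Debian/Ubuntu systems). The .deb filenames are the ones listed earlier; adjust them if you downloaded different revisions.
# Sketch only: unpack the two packages and pull out the EFI binaries.
mkdir shim-unpacked shim-signed-unpacked
dpkg-deb -x shim_15+1552672080.a4a1fbe-0ubuntu2_amd64.deb shim-unpacked
dpkg-deb -x shim-signed_1.45+15+1552672080.a4a1fbe-0ubuntu2_amd64.deb shim-signed-unpacked
# mmx64.efi comes from the shim package...
cp shim-unpacked/usr/lib/shim/mmx64.efi .
# ...and the dual-signed shim comes from shim-signed, renamed to shimx64.efi.
cp shim-signed-unpacked/usr/lib/shim/shimx64.efi.dualsigned ./shimx64.efi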
Keras Model Errors on Loading using TF2.3 – IndexError: list index out of range
Here is an example of how to solve similar problems, taken from issue #43561.
When I tried to load the sequential model here using tf.keras.models.load_model in TF 2.3.1, an error was thrown at the following location:
~/.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _should_skip_first_node(layer)
1031 return (isinstance(layer, Functional) and
1032 # Filter out Sequential models without an input shape.
-> 1033 isinstance(layer._layers[0], input_layer_module.InputLayer))
1034
1035
IndexError: list index out of range
The model appears to have been trained with Keras under TF 1.9; the model definition can be found here, and here is the training code.
I then downgraded to TF 2.2 and 2.1 with the same code as above; it threw the same error as #35934 Keras Model Errors on Loading – 'list' object has no attribute 'items'.
Next I downgraded to TF 2.0, and the code ran indefinitely; I eventually had to stop it manually:
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py in IsMapping(o)
2569
2570 """
-> 2571 return _pywrap_tensorflow_internal.IsMapping(o)
2572
2573 def IsMappingView(o):
KeyboardInterrupt:
I then tried using keras instead of tf.keras with TF 2.3.1 and Keras 2.3.1. First I hit an error that can be solved as described here: https://github.com/tensorflow/tensorflow/issues/38589#issuecomment-665930503 . Then another error occurred:
~/.local/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in function(inputs, outputs, updates, name, **kwargs)
3931 if updates:
3932 raise ValueError('`updates` argument is not supported during '
-> 3933 'eager execution. You passed: %s' % (updates,))
3934 from tensorflow.python.keras import models # pylint: disable=g-import-not-at-top
3935 from tensorflow.python.keras.utils import tf_utils # pylint: disable=g-import-not-at-top
ValueError: `updates` argument is not supported during eager execution. You passed: [<tf.Variable 'UnreadVariable' shape=() dtype=int64, numpy=0>, <tf.Variable 'UnreadVariable' shape=(3, 3, 3, 32) dtype=float32, numpy=
array([[[[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0.],
......
So this approach fails as well.
Solutions
One way is to use TF 1.15.4 and Keras 2.3.1, which finally worked: inputs, outputs, summary, etc. are all parsed correctly, and data can be run through the model.
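For completeness, a minimal sketch of pinning that combination in a fresh virtual environment. It assumes a Python 3.7 (or older) interpreter is available, since TF 1.15 wheels are not published for newer Python versions.
python3.7 -m venv tf115-env        # TF 1.15 wheels require Python <= 3.7
source tf115-env/bin/activate
pip install "tensorflow==1.15.4" "keras==2.3.1"
python -c "import tensorflow as tf; print(tf.__version__)"   # should print 1.15.4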
Another is to modify the TF 2.3.1 source code so that the model can be used in the latest version with tf.keras. You have to redefine _should_skip_first_node in tensorflow/python/keras/engine/functional.py:
def _should_skip_first_node(layer):
  """Returns True if the first layer node should not be saved or loaded."""
  # Networks that are constructed with an Input layer/shape start with a
  # pre-existing node linking their input to output. This node is excluded from
  # the network config.
  if layer._layers:
    return (isinstance(layer, Functional) and
            # Filter out Sequential models without an input shape.
            isinstance(layer._layers[0], input_layer_module.InputLayer))
  else:
    return isinstance(layer, Functional)
Afterwards
I have submitted PR #43570 to TensorFlow, and it was fixed in TensorFlow 2.5.0.
Virtual Conferences: a love-hate relationship
I love conferences. Now that most conferences are either virtual or hybrid (both virtual and in-person), people often say that it must be heaven for me. I can attend many more conferences and give many more talks. Well, it is not that simple. Virtual conferences are a love-hate relationship for me. Of course, there are some advantages, but also disadvantages.
Giving virtual talks
Yes, I could give more talks. Even overlapping conferences are not a problem any more: I can give a talk at a European conference in the morning, and give another talk at a US conference in the evening. Most of the time I give talks about sudo and syslog-ng, and there are relevant conferences almost every other day.
Does it make sense to give a talk this often? No. Repeating the same talk over and over again is boring both for me and for the audience, especially as talks are often recorded and published. Both sudo and syslog-ng are complex but small utilities, covering only a limited number of use cases. Creating a brand new talk for each event would be a lot of effort and is not even possible given the limited number of topics.
Also, from the speaker's point of view, giving a talk virtually is not the same as being there in person. I love teaching and spreading knowledge. However, giving the actual talk, transferring knowledge, is just a third of the value. The hallway track, talking to (potential) users and collecting feedback, is at least as important from a product management point of view. And learning about the latest technology trends first-hand from fellow speakers at dinners and other events is also a huge bonus. None of these are possible at virtual events, or at least not nearly as effective so far.
See my opensource.com article about being an open source evangelist for a few ideas on how to make virtual events better for speakers: https://opensource.com/article/21/1/open-source-evangelist
Giving virtual talks does not work really well for me. I am used to seeing my audience, not just a screen with a mic and sometimes a camera. All I need is occasional eye contact and being able to check how my audience reacts; whether it is ten people or a thousand makes no difference. After giving a few talks where the technical content was fantastic according to feedback, but the presentation was boring and monotonous, I found a workaround. My talks are now given or pre-recorded in a meeting room at the office, with one or two colleagues listening. From the point of view of talk quality, this setting is almost as good as a real conference audience.
Of course, there are also cases where being virtual is the only way I can present at a conference. When I give a talk on sudo or syslog-ng, I travel on the company budget. However, I am happy to talk about other topics as well. Last year I was able to give a talk at the OpenPOWER North America Summit about POWER and open source this way. I am a POWER enthusiast, but it is not part of my job. Traveling there on my own would have been a bit too expensive for me, but being virtual allowed me to participate and give a talk.
Participating at events
There is an open source and / or IT security focused conference almost every day. I am happy to participate in Arm and POWER related events too. However, participating in each and every relevant event simply does not make sense. While I try not to repeat the exact same talk over and over again, many speakers do. And of course participation takes a lot of time too.
I often hear that I should just watch the recordings. That way I can watch them at my own convenience, when it does not clash with meetings or hiking plans. It does not work. One of the values of attending virtual conferences live is that you can ask questions. When you watch a recording, you cannot, and the Q&A part of sessions is often not included in the recordings. With an overwhelming amount of recordings available and on my "to watch" list, my experience is that I either consider a talk important enough to watch live, or it will long be irrelevant by the time I get to the recording.
Virtual conferences are great when I do not have the time or resources to be at the event in person. This is how I can participate in OpenPOWER events, or in All Things Open this year with the travel restrictions still in place. However, in-person events are better in many ways. Physical presence helps with focus: your whole mind is there. Not to mention conference t-shirts and stickers :-) You can talk to like-minded people; chat cannot fully replace that.
In-person conferences are a fantastic place to finally meet people you have worked with over the Internet for many years. I have met various FreeBSD developers, ARM people and syslog-ng users at FOSDEM, All Things Open or SCALE. Talking to someone in person, even just for a few minutes, makes these working relationships even closer. Talking to engineers at the exhibition booths is also different. They are a lot more open and happy to talk about topics not covered in official communication, like how syslog-ng is integrated in their product, or what they consider our main strengths and weaknesses. At one conference an AMD engineer explained to me how they are still working on ARM CPUs, even if nothing of it is visible from the outside. None of this is available in a chat window.
What’s next?
I know that it is more difficult from the organizers' point of view, but I hope that most events will go hybrid. As a speaker I definitely want to present in person whenever possible; it provides a lot more value from the speaker's point of view. But hybrid offers the possibility to talk or participate even if I cannot be there for financial or scheduling reasons. So, as much as I hate the word "hybrid" thanks to cloud-related marketing materials, I'd love to see hybrid conferences in the long term! :)
Sending logs to Humio using the elasticsearch-http() destination of syslog-ng
One of the most popular syslog-ng destinations is Elasticsearch. Humio, a log management provider, supports a broad range of ingest options and interfaces, including an Elasticsearch-compatible API. Last week, Humio announced Humio Community Edition, which provides the full Humio experience for free, with some limitations on daily ingestion and retention time. I tested the Community Edition, and it works perfectly well with syslog-ng.
If you come from the Humio side, you might wonder what syslog-ng is. It is an application for high-performance central log collection. Traditionally, syslog messages were collected centrally and saved to text files. Nowadays, syslog-ng acts more like a log management layer: it collects log messages from hosts, saves them for long-term storage, but also forwards them to multiple destinations, like SIEMs and other log analysis solutions. This way, it is enough to collect log messages only once, and syslog-ng delivers the right log messages to the right destinations in the right format, after some initial processing.
Humio is available as a cloud service or self-hosted, where you can send all your logs for storage and analysis. It has an easy-to-use interface to query log messages which can be extended with further analytics possibilities from the Humio marketplace.
From this blog, you can learn how to get started with Humio and syslog-ng. While Humio provides many other APIs for log ingestion, I focus on the elasticsearch-http() destination of syslog-ng, demonstrating that there is no vendor lock-in: the same driver works equally well for Elastic’s Elasticsearch, AWS’s OpenSearch and for Humio.
Digest of YaST Development Sprints 133 & 134
October has been a busy month for the YaST Team. We have fixed quite a few bugs and implemented several features. As usual, we want to offer our beloved readers a summary of the most interesting stuff from the latest couple of development sprints, including:
- Improved handling of users on already installed systems
- Progress in the refactoring of software management
- Better selection of the disk in which to install the operating system
- More robust handling of LUKS encrypted devices
- Fixes for libYUI in some rare cases
- Improvements in time zone management (affecting mainly China)
Improved Handling of Users
Let us start by quoting our latest report:
“regarding the management of users, we hope to report big improvements in the next blog post”.
That time has now come, and we can announce that we have brought the revamped user management described in this monographic blog post to the last parts of YaST that were still not taking advantage of the new approach. The changes are receiving an extra round of testing with the help of the Quality Assurance team at SUSE before we submit them to openSUSE Tumbleweed. When that happens, both the interactive YaST module to manage users and groups and its corresponding command line interface (not to be confused with the ncurses-powered text mode) will start using useradd and friends to manage users, groups and the related configurations.
There should not be big changes in behavior apart from the improvements already mentioned in the original blog post presenting the overall idea. But a couple of fields were removed from the UI to reflect the current status:
- The password for groups, which is an archaic and discouraged mechanism that nobody should be using in the era of sudo and other modern solutions.
- The fields "Secondary Groups" and "Skeleton Directory" from the tab "Default for New Users", since those settings are either gone or not directly configurable in recent versions of useradd.
There is still a lot of room for improvements in YaST Users, but we will postpone those to focus on other areas of YaST that need a similar revamping to the one done in users management.
Refactoring Software Management
One of those areas that need a bit of love and would benefit from some internal restructuring is the management of software, which goes much further than just installing and removing packages. We have just started with such a refactoring and we don’t know yet how far we will get on this round, but you can already read about some of the things we are doing in the description of this pull request, although it only shows a very small fraction of the full picture.
New Features in the Storage Area
We also improved the way YaST handles some storage technologies. First of all, we instructed YaST about the existence of BOSS (Boot Optimized Storage Solution) drives in some Dell systems. From now on, such devices will be automatically chosen as the first option to install the operating system, as described in this pull request with screenshots. As a bonus for that same set of changes, YaST will be more sensible regarding SD Cards.
On the other hand, we adapted the way YaST (or libstorage-ng, to be fully precise) references LUKS devices in the fstab file, to make it easier for systemd to handle some situations. Check the details in this other pull request (sorry, no screenshots this time).
Fixes for libYUI
As usually revealed by our posts, YaST is full of relatively unknown features that were developed to cover quite exceptional use-cases. Those characteristics remain there, used by a few users… and waiting for a chance to attack us! During the recent sprints we fixed the behavior of libYUI (the toolkit powering the YaST user interface) in a couple of rare scenarios. Check the descriptions of this pull request and this other one for more details.
Fun with Flags… err Time Zones
For reasons everybody knows, being able to work from home and to coordinate with people in different geographical locations has become critical lately. That scenario has increased the importance of properly configured time zones in the operating system. And that made us realize the time zones handled by YaST for China were not fully aligned with international standards. This pull request explains what the problem was and how we fixed it, so applications like MS Teams can work on top of (open)SUSE distributions just fine... everywhere on the globe.
That’s All… Until we Meet Again
As you know, YaST development never stops. And, although we only report the most interesting bits in our blog posts, we keep working in many areas… from very visible features and bug fixes to more internal refactoring. In any case, we promise to keep working and to keep you updated in future blog posts. So stay tuned and have a lot of fun!
Team Profile
You can see five segments:
- The center (5,5) is the "worker", who has a balanced set of attributes with no extremes. These team members are extremely important because they tend to just get stuff done.
- The top left (9,1) is the "expert" who is focused on the task and its details but doesn't consider people that much. You need these to get the depth of work which is necessary to create great results.
- The bottom right (1,9) is the "facilitator" who is something like the soul of the team, focused on social interactions and supports the team in creating great results.
- The top right (9,9) is the "leader" who is strong on task and people and is giving direction to the team. You need these but you don't want to have more than one or two in a team otherwise there are conflicts of leadership.
- The bottom left (1,1) is the "submarine" who floats along and tries to stay invisible. Not strong on any account. You don't want these in your team.
The test can provide some insight into the balance of the team. You want to have all but the submarine covered with an emphasis on the workers.
What does your team look like on this diagram?
Setting up Let's Encrypt certificates for the 389-ds LDAP server
In the past months I've set up LDAP at home, to avoid having different user accounts for the services that I run on my home hardware. Rather than the venerable OpenLDAP, I settled on 389 Directory Server, commercially known as Red Hat Directory Server, mainly because I was more familiar with it. Rather than describing how to set it up (Red Hat's own documentation is excellent in that regard), this post will focus on the steps required to enable encryption using Let's Encrypt certificates.
The problem
Even though the LDAP server (there are actually two, but for the purposes of this post it does not matter) was only operating in my LAN, I wanted to reduce the amount of information going around in the clear, including LDAP queries. That meant setting up encryption, of course.
The problem was that Red Hat's docs only cover the "traditional" way of obtaining certificates, that is, obtaining a Certificate Authority (CA) certificate, requesting a server certificate, obtaining it, and setting it up on the server. There is (obviously) no mention of Let's Encrypt anywhere. I found some guides, but they were either too complicated (lots of fiddling around with certutil) or unclear about some steps. Hence, this post.
NOTE: I've focused on 389-ds version 2.0 and above, which has a different set of CLI commands than the venerable 1.3 series. All of the steps shown here can also be carried out via the Cockpit Web interface, if your distribution carries it (spoiler: openSUSE doesn't).
Importing the CA
This is arguably one of the most important steps of the process. 389-ds also needs to store the CA for your certificates. As you may (or may not) know, there are two Let's Encrypt CA certificates:
- The actual “root” certificate, by Internet Security Research Group (ISRG);
- An intermediate certificate, signed by the above root, called “R3”.
Let's Encrypt certificates, like the one powering this website, are signed by R3. But since you have to follow a "chain of trust", to validate the certificate you follow these steps (excuse me, security people; this is probably a bad simplification):
- Check the actual certificate (e.g. the one on dennogumi.org)
- The certificate is signed by “R3”, so move up the chain and check the R3 certificate
- The R3 certificate is signed by the ISRG root certificate, so move up the chain and check the ISRG root
- The ISRG certificate is trusted by the OS / application using it, so everything stops there.
If any of the steps fails, the whole validation fails.
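As an aside, you can reproduce that chain walk by hand with openssl. A small sketch, assuming you have the ISRG root, the R3 intermediate and your server certificate saved as PEM files with these hypothetical names:
# Sketch only: validate the server certificate against root + intermediate.
openssl verify -CAfile isrg-root-x1.pem -untrusted lets-encrypt-r3.pem server-cert.pem
# Prints "server-cert.pem: OK" only if every link in the chain checks out.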
This long-winded explanation is to tell you that 389-ds needs the whole certificate chain for its CA (so ISRG root + R3) in order to properly validate the Let's Encrypt certificate you'll use. If you don't do that, chances are that some software which uses the system CA will work (for example ldapsearch), but others, like SSSD, will fail with "Unknown CA" errors (buried deep in the debug logs, so in practice they'll just fail and you won't know why).
Let's get down to business. Access the Chain of Trust page for Let's Encrypt and download the relevant certificates. I'm not sure if 389-ds supports ECDSA certificates, so I downloaded the RSA ones: ISRG Root X1 and Let's Encrypt R3 (both in PEM format). Put them somewhere on your server. Then, as root, import the two CA certificates into 389-ds (substitute LDAP_ADDRESS with the LDAP URI of your server):
# ISRG Root
dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS security ca-certificate \
add --file /path/to/certificate --name "ISRG"
# Let's Encrypt R3
dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS security ca-certificate \
add --file /path/to/certificate --name "R3"
NOTE: This step may not be necessary if you use Let’s Encrypt’s “full chain” certificates, but I did not test that.
Importing the certificates
Then, you have to import a Let’s Encrypt certificate, which means you have to obtain one. There are hundreds of guides and clients that can do the job nicely, so I won’t cover that part. If you use certbot, Let’s Encrypt’s official client, you will have the certificate and the private key for it in /etc/letsencrypt/live/YOURDOMAIN/fullchain.pem and /etc/letsencrypt/live/YOURDOMAIN/privkey.pem.
You need to import the private key first (substitute LDAP_ADDRESS and DOMAIN with the LDAP URI of your server and the Let’s Encrypt domain, respectively):
dsctl LDAP_ADDRESS tls import-server-key-cert \
/etc/letsencrypt/live/DOMAIN/fullchain.pem \
/etc/letsencrypt/live/DOMAIN/privkey.pem
Note that you pass the certificate as well as the key when importing it (if in doubt, check Red Hat's documentation).
Once the key is done, it is time to import the actual certificate:
dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS security certificate add \
--file /etc/letsencrypt/live/DOMAIN/fullchain.pem \
--primary-cert \
--name "LE"
--primary-cert sets the certificate as the server’s primary certificate.
Then, we switch on TLS in the server:
dsconf -v -D "cn=Directory Manager" LDAP_ADDRESS config replace \
nsslapd-securePort=636 nsslapd-security=on
And finally, we restart our instance (replace INSTANCE with your configured instance name):
systemctl restart dirsrv@INSTANCE
Testing everything out
You can use ldapsearch to check whether the SSL connection is OK (I’ve used Directory Manager, but you can use any user you want):
# STARTTLS
ldapsearch -H ldap://your.ldap.hostname -W -x -D "cn=Directory Manager" -ZZ "search filter here"
# TLS
ldapsearch -H ldaps://your.ldap.hostname -W -x -D "cn=Directory Manager" "search filter here"
If everything is OK, you should get a result: otherwise, you’ll get an error like “Can’t contact the LDAP server”.
Alternatively, you can use openssl:
openssl s_client -connect your.ldap.hostname:636
[...]
---
SSL handshake has read 5188 bytes and written 438 bytes
Verification: OK
Don’t forget to adjust your applications to connect via ldaps rather than ldap after everything is done.
Renewing the certificates
To renew the certificates you repeat the steps outlined above for the certificates (without the CA part, of course). Make sure you always import your private key: if there is a mismatch between it and the certificate, 389-ds will refuse to start.
If you use certbot, you can use a post-renewal hook to trigger the import of the certificate into 389-ds. This is what I've been using; bear in mind it's customized to my setup and does a few more things than strictly needed. Also, it only imports the certificate, not the full chain.
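As an illustration only (separate from the script mentioned above), here is a minimal sketch of such a hook, e.g. dropped into /etc/letsencrypt/renewal-hooks/deploy/. LDAP_ADDRESS, DOMAIN and INSTANCE are the same placeholders as in the earlier commands, and depending on your 389-ds version you may need to remove or replace the existing "LE" certificate instead of adding it again.
#!/bin/bash
# Sketch of a certbot deploy hook: re-import the renewed key and certificate
# into 389-ds, then restart the instance. Placeholders as in the main text.
set -e
dsctl LDAP_ADDRESS tls import-server-key-cert \
    /etc/letsencrypt/live/DOMAIN/fullchain.pem \
    /etc/letsencrypt/live/DOMAIN/privkey.pem
# Re-register the certificate; on some versions the old "LE" entry may need
# to be removed first.
dsconf -D "cn=Directory Manager" LDAP_ADDRESS security certificate add \
    --file /etc/letsencrypt/live/DOMAIN/fullchain.pem \
    --primary-cert --name "LE"
systemctl restart dirsrv@INSTANCE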
I messed up and 389-ds won’t start! What do I do?
You can disable encryption by editing /etc/dirsrv/slapd-INSTANCE/dse.ldif and changing nsslapd-security to off, then start 389-ds again. Then you can review everything and see what went wrong. But if you can, I recommend the Cockpit Web UI: it makes the first-time setup a breeze.
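A minimal sketch of that recovery edit, assuming the instance is stopped first and that dse.ldif contains a plain "nsslapd-security: on" attribute line:
systemctl stop dirsrv@INSTANCE
cp /etc/dirsrv/slapd-INSTANCE/dse.ldif /etc/dirsrv/slapd-INSTANCE/dse.ldif.bak   # keep a backup
sed -i 's/^nsslapd-security: on$/nsslapd-security: off/' /etc/dirsrv/slapd-INSTANCE/dse.ldif
systemctl start dirsrv@INSTANCE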
Wrap up
Importing the certificates is surprisingly simple, but my Internet searches have been frustrating because at least half of what I found was either not applicable, incomplete, or did not work. I hope this small tutorial can be useful for those who want a bit more security in their LDAP setup.
Optimizing LibreOffice for a larger number of users
Have you ever edited a document in LibreOffice in more than one window? Right, neither have I. Who'd think about LibreOffice and more than one user at the same time, right? Except ... somebody did, and that's how collaborative editing based on LibreOffice works. For whatever strange reason, at some point in the past somebody thought that implementing multiple views for one document in OpenOffice (StarOffice?) was a good idea. Just select Window->New Window in the menu and you can edit your favourite document in 50 views that each show a different part of the document and update in real time. And that, in fact, is how collaborative editing such as with Collabora Online works - open a document, create a new view for every user, and there you go.
But, given that this has never really been used that much, how well did the original code perform and scale with more users? Not that well, it turns out. Not a big surprise, considering that back when that code was written nobody presumably thought the same document could be edited by numerous users at the same time. But I've been looking into exactly this recently as part of optimizing Collabora Online performance, and boy, were there gems in there. You thought that showing the same document in more views would just mean more painting in those views? Nah, think again, this is OpenOffice code, the land of programming wonders.
Profiling the code
When running Online's perf-test, which simulates several users typing in the same document, most of the time is actually spent in SwEditShell::EndAllAction(). It's called whenever Writer finishes an action such as adding another typed character, and one of the things it does is telling other views about the change. So LO spends a little time adding the character, and the rest of the time is spent in various code parts "talking" about it. A good part of that is that whenever an action is finished, that view tells the others about it happening, and then all those views tell all other views about how they reacted to it, making every change O(n^2) in the number of views. That normally does not matter, since on the desktop n generally tends to be 1, but hey, add a few more views, and it can be an order of magnitude slower or more.
Redrawing, for example, is rather peculiar. When a part of the document changes, relevant areas of the view need redrawing. So all views get told about the rectangles that need repainting. In the desktop case those can be cropped by the window area, but for tiled rendering used by Online the entire document is the "window" area, so every view gets told about every change. And each view collects such rectangles, and later on it processes them and tells all other views about the changes. Yes, again. And it seems that in rare cases each view really needs its own repaint changes (even besides the cropping, as said before). So there's a lot of repeated processing of usually the same rectangles over and over again.
One of the functions featuring prominently in the CPU costs is SwRegionRects::Compress(), which ironically is supposed to make things faster by compressing a group of rectangles into a smaller set of rectangles. I guess it is one of those cases in OpenOffice where the developer had theoretically heard that optimizations are supposed to make things faster, but somehow the practical side of things just wasn't there. What happens here is that the function compares each rectangle with each other rectangle, checking if they can be merged ... and then, if yes and that changes the set of rectangles, it restarts the entire operation. Which easily makes the entire thing O(n^3). I have not actually found out why the restarting is there. I could eventually think of a rather rare case where restarting makes it possible to compress the rectangles more, but another possibility is that the code dates back to a time when it was not safe to continue after modifying whichever container SwRegionRects was using back then, and it has stayed there even though that class has been using std::vector for a long time.
Another kind of interesting take on things is the SwRegionRects::operator-= in there. Would you expect that rectangles would be collected by simply, well, collecting them and then merging them together? Maybe you would, but that's not how it's done here. See, somebody apparently thought that it'd be better to use the whole area, and then remove rectangles to paint from it, and then at the end invert the whole thing. The document area is limited, so maybe this was done to "easily" crop everything by the area? It works, except, of course, this is way slower. Just not slow enough to really notice when n is 1.
Other code that works fine with small numbers but fails badly with larger ones is VCLEventListeners, a class for getting notified about changes to VCL objects such as windows. It's simply a list of listener objects, and normally there aren't that many of those. But if LO core gets overloaded, this list may grow. And since each listener may remove itself from the list at any point, the loop calling all of them always checks, for each of them, whether the listener is still in the list. So, again, O(n^2). And, of course, it's only rarely that any listener removes itself, so the code spends a lot of time doing checks just in case.
But so that I do not talk only about old code, new code can do equally interesting things. Remote rendering uses LOK (LibreOfficeKit), which uses text-based messages to send notifications about changes. And the intuitive choice for writing text is C++ iostreams, which are flexible, and slow. So a lot of time is spent creating text messages, because as said above, there are many changes happening, repeatedly. And since there are so many repeated messages, it makes sense to have an extra class, CallbackFlushHandler, that collects these messages and drops duplicates. Except ... for many of the checks it first needs to decode text messages back to binary data, using C++ iostreams. And in most cases, it will find out that it can drop some message duplicates, so all these string conversions were done for nothing. Oops.
And there are more ways in which things can get slower rather than faster. CallbackFlushHandler uses an idle timer to first process all data in bulk and flush the data at once only when idle. Except if it gets too busy to keep up, and it can easily get too busy because of all the things pointed out above, it may take a very long time before any data is flushed. To make things even worse, the queue of collected messages will be getting longer and longer, which means searching for duplicates and compressing it will get longer. Which in turn will make everything even slower, which again in turn will make everything even slower. Bummer.
All in all, if unlucky, it may not take that much for everything to slow down very noticeably. Online's perf-test, which simulates only 6 users typing, can easily choke itself for a long time. Admittedly, it simulates them all typing at the same time and rather fast, which is not a very realistic scenario, but hitting the keyboard randomly and quickly is exactly how we all test things, right? So I guess it could be said that Collabora Online's perf-test simulates users testing Collabora Online performance :). Realistic scenarios are not going to be this bad.
Anyway. In this YT video you can see in the top part how perf-test performs without any optimizations. The other 5 simulated users are typing elsewhere in the document, so it's not visible, but it affects performance.
Improved performance
But as you can see in the other two parts, this poor performance is actually already a thing of the past. The middle part shows how big a difference even one change can make. In this specific case, the only difference is adding an extra high-priority timer to CallbackFlushHandler, which tries to flush the message queue before it becomes too big.
The bottom part is all the improvements combined, some of them already in git, some of them I'm still cleaning up. That includes changes like:
- SwRegionRects::Compress() is now roughly somewhere at O(n*log(n)) I think. I've fixed the pointless restarts on any change, and implemented further optimizations such as Noel's idea to first sort the rectangles and not compare ones that cannot possibly overlap.
- I have also changed the doubly-inverted paint rectangles handling to simply collecting them, cropping them at the end and compressing them.
- One of the things I noticed when views collect their paint rectangles is that often they are adjacent and together form one large rectangle. So I have added a rather simple optimization of detecting this case and simply growing the previous rectangle.
- Since it seems each Writer view really needs to collect its own paint rectangles, I have at least changed it so that they do not keep telling each other about them all the time in LOK mode. Now they collect them, and only once at the end are they all combined together and compressed; often thousands of rectangles become just tens of them.
- Another thing Writer views like to announce all the time in LOK mode are cursor and selection positions. Now they just set a flag and compute and send the data only once at the end if needed.
- VCLEventListeners now performs the checks only if it knows that a listener has actually removed itself. Which it knows, because it manages the list.
- Even though LOK now uses tiled rendering to send the view contents to clients, LO core was still rendering also to windows, even though those windows are never shown. That's now avoided by ignoring window invalidations in LOK mode.
- Noel has written a JSON writer which is faster than the Boost one, and made the non-JSON parts use our OString, which is faster and better suited for the fixed-format messages.
- I have converted the from-message conversions to also use our strings, but more importantly I have changed internal LOK communications to be binary rather than text based, so that they usually do not even have to be converted. Only at the end those relatively few non-duplicated text messages are created.
- Noel has optimized some queue handling in CallbackFlushHandler and then I have optimized it some more. Including the high-priority timer mentioned above.
- There have been various other improvements from others from Collabora as part of the recent focus on improving performance.
- While working on all of this, I noticed that even though we have support for Link Time Optimization (LTO), we do not use it, probably because it was broken on Windows. I've fixed this and sorted out a few other small problems, and future releases should get a couple percent better performance across the board from this.
This is still work in progress, but it already looks much better, as now most of the time is actually spent doing useful things like applying the actual document changes or drawing and sending document tiles to clients. LOK and Collabora Online performance should improve noticeably, recent (Collabora) 6.4.x versions should already include some improvements, and the upcoming Collabora Online 2021 should have all of them.
And even though this has been an exercise in profiling LibreOffice performance for something nobody thought of back when the original OpenOffice code was written, some of these changes should matter even for desktop LibreOffice and will be included starting with LO 7.3.



