Inkscape 1.2 released, now with multi-page support
Today I am pleased to tell you that Inkscape 1.2 has been released. The magnificent vector drawing editor thus continues its evolution with juicy new features, such as the ability to have multiple pages in a single document, a great step forward in my modest opinion.
Inkscape 1.2 released, now with multi-page support
One of the applications I like best when I have to design something is Inkscape. Its power and versatility make it wonderful, and its possibilities are limited only by your imagination and artistic skill… both of which are scarce in my case.
But that is no obstacle for this wonderful project to keep moving forward, and it has just presented version 1.2 which, according to the official Inkscape page, arrives with the following new features:
- Inkscape documents can now contain multiple pages, managed with the new Page tool
- Editable markers and dash patterns
- Merged Layers and Objects dialog
- Redesigned on-canvas alignment and snapping settings
- New Tiling Live Path Effect
- Redesigned Export dialog with preview and the ability to select objects/layers/pages and even multiple file formats to export to
- Import of SVG images from Open Clipart, Wikimedia Commons and other online sources
- Selectable object origin for numerical scaling and moving
- All alignment options in a single dialog
- Gradient editing in the Fill and Stroke dialog
- Gradient dithering
- Updated SVG font editor
- Text flow around shapes and text padding
- A handy boolean operation to split paths
- Configurable toolbar, continuous icon scaling and many more new customization options
- Performance improvements in many parts of the interface and in many different functions
- Many user interface improvements
- Numerous crash and bug fixes in the main Inkscape program and in the stock extensions
What is Inkscape?

For those who don't know, Inkscape is an open source vector graphics editor, similar to programs such as Adobe Illustrator, Corel Draw, Freehand or Xara X. What makes it unique is that its native format is Scalable Vector Graphics (SVG), an open, XML-based W3C standard.
Nowadays it is perfectly usable for all kinds of illustrations and its power is more than proven, as we can see in this video by Ger Garpe on designing a logo with gradients, cut-outs and a reflection.
The post Inkscape 1.2 released, now with multi-page support was published first on KDE Blog.
openSUSE Leap 15.4 RC available
The final release of the GNU/Linux distribution openSUSE Leap 15.4 is getting ever closer

The openSUSE community has published the Release Candidate (RC) of its GNU/Linux distribution openSUSE Leap 15.4, whose final version will be released in early June 2022.
After the development version of openSUSE Leap 15.4 passed all the automated openQA tests that check several aspects of the stability of the GNU/Linux distribution, this RC version has been published.
This new phase and version also means there is a new release of Leap Micro, a modern, lightweight operating system ideal for host-container and virtualized workloads, which also moves into its Release Candidate phase.
This RC version of openSUSE Leap 15.4 marks the package freeze for the software that will be included in the distribution, which you can use on servers, workstations, desktops, and for virtualization and container use.
The next milestone will be the Gold Master (GM) version of openSUSE Leap 15.4, which will be published on May 27, 2022.
During the development stage of Leap versions, contributors, packagers and the release team use a rolling development method (similar to openSUSE Tumbleweed) that is divided into phases rather than a single milestone release.
Snapshots are released with minor software version updates once they pass the automated tests, until the final GM release. At that point, the distribution switches from a rolling development method to a point-release cycle in which it receives updates until its End of Life (EOL).
So if you already have openSUSE Leap 15.4 installed for testing on your machine, remember that the recommended command to upgrade to this new version is «zypper dup» until the final version is published.
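A minimal sketch, assuming the machine is already running a Leap 15.4 Beta/RC snapshot:

# Refresh the repositories and run a full distribution upgrade
sudo zypper refresh
sudo zypper dup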
You can find all the information about the planned dates of upcoming releases at this link:
You can read the official announcement:
Without a doubt, this new version of the GNU/Linux distribution openSUSE Leap 15.4 will be another good example of stability, so you can enjoy your GNU/Linux without surprises on your laptop, your server, your virtual machine, your Raspberry Pi, …

Raptor CS: Fully Owner Controlled Computing using OpenPOWER
This week I am talking to Timothy Pearson of Raptor Engineering. He is behind the Talos II and Blackbird boards for IBM POWER9 CPUs. His major claim is creating the first fully owner controlled general purpose computer in a long while. My view of the Talos II and Blackbird systems is that these boards helped to revitalize the open source ecosystem around POWER more than any other efforts (See also: https://peter.czanik.hu/posts/cult-amiga-sgi-workstations-matter/). Most open source developers I talked to say that coding on a remote server is just work. Doing the same on your local workstation adds an important ingredient: passion. This is why the re-introduction of POWER workstations was a very important step: developers started to improve support for POWER also in their free time, not just in their regular working hours. I asked Tim how the idea of creating their POWER board was born, how Covid affected them and also a bit about their future plans.

Talos II mainboard
Raptor Engineering seems to focus on machine vision. How can a company that’s focused on other activities start to develop a mainboard for POWER CPUs?
So to start with, Raptor Engineering’s public facing website was always more of a way to market technologies we’d already developed for internal purposes to the general public. Raptor Engineering is, and always has been, more of a FPGA HDL/firmware/OS level design company focused on providing those services to those that need them. Sometimes we do engage in internal development projects to showcase certain technology capabilities publicly, and one of those was the open source machine vision system you allude to earlier.
More relevant to the current technology offerings, we’ve always had some degree of focus on security and what we now tend to call “owner control”. This was borne out of several long-running research and development projects, where significant investment was made in technologies that could be years, if not decades, from practical application. As a result of this longer term focus, it was realized early on that two issues had to be addressed: something we call “continuance” (the ability to seamlessly move from one generation of technology to another as needed, with no loss of data) and also the data security / privacy aspects of keeping internal R&D results private and out of the hands of potential competition.
Around 2008-2009 we realized that a fully owner controlled system would meet all of these objectives, specifically a system over which we had full control of the firmware, OS, and application layer, a system where we could modify any aspect of its operation independently as required to support the overall objectives. There’s a true story I like to tell, which dates from the early days of the coreboot port efforts on our side to support the AMD K8 systems in use at the time: the proprietary BIOS would not, for some infuriating reason, allow a boot without a CMOS battery and a keyboard connected! When spread across supercomputer racks, this was a perfect opportunity to highlight just what owner control actually meant in terms of benefit for maintenance and overall scalability – the rather obvious resulting boot problem could have been solved with a single line of code, if only there had been code available.
Once we had a completely free stack for K8/Family 10h, we continued on with AMD systems until around 2013/2014, when it became clear that AMD would force the Platform Security Processor (an unauditable low-level AMD-controlled black box that presents a severe security risk) on all CPUs going forward. This kicked off a multi-year evaluation of anything and everything that might be able to replace the x86 systems we had in terms of owner control, overall performance, and ease of administration (ecosystem compatibility). Over those years we evaluated everything from ARM to RISC-V to MIPS, and eventually settled on the then-new OpenPOWER systems as the only solution to actually check all of the boxes. The rest, as they say, is history – the nascent computer ODM arm of Raptor Engineering was spun off as Raptor Computing Systems to allow it to grow into the role demanded of it with the Talos II systems it was bringing to market.
Six years ago I first learned about the Talos plans from the Raptor Engineering website. Now all POWER-related activity is on the Raptor Computing website. How are the two related?
As of right now I am involved with both companies, my role at Raptor Computing Systems being that of CTO. Basically I help ensure RCS brings new owner-controlled, blob-free devices to market, and keep fairly close tabs on available silicon and its limitations (mostly centering around blobs) as a result. On the Raptor Engineering side I still provide specialized consulting services at a low level (typically HDL / firmware), as an example you’ll see my name here and there on projects like OpenBMC, LibreSoC, and more recently the Xen hypervisor port for POWER systems that is just spinning up now.
What do you mean by “fully owner controlled”?
An owner-controlled device is best defined as a tool that answers only to its physical owner, i.e. its owner (and only its owner) has full control over every aspect of its operation. If something is mutable on that device, the owner must be able to make those changes to alter its operation without vendor approval or indeed any vendor involvement at all. This is in stark contrast with the standard PC model, where e.g. Intel or AMD are allowed to make changes on the device but the owner is expressly forbidden to change the device’s operation through various means (legal restrictions, lack of source code, vendor-locked cryptographic signing keys, etc.). In our opinion, such devices never really left the control of the vendor, yet somehow the owner is still legally responsible for the data stored on them – to me, this seems like a rather strange arrangement on which to build an entire modern digital economy and infrastructure.
What are the main differences between the Talos II and Blackbird boards?
The Talos II is a high end full EATX server mainboard, dual socket, designed for 24/7 use in a datacenter or workstation type environment. Blackbird is a much smaller uATX single CPU system with more standard consumer-type interfaces (e.g. audio, HDMI, SATA, etc.) available directly on the mainboard.
You mention that Blackbird is consumer focused. It has three Ethernet ports, and support for remote management. Did you also have servers / appliances in mind?
Yes, there is the ability to use the Blackbird for a home NAS or similar appliance. The remote management sort of comes “for free” due to the POWER9 processor being paired with the AST2500 BMC ASIC and OpenBMC, it is more of a baseline POWER9 feature than anything else.
I know that quite a few Linux distribution maintainers now use Talos II workstations. Who else are your typical users?
In general we see the same overall subset of people that would use Linux on x86, with a strong skew toward those concerned about privacy and security and a notable (expected) cutout of the normal “content consumer” market (gamers, streaming service consumers, etc.). This holds true both for individuals and organizations, though the skew is much less notable among organizations simply because organizations are not generally purchasing desktop / server systems for gaming and media consumption!
For quite a few years POWER9 was the best CPU to run syslog-ng. What are the typical software your users are running on your POWER workstations and servers?
For the most part, the standard software you would see on comparable x86 boxes. That’s always been one of our requirements, that using the POWER system be as close to using a standard PC as possible, except the POWER system isn’t putting digital handcuffs on its owner and potentially exposing their data for monetization (or, for that matter, to more nefarious actors).
Covid affected most families and businesses around the world. How Raptor was / is affected?
We were hit hard by the shutdowns and subsequent inflation, in common with most manufacturers across the world. We did have to cancel some of the more ambitious POWER9 systems under development (Condor) as well as take on increased ecosystem maintenance load. Some of that, along with the continuous rise in cost of parts and rolling shortages of components, is reflected in the price increases and long lead times that have occurred over the past couple of years.
Could you explain how Condor compares to Talos II and Blackbird?
Condor was a pre-COVID development project to try to create a high end standard ATX (vs. EATX) desktop board with OpenCAPI brought out to the appropriate connector. With COVID shutdowns, industry adoption of CXL, and more importantly POWER10 requiring closed source binaries, we didn’t see a path to actually bring the product to market post-COVID. The completed designs are sitting in our archives, but no hardware was manufactured.
The latest x86 CPUs now beat POWER9 in many use cases. Which is no wonder, these CPUs are four years old now. Is there something where POWER9 still has an advantage?
Absolutely! POWER9 still does what we originally intended it to do – it gives reasonable performance on a stable, standardized ecosystem, in a familiar PC-style form factor, using standard PC components, while providing full owner control. As a secure computing platform, both on server and desktop, it simply cannot be beat – there is literally nothing else on the market that is both 100% blob free and can be used as a daily driver for basically every task that a Linux x86 machine can be used for.
For example, I’m responding to this interview using a POWER9 workstation with hundreds of Chromium tabs open, media running, Libreoffice in the background, and am even compiling some software. If you didn’t tell me it was a POWER system, I wouldn’t be able to easily tell just from using it, yet at the same time it’s not restricting what I can do with it, attempting to monetize my data, or otherwise waiting for commands from a potentially hostile (at least to my interests) third party.
The other major selling point is that the ISA is both open and standardized. The standardization means that I could migrate this system as-is – no reinstallation – to anything that is POWER ISA 3.0 compliant, which is a huge advantage only available for the “big three” architectures (x86, SBSA ARM, and OpenPOWER). That said, being an open ISA, anyone is also free to create a new compliant CPU if they want to or need to. This neatly avoids the entire problem with x86 and ARM, where various technologies (ME, PSP, TrustZone w/ bootloader locking) could be (and eventually were) unilaterally forced onto all users regardless of the grave issues they introduce for specific use cases.
Also interesting is how the OpenPOWER ISA is governed – a neutral entity (the OpenPOWER Foundation) controls the standards documents that implementations must adhere to, and anyone is free to propose an extension to the ISA. If accepted, it becomes part of the ISA compliance requirements for a future version, and the requisite IP rights are transferred to the Foundation such that anyone is still free to implement that instruction per the specification without needing to go back and license with the entity that proposed the extension. This is a great model in my mind; it should allow the best of both worlds going forward. The standardization and compliance requirements mean we should see the same level of binary support normally expected on x86, while the extension proposal mechanism allows the ISA to morph and adapt in a backward-compatible way in response to external needs.
The POWER 10 CPUs from IBM are manufactured with the latest technologies allowing higher performance with lower power consumption. Do you have any plans to have a new board with POWER 10 support?
At this time we do not have plans to create a POWER10 system. The reasoning behind this is that somehow, during the COVID19 shutdowns and subsequent Global Foundries issues, IBM ended up placing two binary blobs into the POWER10 system. One is loaded onto the Microsemi OMI to DDR4 memory bridge chip, and the other is loaded into what appears to be a Synopsys IP block located on the POWER10 die itself. Combined, they mean that all data flowing into and out of the POWER10 cores over any kind of high speed interface is subject to inspection and/or modification by a binary firmware component that is completely unauditable – basically a worst-case scenario that is strangely reminiscent of the Intel Management Engine / AMD Platform Security Processor (both have a similar level of access to all data on the system, and both are required to use the processor). Our general position is that if IBM considered these components potentially unstable enough to require future firmware updates, the firmware must be open source so that entities and owners outside of IBM can also modify those components to fit their specific needs.
Were IBM to either open source the firmware or produce a device that did not require / allow mutable firmware components in those locations, we would likely reconsider this decision. For now, we continue to work in the background on potential pathways off of POWER9 that retain full compatibility with the existing POWER9 software ecosystem, so stay tuned!
Does “potential pathways off of POWER9” have something to do with the LibreSoc project?
We’re going to remain a bit coy on this, but LibreSoC is definitely one project that would fit that requirement for at least low-end devices. In fact, you can see some of my fingerprints in the LibreSoC GIT history…
Photo credits:
Copyright: Vikings GmbH License: CC BY 4.0
YaST Development Report - Chapter 4 of 2022
As our usual readers know, the YaST team is lately involved in many projects not limited to YaST itself. So let’s take a look at some of the more interesting things that happened in those projects during the last couple of weeks.
Improvements in YaST
As you can imagine, a significant part of our daily work is invested in polishing the versions of YaST that will be soon published as part of SUSE Linux Enterprise 15-SP4 and openSUSE Leap 15.4, whose first Release Candidate version is already available.
That includes, among many other things, improving the behavior of yast2-kdump in systems with Firmware-Assisted Dump (fadump) or adding a bit of extra information to the installation progress screen that we recently simplified.
YaST in a Box
So the present of YaST is in good hands… but we also want to make sure YaST keeps being useful in the future. And the future of Linux seems to be containerized applications and workloads. So we decided to investigate a bit whether it would be possible to configure a system using YaST… but with YaST running in a container instead of directly on the system being configured.
And turns out it’s possible and we actually managed to run several YaST modules with different levels of success. Take a look to this repository at Github including not only useful scripts and Docker configurations, but also a nice report explaining what we have achieved so far and what steps could we take in the future if we want to go further into making YaST a containerizable tool.
Evolution of D-Installer
Running in containers may be the future for YaST as a configuration tool (or not!). But we are also curious about what the future will bring for YaST as an installer. In that regard, you know we have been toying with the idea of using the existing YaST components as the back-end for a new D-Bus-based installer temporarily nicknamed D-Installer. And, as you would expect, we also have news to share about D-Installer.
On the one hand, we used the recently implemented infrastructure for bi-directional communication (which allows the back-end of the installer to ask questions to the user when some input is needed) to handle some situations that could happen while analyzing the existing storage setup of the system. On the other hand, we added the possibility of modifying the D-Installer configuration via boot arguments, with the option to get some parts of that configuration from a given URL.
See you Soon!
If you want to get more first-hand information about recent (Auto)YaST changes, about D-Installer development or any other of the topics we usually cover in this blog, bear in mind that openSUSE Conference 2022 is just around the corner and a big part of the YaST Team will be there presenting these and other topics. We hope to see as many of you there in person as possible. But don’t worry too much if you cannot attend: we will keep blogging and will stay reachable through all the usual channels. So stay tuned for more news… and more fun!
openSUSE Leap 15.4 Enters Release Candidate Phase
The openSUSE Project has entered the Release Candidate phase for the next minor release version of the openSUSE Leap distribution.
The upcoming release of Leap 15.4 transitioned from its Beta phase to Release Candidate phase after Build 230.2 passed openQA quality assurance testing.
“Test results look pretty solid as you’d expect from RC,” wrote release manager Lubos Kocman in an email on the openSUSE Factory mailing list yesterday. “My original ETA was Wednesday, but we managed to get (the) build finished and tested sooner.”
The new phase also moves the new Leap Micro offering, a modern lightweight operating system ideal for host-container and virtualized workloads, into its Release Candidate phase.
The RC signals the package freeze for software that will make it into the distribution, which is used on servers, workstations, desktops and for virtualization and container use.
The Leap 15.4 Gold Master (GM) build is currently scheduled for May 27, according to Kocman, which is expected to give time for SUSE Linux Enterprise changes and its Service Pack 4 GM acceptance; this will allow for pulling in any translation updates and finishing pending tasks such as another security audit.
Kocman recommends Beta and RC testers use the “zypper dup” command in the terminal when upgrading to the General Availability (GA) once it’s released.
During the development stage of Leap versions, contributors, packagers and the release team use a rolling development method that is categorized into phases rather than a single milestone release; snapshots are released with minor version software updates once passing automated testing until the final release of the GM. At that point, the distribution shifts from a rolling development method into a supported release cycle where it receives updates until its End of Life (EOL). View the openSUSE Roadmap for more details on public availability of the release.
The community is supportive and engages with people who use older versions of Leap through community channels like the mailing lists, Matrix, Discord, Telegram, and Facebook. Visit the wiki to find out more about openSUSE community’s communication channels.
KDE Connect is available for iOS
The news is a bit old, but I think it is interesting enough to have a place on the blog. Since May 10, KDE Connect has been available for iOS, the iPhone operating system. Good news for those users and for the KDE Community, since it brings the former closer to Free Software.
KDE Connect available for iOS

Having a unique application that stands out in many respects is very important to show that your ecosystem works. In the KDE world this is amply proven with applications such as Dolphin, Kdenlive, Okular or Kate, to name just a few of a select club that stand out for their functionality, versatility, simplicity and power… And within this group, without any doubt, we can include KDE Connect.
Well, this great application has just been joined by a companion for users of the iOS system on iPhone phones: at last they have a client to connect their smartphone to the Plasma desktop.
More information and download: App Store
What is KDE Connect?
In case anyone doesn't know, KDE Connect is a project by Albert Vaca (started within the GSoC program and presented at Akademy-es 2013 in Bilbao) that makes communication between your Android smartphone or mobile phone and your KDE computer almost seamless.
In fact, with KDE Connect you can, among other things:
- Get call notifications
- Get SMS notifications
- See the phone's battery status
- Synchronize the clipboard
- Control multimedia playback (audio/video)
- Send ping notifications
- Get notifications for all kinds of messages from the mobile device
- Transfer files
- Transfer web links
The post KDE Connect is available for iOS was published first on KDE Blog.
Friday the 13th: a lucky day :-)
I’m not superstitious, so I never really cared about black cats, Friday the 13th, and other signs of (imagined) trouble. Last Friday (which was the 13th) I had an article printed in a leading computer magazine in Hungary, and I gave my first IRL talk at a conference in well over two years. Best of all, I also met many people, some for the first time in real life.
Free Software Conference: sudo talk
Last Friday, I gave a talk at the Free Software Conference in Szeged. It was my first IRL conference talk in well over two years. I gave my previous non-virtual talk in Pasadena at SCALE; after that, I arrived in Hungary only a day before flights between the EU and the US were shut down due to Covid.
I must admit that I could not finish presenting all my slides. I practiced my talk many times, so in the end I could fit it into my time slot. However, I practiced by talking to my screen. That gives no feedback, which is one of the reasons I hate virtual talks. At the event, I could see my audience and read from their faces when something was really interesting or difficult to follow. In both cases, I improvised and added some more details. In the end, I had to skip three of my slides, including the summary. Luckily, all the important slides had already been shown. The talk was short, so the summary was probably not really missed. Once my talk was over, many people came to me for stickers and to tell me which of the features they had just learned about they planned to implement once back home.

Sudo logo
My talk was in Hungarian. Everything about sudo is in English in my head. I had to do some simultaneous interpreting from English to Hungarian, at least when I started practicing my talk. I gave an earlier version of this talk at FOSDEM in English. So, if you want to learn about some of the latest sudo features in English, you can watch it at the FOSDEM website at https://fosdem.org/2022/schedule/event/security_sudo/.
ComputerWorld: syslog-ng article
Once upon a time, I learned journalism in Hungarian. I even wrote a couple of articles in Hungarian half a decade ago. However, I’ve been writing only in English ever since. The syslog-ng blog, the sudo blog, and even my own personal blog where you read this, are all in English. Other than a few chats and e-mails, all my communication is in English.
Last week the Hungarian edition of ComputerWorld came out with a few extra pages for the Free Software Conference. It also featured an article I wrote about some little-known facts about syslog-ng. Writing in Hungarian was quite a challenge, just like talking in Hungarian. I tried to find a balance in the use of English words. Some people use English expressions for almost everything, so only a few words are actually in Hungarian. I hate the other extreme even more: when all words are in Hungarian, and I need to guess what the author is trying to say. I hope I found an enjoyable compromise.
I must admit, it was a great feeling to see my article printed. :-)
Meeting people: Turris, Fedora community
Last Friday I also met many people for the first time in person, or for the first time in person in a long while. I am a member of the Hungarian Fedora community. We used to meet regularly, but not anymore. I keep in touch with individual members over the Internet, but in Szeged I could meet some of them in person and have some longer discussions.
If you checked the website of the conference, you could see that it was the first ever international edition of the event. When I learned that the conference was not just for Hungarians but for the V4 countries as well, I reached out to the Turris guys. Their booth was super busy during the conference, but luckily, I had a chance to chat with them a bit. Two of their talks were at the same time as my talk, but I could listen to their third talk. It was really nice: I learned about the history of the project.
As you can see, my Friday the 13th was a fantastic day. I knock on wood hoping that Friday the 13th stays a lucky day. OK, just kidding :-)
5 places to find cron jobs on Linux
On GNU/Linux you can schedule your cron jobs in different ways

You may be surprised that the version of cron running on your server today is largely compatible with the crontab specification written in the 1970s.
One downside of this careful backward compatibility is that jobs, even on the same server, can be created and scheduled in different ways.
This article is a translation/adaptation of an article in English written by Shane Harter, which you can find at this link:
1.- The user crontab
On GNU/Linux systems, each user has their own crontab, which they can view and update with the crontab command. To list your user's cron jobs, you can run:
crontab -l
Each user's crontab is the easiest place to add a new job that runs automatically whenever you want, so it is the one used most frequently and the first place to look when you are not sure how a job was scheduled.
To edit a crontab and add and/or modify jobs, you can use:
crontab -e
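As a minimal illustration (the schedule and the script path are made-up examples), a user crontab entry has five time fields followed by the command, with no user column:

# Run the backup script every day at 02:30 (minute hour day-of-month month day-of-week)
30 2 * * * /home/tux/scripts/backup.sh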
The logs of GNU/Linux systems record, alongside many other events, which crontab jobs have been run and which system user they belong to.
2.- The root user's crontab
Like any other system user, the GNU/Linux «root» user can also have its own crontab jobs set up.
You can log in as «root» and list its cron jobs with the command we saw earlier.
3.- The system crontab file
The original way, still in use today, to schedule system-wide cron jobs is to add entries to the file /etc/crontab.
Writing to /etc/crontab requires «root» privileges, and the scheduled jobs can run as any user of the system.
If the first word of your job command matches a system user account, the command runs as that user. If no user is specified, it runs as root.
Here, the following cron job will be run as the user tux:
* 0 * * * tux /var/scripts/backup.sh
4.- A crontab file in the /etc/cron.d directory
At some point developers realized that having a single system-wide crontab that could be changed by installers and automated tools was bad practice, and the /etc/cron.d/ directory was born.
Jobs are scheduled through /etc/cron.d by copying or symlinking a crontab file into it. As with jobs scheduled in /etc/crontab, the first word of the command can optionally specify the user to run the job as.
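As a sketch (the file name, schedule and script path are illustrative assumptions), a drop-in file in /etc/cron.d looks like a system crontab entry, including the user column:

# /etc/cron.d/nightly-backup: run backup.sh as user tux at 03:00 every day
0 3 * * * tux /var/scripts/backup.sh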
5.- A script or command in /etc/cron.hourly, /etc/cron.daily, etc.
When precise scheduling of a job matters less than its simple frequency, cron jobs can be created by copying or symlinking a script into the following directories (see the sketch after the list):
- /etc/cron.hourly – jobs run once an hour
- /etc/cron.daily – jobs run once a day
- /etc/cron.weekly – jobs run once a week
- /etc/cron.monthly – jobs run once a month
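For example, a minimal sketch (the script name and source path are assumptions) of adding a daily job:

# Copy (or symlink) an executable script into the daily directory;
# on some distributions run-parts skips names containing a dot, so drop the .sh extension
sudo cp /home/tux/scripts/backup.sh /etc/cron.daily/backup
sudo chmod +x /etc/cron.daily/backup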
Bonus
As extra information, when individual users' crontabs are edited with the crontab -e command we saw earlier, the crontab files are stored in /var/spool/cron.
It is not a good idea to edit those files directly; instead, it is recommended to run crontab -e as the user that will run the scheduled job.
If you edit the files in /var/spool/cron directly, you lose the benefit of the syntax checking provided by crontab -e, and your GNU/Linux distribution may not pick up the changes without an explicit reload of the cron daemon.
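A hedged illustration (the spool path and the service name vary between distributions; some use /var/spool/cron/crontabs and call the service cron instead of crond):

# List the per-user crontab files (inspect only; edit with crontab -e instead)
sudo ls -l /var/spool/cron/
# If files were changed behind cron's back, reload the daemon so it re-reads them
sudo systemctl reload crond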
Did you find this an interesting read? We have not gone into the details of how to create cron jobs, something you can find on the net without much trouble, but we have seen where we can schedule our cron jobs.
If you learned something new, or if you know something that has not been covered here and complements what is written, use the blog comments to share whatever you like.

Ujilliurex 2022 program #ujilliurex
Today it is time to do some more promotion of the in-person event that will take place on May 18 and 19 at the Universitat Jaume I in Castellón. The goal is to publicize the program of Ujilliurex 2022, an event that combines education, Free Software and Shared Knowledge in equal parts.
Ujilliurex 2022 program #ujilliurex
Although I announced it recently, I want to keep promoting this event, which returns in its traditional form, that is, in person, in the Aula Magna (HA1012AA) of the Faculty of Human and Social Sciences of the Universitat Jaume I, known by almost everyone as UJI.
With 14 talks, its main objective is to spread the use of ICT with the LliureX distribution among the educational community (university and non-university; pre-school, primary, secondary and vocational training).
But I will not ramble on any longer; here is the list of talks you can find in the program, which in my humble opinion gives a good sample of the many things we can do with our students, technology and Free Software.
Wednesday, May 18
- 17:00 Teachers' Digital Competence, a criTICal analysis (still laying the groundwork without touching reality), by Juan Francisco Álvarez Herrero
- 18:00 Students' digital competence in the context of higher education, by Anna Sánchez-Caballé
- 19:15 Digitalization of vocational training in the cloud, by David Montalva Furió
- 20:00 The importance of open source in teachers' digital competence (CDD), by Tàfol Nebot Gómez
- 20:15 A school model with Proxmox in a small secondary school, by Eduardo Millán Martínez
Thursday, May 19
- 16:30 ASKER, your question generator for Moodle, by Baltasar Ortega Bort
- 16:50 Accessibility for everyone, by Juan Manuel Navarro Máñez
- 17:10 The ICT area of the Subdirección General de Formación del Profesorado, by Ignasi Climent Mateu
- 17:30 The School Digital Plan (Plan Digital de Centro), by Vicent Part Julio
- 19:00 Markdown and note-taking, by Angel Berlanas Vicente
- 19:20 Experience in transforming a distance-learning school to Microsoft (CDC), from a teacher's point of view, by Piedad Soriano Montiel
- 19:40 I love Ruby: happy scripting!, by David Vargas Ruiz
- 20:00 An educational smart home, by Jordi Mayné Grau
- 20:15 Using H5P, by Tamara Expósito Ponce
What do you think of the program? Was I right? You know what to do: help make this event a success by spreading the word. Ujilliurex deserves it.
The post Ujilliurex 2022 program #ujilliurex was published first on KDE Blog.
Zabbix Server 6.0 with container in GCP: installation notes
Zabbix Server 6.0 with container in GCP: installation notes
OS: openSUSE Leap 15.3 in GCP
Zabbix: 6.0 docker image
Cloud SQL: PostgreSQL
Today's Zabbix installation notes should be closer to an architecture for actually providing the service.
First, a few words about the planning:
The Zabbix service itself is provided with containers.
The database is provided by Cloud SQL with PostgreSQL; the benefits are that it is fully managed, highly available (HA), and easy to scale…
Introduction to Cloud SQL: https://cloud.google.com/sql
Two Load Balancers are planned, one for Zabbix administration and one for Zabbix agents to report their data.
Both Zabbix administrators and Zabbix agents point at an LB, which keeps flexibility for later resource scheduling and adjustments.
First, create the Cloud SQL instance
Log in to GCP and go to the Cloud SQL page
Click +CREATE INSTANCE
Click Choose PostgreSQL
Enter the instance ID name
Enter the DB password
Select the database version (I am using PostgreSQL 13 here; adjust to your own needs)
Select the Region (I chose Taiwan)
For Zonal availability, choose Multiple zones (Highly available) (whether to specify the Primary zone is up to you)
For the Machine type,
since this is for experimentation,
I use Custom (1 vCPU / 3.75 GB memory) (adjust to your project's needs)
For Storage, use HDD with a custom size of 500 GB (adjust to your project's needs)
For Connections,
the project already has a custom VPC and Subnet, so:
Uncheck Public IP
Check Private IP
Select the custom VPC
If you are prompted that a private services access connection must be set up:
Click SET UP CONNECTION
Click ENABLE API
Click Select one or more existing IP ranges or create a new one
Click the drop-down menu
Click ALLOCATE A NEW IP RANGE
Enter a name and the IP address range to allocate
Make sure it does not overlap with your on-premises IP addresses
Click CONTINUE
Confirm the details
Click CREATE CONNECTION
Confirm the details
Leave the rest at the defaults
Click CREATE INSTANCE
It takes a little while to create
Confirm that the instance has been created
Click the Databases menu on the left
Click CREATE DATABASE
Enter the database name: zabbix
Click CREATE
This is one of the benefits of using Cloud SQL: you can create the database directly from the UI.
If you were doing this from openSUSE, you would probably need to run # zypper install postgresql, connect with psql -h xxx.xxx.xxx.xxx -U postgres, and then issue CREATE DATABASE zabbix;
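A hedged sketch of that alternative, where xxx.xxx.xxx.xxx stays a placeholder for the Cloud SQL private IP; the first two commands are run as root on the VM, and the SQL statement is issued inside the psql session:

# zypper install postgresql
# psql -h xxx.xxx.xxx.xxx -U postgres
CREATE DATABASE zabbix;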
Next, the Zabbix-with-containers part
Create an openSUSE Leap 15.3 VM in GCP (Compute Engine)
Connect to the openSUSE Leap 15.3 VM with Web SSH or SSH, and switch to root
Start the docker service and enable it at boot
# systemctl start docker
# systemctl enable docker
Create the zabbix-net network
# docker network create --subnet 172.20.0.0/16 --ip-range 172.20.240.0/20 zabbix-net
Start the related containers
# docker run --name zabbix-snmptraps -t \
-v /zbx_instance/snmptraps:/var/lib/zabbix/snmptraps:rw \
-v /var/lib/zabbix/mibs:/usr/share/snmp/mibs:ro \
--network=zabbix-net \
-p 162:1162/udp \
--restart unless-stopped \
-d zabbix/zabbix-snmptraps:alpine-6.0-latest
# docker run --name zabbix-server-pgsql -t \
-e DB_SERVER_HOST="YOUR_DB_IP" \
-e POSTGRES_USER="postgres" \
-e POSTGRES_PASSWORD="YOUR_PW" \
-e POSTGRES_DB="zabbix" \
-e TZ=Asia/Taipei \
-e ZBX_ENABLE_SNMP_TRAPS="true" \
--network=zabbix-net \
-p 10051:10051 \
--volumes-from zabbix-snmptraps \
--restart unless-stopped \
-d zabbix/zabbix-server-pgsql:alpine-6.0-latest
DB_SERVER_HOST must point to the internal IP of the Cloud SQL instance
POSTGRES_USER corresponds to the Cloud SQL user name
POSTGRES_PASSWORD corresponds to the password entered when the Cloud SQL instance was created
Adding TZ=Asia/Taipei makes sure that notifications sent later use the correct time zone
You can check this information on the Cloud SQL page
Also remember to open access to port 10051 in the GCP firewall,
so that Zabbix clients do not time out when they connect later.
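A hedged sketch with the gcloud CLI (the rule name, VPC name and target tag are assumptions; tighten --source-ranges to the networks your agents actually live in):

gcloud compute firewall-rules create allow-zabbix-trapper \
  --network=YOUR_VPC \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:10051 \
  --target-tags=zabbix-server \
  --source-ranges=10.0.0.0/8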
# docker run --name zabbix-web-nginx-pgsql -t \
-e ZBX_SERVER_HOST="zabbix-server-pgsql" \
-e DB_SERVER_HOST="YOUR_DB_IP" \
-e POSTGRES_USER="postgres" \
-e POSTGRES_PASSWORD="YOUR_PW" \
-e POSTGRES_DB="zabbix" \
-e PHP_TZ="Asia/Taipei" \
--network=zabbix-net \
-p 443:8443 \
-p 80:8080 \
-v /etc/ssl/nginx:/etc/ssl/nginx:ro \
--restart unless-stopped \
-d zabbix/zabbix-web-nginx-pgsql:alpine-6.0-latest
Check the container status
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
238e05935125 zabbix/zabbix-web-nginx-pgsql:alpine-6.0-latest "docker-entrypoint.sh" 14 seconds ago Up 12 seconds 0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp zabbix-web-nginx-pgsql
b6d16347aa21 zabbix/zabbix-server-pgsql:alpine-6.0-latest "/sbin/tini -- /usr/…" 5 minutes ago Up 5 minutes 0.0.0.0:10051->10051/tcp, :::10051->10051/tcp zabbix-server-pgsql
812544455ecd zabbix/zabbix-snmptraps:alpine-6.0-latest "/usr/sbin/snmptrapd…" 12 minutes ago Up 12 minutes 0.0.0.0:162->1162/udp, :::162->1162/udp zabbix-snmptraps
We will shortly install a Zabbix agent to monitor the Zabbix Server itself, so first check how the zabbix-net network we just created is being used
# docker network inspect zabbix-net
[
{
"Name": "zabbix-net",
"Id": "4af902735988ad791a12dbbb98afd1c88ec00de061c6c5ffb712ef78503e19d0",
"Created": "2022-05-12T13:53:29.638267706Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"IPRange": "172.20.240.0/20"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"238e05935125afddf96794c674d3961a3de06741891bfe071c2c350ab0a764c1": {
"Name": "zabbix-web-nginx-pgsql",
"EndpointID": "0508d9d715ba3c78add4fc22d02e93af9cd740664e4a08e6639d8e8ce5a57bcc",
"MacAddress": "02:42:ac:14:f0:03",
"IPv4Address": "172.20.240.3/16",
"IPv6Address": ""
},
"812544455ecdfc79882cd1ec57e98a29dfd37b7c3c748b727e5884b2d5a15491": {
"Name": "zabbix-snmptraps",
"EndpointID": "2f1612243e2cc97ff942204b9c7c19779f5e45d446e8f3d5f4fce49cec29e018",
"MacAddress": "02:42:ac:14:f0:01",
"IPv4Address": "172.20.240.1/16",
"IPv6Address": ""
},
"b6d16347aa216ca6ff2f683bde533604207f5eb7285899ab4a06adebb1f12d57": {
"Name": "zabbix-server-pgsql",
"EndpointID": "206cdf98adda30bdea8f0584749e974b8d347d5e2dcbb335200e8a607e49d85a",
"MacAddress": "02:42:ac:14:f0:02",
"IPv4Address": "172.20.240.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Here we can see that the current IP of zabbix-server-pgsql is 172.20.240.2
# docker run --name zabbix-agent \
--network=zabbix-net \
-e ZBX_HOSTNAME="zabbix-server" \
-e ZBX_SERVER_HOST="172.20.240.2" \
--privileged \
--restart unless-stopped \
-d zabbix/zabbix-agent2:alpine-6.0-latest
ZBX_HOSTNAME is the host name to register; it must match the Host name under Configuration -- > Hosts
ZBX_SERVER_HOST is the server's IP or FQDN
For this part I used # docker network inspect zabbix-net to look up the server IP
--privileged enables privileged mode; with it, the graphs include additional disk-related information
Because the default Zabbix Server host configuration does not allow removing the Interface setting, this part uses the passive approach; and since everything stays inside this host's zabbix-net network, there is little to worry about here.
Change the Zabbix administrator password
In the GCP firewall, create a rule that only allows your own fixed IP to connect to port 80
Personally, I prefer to apply it with network tags rather than applying it to everything
Apply the network tags to the openSUSE Leap 15.3 VM created earlier so the firewall rule takes effect
Open the page http://YOUR_SERVER_IP
Log in to Zabbix
Default account: Admin
Default password: zabbix
Change the Zabbix Admin password (it is recommended to change it immediately)
User settings -- > Profile -- > Change password
Update the Zabbix Server host configuration (so the zabbix agent matches correctly)
On the Zabbix page:
Configuration -- > Hosts
Click Zabbix server
In Host name, enter the ZBX_HOSTNAME of the zabbix agent 2 started earlier (zabbix-server)
Change the Interfaces Agent IP from 127.0.0.1 to the current agent IP, for example 172.20.240.4 (look up the client IP with # docker network inspect zabbix-net)
Click Update
After a while you will see it report as healthy
Finally, build the Load Balancers
As in the architecture diagram above, since we are currently running the roles as containers on a single GCE instance, two roles on it will be used:
zabbix-web-nginx-pgsql
handles Zabbix service administration and information display -- port 80
zabbix-server-pgsql
receives the data actively reported by Zabbix agents -- port 10051
Two Load Balancers are planned to serve these two needs. The benefit of using LBs is that the machines behind them can be scheduled flexibly: clients only point at the LB's IP or FQDN, and they are not affected by how the architecture behind it is later adjusted.
However, since both roles sit on a single GCE instance, and in principle a GCE instance cannot be attached to 2 LBs at the same time, one of them is handled with a Network Endpoint Group (NEG) instead.
The current plan is as follows:
Create a TCP Load Balancer (L4)
to receive the data actively reported by Zabbix agents;
its backend uses an Unmanaged Instance Group to point at the GCE instance
Create an HTTP(S) Load Balancer (L7)
to manage the Zabbix Server settings and display its information;
its backend is specified with the NEG
Bind a certificate on the Load Balancer so that external connections use HTTPS
First, create the TCP Load Balancer
Before creating the Load Balancer, let's first create an Unmanaged Instance Group pointing at our GCE instance, so that the LB can be pointed at it in a moment.
Click the top-left menu: Compute Engine -- > Instance groups
Click CREATE INSTANCE GROUP
Click New unmanaged instance group
Enter a name
Select the Region and Zone
For Network and instances,
select the Network and Subnet where the VM is located
Select the VM where Zabbix runs
For Port mapping,
click ADD PORT
Enter the port name and the corresponding port
Click CREATE to create the Unmanaged Instance Group
Confirm the creation details
Next, create the TCP Load Balancer
Click the top-left menu, then Network services -- > Load balancing
Click CREATE LOAD BALANCER
Click START CONFIGURATION under TCP Load Balancing
The defaults are fine here
Click CONTINUE
Enter a name
Select the Region
On the right, configure the Backends
In New backend, select the Unmanaged instance group created earlier
Next, use this screen to create a Health Check
Click the Health check drop-down menu, then CREATE A HEALTH CHECK
Enter a name
Change the Port to 10051
Click SAVE
Confirm that the backend has the UIG and the Health Check configured
Click Frontend configuration
The main thing here is to point the port at 10051
Enter 10051 in Port number
For the IP address, consider setting a static IP
Click DONE
Click Review and finalize
Check whether anything is missing
Click CREATE to create it
Additional notes
After creation, you may find that the Health Check does not pass.
That is because we are using our own VPC and Subnet, so you need additional firewall rules that allow Google to perform the health checks; see the official documentation.
Once the corresponding firewall rule is created, the check passes.
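A hedged sketch of that rule with the gcloud CLI (the rule, network and tag names are assumptions; 130.211.0.0/22 and 35.191.0.0/16 are the source ranges Google documents for its health check probes):

gcloud compute firewall-rules create allow-gcp-health-checks \
  --network=YOUR_VPC \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:10051 \
  --target-tags=zabbix-server \
  --source-ranges=130.211.0.0/22,35.191.0.0/16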
Next, create the HTTP(S) Load Balancer
As before, before creating the HTTP(S) Load Balancer we first create a Network endpoint group (NEG), to make it easier to attach the LB in a moment
Click the top-left menu, then Compute Engine -- > Network endpoint groups
Click CREATE NETWORK ENDPOINT GROUP
Enter a name
Select the Network / Subnet / Zone
Enter Default port 80
Click CREATE
Check the details after creation
The number of attached network endpoints is currently 0
Click the Network endpoint group just created ( neg-zabbix )
Click ADD NETWORK ENDPOINT
Under VM instance, select the VM where Zabbix runs
For the IP address, enter the internal IP of the Zabbix VM
Click CREATE
Next, create the HTTP(S) Load Balancer
Click the top-left menu, then Network services -- > Load balancing
Click CREATE LOAD BALANCER
Click START CONFIGURATION under HTTP(S) Load Balancing
Again, use the defaults here
Click CONTINUE
Enter a name
Click the Backend services & backend buckets drop-down menu
Click CREATE A BACKEND SERVICE
Enter a name
Backend type: select Zonal network endpoint group
Protocol: select HTTP
Click the Network endpoint group drop-down menu
Select the Network endpoint group created earlier
Set the Maximum RPS (set it according to your needs)
Click DONE
Next, configure the Health check
Click the Health Check drop-down menu
Click CREATE A HEALTH CHECK
Enter a name
Leave the rest at the defaults
Click SAVE
Click CREATE
Click OK
With that, the backend part is configured
Click Frontend configuration
Enter a name
Protocol: select HTTPS (includes HTTP/2)
Next, handle the certificate
Click the Certificate drop-down menu
Click CREATE A NEW CERTIFICATE
Enter a name
Here I am using my own certificate
In the Certificate field, paste the public certificate concatenated with the intermediate certificate
In Private Key, paste the private key
For the certificate, you can refer to my earlier article https://sakananote2.blogspot.com/2022/04/certbot-ssl-with-opensuse-leap-153-in.html
In Certificate, paste the contents of fullchain.pem
Click CREATE
Click Review and finalize
Confirm the details are correct
Click CREATE
Check the Load Balancer status
The firewall rules that need to be allowed were already covered above; remember to open them
One last step:
create an FQDN record pointing at the HTTP(S) Load Balancer just created,
for example zabbix.YOUR.DOMAIN -- > pointing at the LB IP
Open https://zabbix.YOUR.DOMAIN
Confirm that you can log in to Zabbix normally over HTTPS
~ Done
~ Enjoy it
Reference
https://sakananote2.blogspot.com/2022/02/zabbix-server-60-with-container-in-azure.html
Help from partners at Microfusion / notes from a strong colleague
https://sakananote2.blogspot.com/2022/04/certbot-ssl-with-opensuse-leap-153-in.html


