Even the CIA promoted the use of the #Vim editor

Wikileaks leaked some of the hacking tools circulated inside the CIA, among them the #Vim editor

Today on the blog I am not bringing a Vim tutorial, nor will I write about a plugin or some trick for this text editor. Today we will see that even the CIA listed Vim, along with other software, among the tools to use.

This article is a new installment of the “improVIMsado” course about the Vim editor that I have been publishing on my blog for months, and which you can follow at these links:

A few years ago, among the documents that Wikileaks leaked about the CIA, there was one with the code name “Vault 7” that listed a series of hacking tools in use.

Among the tools listed we can find well-known software such as make, Sublime, Git, Docker and the Vim editor.

The document shared information, command manuals, etc. about this text editor. For some, one more reason not to use this editor. For others, a simple curiosity.

That is how it struck me too, which is why I wanted to share this short, somewhat “offtopic” article about Vim…

Parsing PAN-OS logs using syslog-ng

Version 3.29 of syslog-ng was released recently, including a user-contributed feature: the panos-parser(). It parses log messages from PAN-OS (Palo Alto Networks Operating System). Unlike some other networking devices, PAN-OS devices emit standards-compliant syslog message headers. However, if you want to act on your messages (filtering, alerting), you still need to parse the message part. The panos-parser() helps you create name-value pairs from the message part of the logs.

From this blog you can learn why it is useful to parse PAN-OS log messages and how to use the panos-parser().

Before you begin

In order to use the panos-parser(), you need to install syslog-ng 3.29 or later. Most Linux distributions feature earlier versions, but the https://www.syslog-ng.com/3rd-party-binaries page of the syslog-ng website has some pointers to 3rd party repositories featuring up-to-date binaries.
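As a quick sanity check, a sketch like the following can compare an installed version against the 3.29 requirement (the function name and the version-string parsing are my own assumptions; verify the exact output of `syslog-ng --version` on your system):

```shell
# needs_upgrade VERSION
# Succeeds (exit 0) when VERSION is older than 3.29, i.e. too old for
# the panos-parser(). Pure POSIX shell string splitting, no dependencies.
needs_upgrade() {
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -lt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -lt 29 ]; }
}

# Hypothetical usage; adjust the field extraction to your distribution:
# needs_upgrade "$(syslog-ng --version | awk 'NR==1 {print $2}')" && echo "upgrade needed"
```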

You also need some PAN-OS log messages. If you are reading this blog, you most likely have some Palo Alto Networks devices at hand and your ultimate goal is to collect logs from those devices. In this blog I will use the sample log messages found in the configuration snippet implementing the panos-parser(). Even if you have “real” PAN-OS logs, it is easier to get started configuring and testing syslog-ng this way.

Why is it useful?

As mentioned earlier, PAN-OS devices send completely valid syslog messages. You are not required to do any additional parsing on them. Syslog-ng can interpret the message headers without any additional configuration and save the logs properly:

Apr 14 16:48:54 localhost 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
Apr 14 16:54:18 localhost 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

If you look at these logs, you can see that the message part is a list of comma-separated values. That should be easy for the csv-parser(), but the two lists above have a different set of fields and there are a few more message types not included here. You can find the parsers describing these message types in /usr/share/syslog-ng/include/scl/paloalto/panos.conf or in the same directory under /usr/local on most Linux distributions. The panos-parser() can detect what type of log it is and create name-value pairs accordingly. If none of the types match, the parser drops the log. Once you have name-value pairs, it is much easier to create alerts (filters) in syslog-ng or reports in Kibana (if you use the Elasticsearch destination of syslog-ng).
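As a sketch of what acting on those name-value pairs could look like, a syslog-ng filter on one of the generated fields might be written like this (the .panos.eventid field name matches the JSON output shown later in this post; the filter name is made up):

```
filter f_auth_failures {
    "${.panos.eventid}" eq "auth-fail"
};
```

Attached to a log path, such a filter would route only failed-authentication events to, for example, an alerting destination.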

Configuring syslog-ng

In most Linux distributions, syslog-ng is configured so that you can extend it by dropping a configuration file with a .conf extension into the /etc/syslog-ng/conf.d/ directory. In other cases, simply append the configuration below to syslog-ng.conf.

source s_regular { tcp(port(5141)); };
source s_net {
    default-network-drivers(flags(store-raw-message));
};
source s_panosonly { tcp(port(5140) flags(no-parse,store-raw-message)); };

template t_jsonfile {
    template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs --key ISODATE)\n\n");
};
parser p_panos { panos-parser(); };
destination d_frompanos {
    file("/var/log/frompanos" template(t_jsonfile));
};
destination d_other {
    file("/var/log/other");
};
destination d_raw {
    file("/var/log/raw" template("${RAWMSG}\n"));
};
log {
    source(s_regular);
    destination(d_other);
};
log {
    source(s_net);
    destination(d_raw);
    if ("${.app.name}" eq "panos") {
        destination(d_frompanos);
    } else {
        destination(d_other);
    };
};
log {
    source(s_panosonly);
    destination(d_raw);
    parser(p_panos);
    destination(d_frompanos);
};

If you follow my blogs, the above configuration will look familiar: it is based on the configuration I prepared for Cisco in one of my recent blogs; I just replaced the Cisco-specific parts. Some of the text below is also reused, but there are obvious differences, as the purpose of the parsers is different.

In the following section we will go over this configuration in detail, in the order the statements appear in the configuration. To make your life easier, I copied the relevant snippets below before explaining them.

Sources

source s_regular { tcp(port(5141)); };

This is a pretty regular TCP legacy syslog source on a random high port that I used to create the logs in the “Why is it useful” section. By default, the tcp() source handles all incoming log messages as if they were legacy (RFC3164) formatted and in case of PAN-OS logs it results in properly formatted logs.

source s_net {
    default-network-drivers(flags(store-raw-message));
};

The default-network-drivers() source driver is a kind of wild card. It opens different UDP and TCP ports using both the legacy and the new RFC5424 syslog protocols. Instead of just expecting everything to be regular syslog, it tries a number of different parsers on incoming logs, including the panos-parser(). Of course, the extra parsing creates some overhead, but that is not a problem unless you have a very high message rate.

The store-raw-message flag means that syslog-ng preserves the original log message as is. It might be useful for debugging or if a log analysis software expects unmodified PAN-OS messages.

source s_panosonly { tcp(port(5140) flags(no-parse,store-raw-message)); };

The third source, as its name also implies, is only for PAN-OS log messages. Use it when you have a high message rate, you only send PAN-OS log messages at the given port and you are sure that the panos-parser() of syslog-ng can process all of your PAN-OS logs correctly. The no-parse flag means that incoming messages are not parsed automatically as they arrive. As you will see later, panos-parser() parses the incoming messages.

Templates

template t_jsonfile {
    template("$(format-json --scope rfc5424 --scope dot-nv-pairs
        --rekey .* --shift 1 --scope nv-pairs --key ISODATE)\n\n");
};

This is the template often used together with Elasticsearch (without the line breaks at the end). This blog does not cover Elasticsearch, as many other blogs already do. However, this template, built on the JSON template function, is also useful here because it shows the name-value pairs parsed from PAN-OS log messages. The JSON-formatted logs include all syslog-related fields, the name-value pairs parsed from the message, and the date in ISO format. There is one little trick that might confuse you: the rekey and shift part removes the dot from the front of the name-value pair names. Syslog-ng uses a leading dot for name-value pairs created by parsers in the syslog-ng configuration library (SCL).

Parsers

parser p_panos { panos-parser(); };

This parser can extract name-value pairs from the log messages of PAN-OS devices. By default, these are stored in name-value pairs starting with .panos., but you can change the prefix using the prefix() parameter. Note that the panos-parser() drops the message if it does not match the rules. This can be a problem when you use it directly instead of using the default-network-drivers() where messages go through a long list of parsers.
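As a sketch, the prefix() parameter mentioned above could be used like this (the parser name is made up):

```
parser p_panos_custom {
    # store the parsed fields under "panos." instead of the default ".panos."
    panos-parser(prefix("panos."));
};
```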

Destinations

destination d_frompanos {
    file("/var/log/frompanos" template(t_jsonfile));
};

This is a file destination where logs are JSON-formatted using the template from above. This way you can see all the name-value pairs syslog-ng creates. We use it for PAN-OS log messages.

destination d_other {
    file("/var/log/other");
};

This is a flat file destination using regular syslog formatting. We use it to store non-PAN-OS log messages.

destination d_raw {
    file("/var/log/raw" template("${RAWMSG}\n"));
};

This is yet another file destination. The difference here is that it uses a special template defined in-line. Using the RAWMSG macro, syslog-ng stores the log message without any modifications. This possibility is enabled by utilizing the store-raw-message flag on the source side and it is useful for debugging or when a SIEM or any other analysis software needs the original log message.

Log statements

Previously we defined the building blocks of the configuration. Using the log statements below, we are going to connect these building blocks together. The same building block can be used multiple times in the same configuration.

log {
    source(s_regular);
    destination(d_other);
};

This is the simplest log statement: it connects the regular tcp() source with a flat file destination. You can see the results in the “Why is it useful” section. The logs look OK, but a closer look reveals that they contain tons of structured information. That is where the panos-parser() comes in handy.

log {
    source(s_net);
    destination(d_raw);
    if ("${.app.name}" eq "panos") {
        destination(d_frompanos);
    } else {
        destination(d_other);
    };
};

Unless you have a high message rate (above 50,000 to 200,000 events per second, depending on hardware capacity), the recommended way of receiving PAN-OS messages is the default-network-drivers(). It uses slightly more resources, but if a message does not match the expectations of the panos-parser(), it is still kept. The above log statement receives the message and then stores it immediately in raw format. This can be used for debugging and disabled later.

The if statement checks whether the message is recognized as a PAN-OS log message. If it is, the log is saved as a JSON-formatted file. If not, it is saved as a flat file.

Note that the name-value pair is originally called .app.name, but in the output it appears as app.name, as the template removes the dot in front.

log {
    source(s_panosonly);
    destination(d_raw);
    parser(p_panos);
    destination(d_frompanos);
};

If you have a high message rate and you are sure that the panos-parser() detects all of your logs, you can use this solution to collect logs from your PAN-OS devices. For safety and debugging purposes I inserted the raw file destination in front of the parser. This way you can compare the number of lines in the two different file destinations. If they are not equal, check the logs that the panos-parser() discarded.
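That line-count comparison can be sketched as a small helper, assuming one raw message per line in /var/log/raw and one non-empty JSON line per parsed message in /var/log/frompanos (the template above adds a blank separator line, so empty lines are ignored; the function name is made up):

```shell
# discarded_count RAWFILE PARSEDFILE
# Prints how many messages are present in RAWFILE but missing from
# PARSEDFILE, counting non-empty lines only.
discarded_count() {
  raw=$(grep -c . "$1")
  parsed=$(grep -c . "$2")
  echo $((raw - parsed))
}

# Hypothetical usage with the destinations defined earlier:
# discarded_count /var/log/raw /var/log/frompanos
```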

Testing

For testing I used a few sample log messages from the syslog-ng configuration snippet containing the panos-parser() and saved these messages into a file in /root/panoslogs.txt. Logger generated the regular syslog messages and netcat submitted the PAN-OS messages. Here is the content of /root/panoslogs.txt:

<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

You will get different results depending on which port you send the logs to. You have already seen what happens when you send logs to port 5141. It looks great, but you can also see that a bit of parsing could do wonders to it.

Here is an example of sending logs to port 514:

logger -T --rfc3164 -n 127.0.0.1 -P 514 this is a regular syslog message
cat /root/panoslogs.txt | netcat -4 -n -N -v 127.0.0.1 514

In this case you should see in the files:

localhost:/etc/syslog-ng/conf.d # cat /var/log/other
Sep 14 15:25:54 localhost root: this is a regular syslog message
localhost:/etc/syslog-ng/conf.d # cat /var/log/frompanos
{"panos":{"vsys_name":"","vsys":"","type":"SYSTEM","time_generated":"2020/04/14 16:48:54","subtype":"auth","severity":"medium","serial":"unknown","seqno":"1718","receive_time":"2020/04/14 16:48:54","opaque":"failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.","object":"","module":"general","future_use4":"0","future_use3":"0","future_use2":"0","future_use1":"1","eventid":"auth-fail","dg_hier_level_4":"0","dg_hier_level_3":"0","dg_hier_level_2":"0","dg_hier_level_1":"0","device_name":"paloalto","actionflags":"0x0"},"app":{"name":"panos"},"SOURCE":"s_net","RAWMSG":"<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto","PROGRAM":"paloalto_panos","PRIORITY":"warning","MESSAGE":"1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \\'admin\\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto","ISODATE":"2020-04-14T16:48:54+02:00","HOST_FROM":"localhost","HOST":"paloalto.test.net","FACILITY":"user","DATE":"Apr 14 16:48:54"}

{"panos":{"vsys_name":"","vsys":"","type":"CONFIG","time_generated":"2020/04/14 16:54:18","subtype":"0","serial":"unknown","seqno":"127","result":"Succeeded","receive_time":"2020/04/14 16:54:18","path":" deviceconfig system","host":"10.0.10.55","future_use2":"0","future_use1":"1","dg_hier_level_4":"0","dg_hier_level_3":"0","dg_hier_level_2":"0","dg_hier_level_1":"0","device_name":"paloalto","cmd":"set","client":"Web","admin":"admin","actionflags":"0x0"},"app":{"name":"panos"},"SOURCE":"s_net","RAWMSG":"<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto","PROGRAM":"paloalto_panos","PRIORITY":"info","MESSAGE":"1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto","ISODATE":"2020-04-14T16:54:18+02:00","HOST_FROM":"localhost","HOST":"paloalto.test.net","FACILITY":"user","DATE":"Apr 14 16:54:18"}

localhost:/etc/syslog-ng/conf.d # cat /var/log/raw
<13>Sep 14 15:25:54 localhost root: this is a regular syslog message
<12>Apr 14 16:48:54 paloalto.test.net 1,2020/04/14 16:48:54,unknown,SYSTEM,auth,0,2020/04/14 16:48:54,,auth-fail,,0,0,general,medium,failed authentication for user \'admin\'. Reason: Invalid username/password. From: 10.0.10.55.,1718,0x0,0,0,0,0,,paloalto
<14>Apr 14 16:54:18 paloalto.test.net 1,2020/04/14 16:54:18,unknown,CONFIG,0,0,2020/04/14 16:54:18,10.0.10.55,,set,admin,Web,Succeeded, deviceconfig system,127,0x0,0,0,0,0,,paloalto

When you send logs to port 5140 where the panos-parser() is the only parser, the results should be pretty similar. The only difference is that the regular syslog message is only saved to the raw file used for debugging, as it is discarded by the panos-parser().

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For contact information, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

OpenExpo Virtual Experience on Compilando Podcast #49

I don't know how I missed this episode of Compilando Podcast; I suppose it was the end-of-term maelstrom. So here is episode 49, titled «OpenExpo Virtual Experience con Philippe Lardy», which covers the event that, for reasons we all know (COVID-19), was held online.


In the words of the great Paco Estrada, which serve as the introduction to episode 49 of Compilando Podcast:


«OpenExpo Europe is the biggest event for open innovation, open source and free software in southern Europe. Every year we learn about the news and details of this event on Compilando Podcast.

Due to the coronavirus pandemic, the in-person 2020 edition had to be suspended, and so the event has reinvented itself.

OpenExpo wanted to stay true to its annual appointment and has transformed itself to offer a very attractive virtual experience between June 17 and 21, from 15:45 to 20:30 CET.

In this edition of Compilando Podcast we talk with its CEO, Philippe Lardy, about what this first online virtual edition means and the advantages it can bring: it is not just a collection of webinars; the meeting platform also facilitates contacts, talks and networking among the attendees. [..]»

As always, I invite you to listen to the full podcast and share it with the people around you and on your social networks.

More information: OpenExpo Virtual Experience con Philippe Lardy

What is Compilando Podcast?

Within the world of free software podcasts, of which there are many good ones, one stands out for the professionalism of the voice behind it, the great Paco Estrada, and for the care with which it is made. It is no coincidence that it won the Open Awards '18 for best media outlet, a recognition of its promotional work.

To sum up, Compilando Podcast is a personal project of its host, Paco Estrada, that brings together his passions and, on top of that, offers us a prodigious voice and perfect diction.

More information: Compilando Podcast

Sep 16th, 2020

Build multi-architecture container images using Kubernetes

Recently I’ve added some Raspberry Pi 4 nodes to the Kubernetes cluster I’m running at home.

The overall support of ARM inside of the container ecosystem improved a lot over the last years with more container images made available for the armv7 and the arm64 architectures.

But what about my own container images? I’m running some homemade containerized applications on top of this cluster and I would like to have them scheduled both on the x86_64 nodes and on the ARM ones.

There are many ways to build ARM container images. You can go from something as simple, and tedious, as performing manual builds on real or emulated ARM machines, to something more structured like using this GitHub Action or relying on something like the Open Build Service,…

My personal desire was to leverage my mixed Kubernetes cluster and perform the image building right on top of it.

Implementing this design has been a great learning experience, something IMHO worth sharing with others. The journey is too long to fit into a single blog post, so I’ll split my story into multiple posts.

Our journey begins with the challenge of building a container image from within a container.

Image building

The best-known way to build a container image is by using docker build. I didn’t want to use docker to build my images because the build process will take place right on top of Kubernetes, meaning the build will happen in a containerized way.

Some people use docker as the container runtime of their Kubernetes clusters and leverage that to mount the docker socket inside some of their containers. Once the docker socket is mounted, the containerized application has full access to the docker daemon running on the host. From there it’s game over: the container can perform actions such as building new images.

I’m a strong opponent of this approach because it’s highly insecure. Moreover, I’m not using docker as a container runtime, and I guess many people will stop doing so in the near future once dockershim gets deprecated. Translated: the majority of future Kubernetes clusters will have containerd, CRI-O or something similar instead of docker - hence bye bye to the docker socket hack.
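For reference, the socket mount described above typically shows up in a POD spec as a hostPath volume. This fragment is a sketch of the anti-pattern (container and volume names are made up), shown only to illustrate what I am advising against:

```yaml
# DO NOT do this in production: it hands the container full control
# over the host's docker daemon.
containers:
- name: builder
  image: docker
  volumeMounts:
  - name: docker-sock
    mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
  hostPath:
    path: /var/run/docker.sock
```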

There are however many other ways to build containers that are not based on docker build.

If you do a quick internet search about containerized image building you will definitely find kaniko. kaniko does exactly what I want: it performs containerized builds without using the docker daemon. There are also many examples covering image building on top of Kubernetes with kaniko. Unfortunately, at the time of writing, kaniko supports only the x86_64 architecture.

Our chances are not over yet because there’s another container building tool that can help us: buildah.

Buildah is part of the “libpod ecosystem”, which includes projects such as podman, skopeo and CRI-O. All these tools are available for multiple architectures: x86_64, aarch64 (aka ARM64), s390x and ppc64le.

Running buildah containerized

Buildah can build container images starting from a Dockerfile or in a more interactive way. All of that without requiring any privileged daemon running on your system.

Over the last years the buildah developers have spent quite some effort supporting the “containerized buildah” use case. This is the most recent blog post discussing this scenario in depth.

Upstream even has a Dockerfile that can be used to create a buildah container image. It can be found here.

I took this Dockerfile, made some minor adjustments and uploaded it to this project on the Open Build Service. As a result I got a multi architecture container image that can be pulled from registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest.

The storage driver

As some container veterans probably know, there are several types of storage drivers that can be used by container engines.

In case you’re not familiar with this topic you can read these great documentation pages from Docker:

Note well: despite being written for the docker container engine, this applies also to podman, buildah, CRI-O and containerd.

The most portable and performant storage driver is the overlay one. This is the one we want to use when running buildah containerized.

The overlay driver can be used in a safe way even inside of a container by leveraging fuse-overlayfs; this is described by the buildah blog post I linked above.

However, using the overlay storage driver inside of a container requires Fuse to be enabled on the host and, most importantly, it requires the /dev/fuse device to be accessible by the container.

Sharing the device cannot be done by simply mounting /dev/fuse as a volume, because some extra “low level” steps must be performed (like properly instructing the cgroup device hierarchy).

These extra steps are automatically handled by docker and podman via the --device flag of the run command:

$ podman run --rm -ti --device /dev/fuse buildahimage bash

This problem will need to be solved in a different way when buildah is run on top of Kubernetes.

Kubernetes device plugin

Special host devices can be shared with containers running inside of a Kubernetes POD by using a recent feature called Kubernetes device plugins.

Quoting the upstream documentation:

Kubernetes provides a device plugin framework that you can use to advertise system hardware resources to the Kubelet .

Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin that you deploy either manually or as a DaemonSet . The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters, and other similar computing resources that may require vendor specific initialization and setup.

This Kubernetes feature is commonly used to allow containerized machine learning workloads to access the GPU cards available on the host.

Luckily someone wrote a Kubernetes device plugin that exposes /dev/fuse to Kubernetes-managed containers: fuse-device-plugin.

I’ve forked the project, made some minor fixes to its Dockerfile and created a GitHub action to build the container image for amd64, armv7 and arm64 (a PR is coming soon). The images are available on the Docker Hub as flavio/fuse-device-plugin.

The fuse-device-plugin has to be deployed as a Kubernetes DaemonSet via this yaml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fuse-device-plugin-daemonset
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fuse-device-plugin-ds
  template:
    metadata:
      labels:
        name: fuse-device-plugin-ds
    spec:
      hostNetwork: true
      containers:
      - image: flavio/fuse-device-plugin:latest
        name: fuse-device-plugin-ctr
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        volumeMounts:
          - name: device-plugin
            mountPath: /var/lib/kubelet/device-plugins
      volumes:
        - name: device-plugin
          hostPath:
            path: /var/lib/kubelet/device-plugins

This is basically this file, with the flavio/fuse-device-plugin image being used instead of the original one (which is built only for x86_64).

Once the DaemonSet PODs are running on all the nodes of the cluster, we can see the Fuse device being exposed as an allocatable resource identified by the github.com/fuse key:

$ kubectl get nodes -o=jsonpath=$'{range .items[*]}{.metadata.name}: {.status.allocatable}\n{end}'
jam-2: map[cpu:4 ephemeral-storage:224277137028 github.com/fuse:5k memory:3883332Ki pods:110]
jam-1: map[cpu:4 ephemeral-storage:111984762997 github.com/fuse:5k memory:3883332Ki pods:110]
jolly: map[cpu:4 ephemeral-storage:170873316014 github.com/fuse:5k gpu.intel.com/i915:1 hugepages-1Gi:0 hugepages-2Mi:0 memory:16208280Ki pods:110]

The Fuse device can then be made available to a container by specifying a resource limit:

apiVersion: v1
kind: Pod
metadata:
  name: fuse-example
spec:
  containers:
  - name: main
    image: alpine
    command: ["ls", "-l", "/dev"]
    resources:
      limits:
        github.com/fuse: 1

If you look at the logs of this POD you will see something like this:

$ kubectl logs fuse-example
total 0
lrwxrwxrwx    1 root     root            11 Sep 15 08:31 core -> /proc/kcore
lrwxrwxrwx    1 root     root            13 Sep 15 08:31 fd -> /proc/self/fd
crw-rw-rw-    1 root     root        1,   7 Sep 15 08:31 full
crw-rw-rw-    1 root     root       10, 229 Sep 15 08:31 fuse
drwxrwxrwt    2 root     root            40 Sep 15 08:31 mqueue
crw-rw-rw-    1 root     root        1,   3 Sep 15 08:31 null
lrwxrwxrwx    1 root     root             8 Sep 15 08:31 ptmx -> pts/ptmx
drwxr-xr-x    2 root     root             0 Sep 15 08:31 pts
crw-rw-rw-    1 root     root        1,   8 Sep 15 08:31 random
drwxrwxrwt    2 root     root            40 Sep 15 08:31 shm
lrwxrwxrwx    1 root     root            15 Sep 15 08:31 stderr -> /proc/self/fd/2
lrwxrwxrwx    1 root     root            15 Sep 15 08:31 stdin -> /proc/self/fd/0
lrwxrwxrwx    1 root     root            15 Sep 15 08:31 stdout -> /proc/self/fd/1
-rw-rw-rw-    1 root     root             0 Sep 15 08:31 termination-log
crw-rw-rw-    1 root     root        5,   0 Sep 15 08:31 tty
crw-rw-rw-    1 root     root        1,   9 Sep 15 08:31 urandom
crw-rw-rw-    1 root     root        1,   5 Sep 15 08:31 zero

Now that this problem is solved we can move to the next one. 😉

Obtaining the source code of our image

The source code of the “container image to be built” must be made available to the containerized buildah.

As many people do, I keep all my container definitions versioned inside of Git repositories. I had to find a way to clone the Git repository holding the definition of the “container image to be built” inside of the container running buildah.

I decided to settle for this POD layout:

  • The main container of the POD is going to be the one running buildah.
  • The POD will have a Kubernetes init container that will git clone the source code of the “container image to be built” before the main container is started.

The contents produced by the git clone must be placed into a directory that can be accessed later on by the main container. I decided to use a Kubernetes volume of type emptyDir to create a shared storage between the init and the main containers. The emptyDir volume is just perfect: it doesn’t need any fancy Kubernetes Storage Class and it will automatically vanish once the build is done.
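The layout can be sketched with a minimal POD (all names here are placeholders): the init container writes into the emptyDir volume, and the main container reads from it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-sketch
spec:
  initContainers:
  - name: producer
    image: alpine
    command: ["sh", "-c", "echo hello > /work/data"]
    volumeMounts:
    - name: work
      mountPath: /work
  containers:
  - name: consumer
    image: alpine
    command: ["cat", "/work/data"]
    volumeMounts:
    - name: work
      mountPath: /work
  volumes:
  - name: work
    emptyDir: {}
```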

To check out the Git repository I decided to settle on the official Kubernetes git-sync container.

Quoting its documentation:

git-sync is a simple command that pulls a git repository into a local directory. It is a perfect “sidecar” container in Kubernetes - it can periodically pull files down from a repository so that an application can consume them.

git-sync can pull one time, or on a regular interval. It can pull from the HEAD of a branch, from a git tag, or from a specific git hash. It will only re-pull if the target of the run has changed in the upstream repository. When it re-pulls, it updates the destination directory atomically. In order to do this, it uses a git worktree in a subdirectory of the --root and flips a symlink.

git-sync can pull over HTTP(S) (with authentication or not) or SSH.

This is just what I was looking for.

I will start git-sync with the following parameters:

  • --one-time: this is needed to make git-sync exit once the checkout is done; otherwise it will keep running forever and it will periodically look for new commits inside of the repository. I don’t need that, plus this would cause the main container to wait indefinitely for the init container to exit.
  • --depth 1: this is done to limit the checkout to the latest commit. I’m not interested in the history of the repository. This will make the checkout faster and use less bandwidth and disk space.
  • --repo <my-repo>: the repo I want to check out.
  • --branch <my-branch>: the branch to checkout.

The git-sync container image was already built for multiple architectures, but unfortunately it turned out the non-x86_64 images were broken. The issue has recently been solved with the v3.1.7 release.

While waiting for the issue to be fixed I just rebuilt the container image on the Open Build Service. This is no longer needed; everybody can just use the official image.

Trying the first build

It’s now time to perform a simple test run. We will define a simple Kubernetes POD that will:

  1. Checkout the source code of a simple container image
  2. Build the container image using buildah

This is the POD definition:

apiVersion: v1
kind: Pod
metadata:
  name: builder-amd64
spec:
  nodeSelector:
    kubernetes.io/arch: "amd64"
  initContainers:
  - name: git-sync
    image: k8s.gcr.io/git-sync/git-sync:v3.1.7
    args: [
      "--one-time",
      "--depth", "1",
      "--dest", "checkout",
      "--repo", "https://github.com/flavio/guestbook-go.git",
      "--branch", "master"]
    volumeMounts:
      - name: code
        mountPath: /tmp/git
  volumes:
  - name: code
    emptyDir:
      medium: Memory
  containers:
  - name: main
    image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
    command: ["/bin/sh"]
    args: ["-c", "cd /code; cd $(readlink checkout); buildah bud -t guestbook ."]
    volumeMounts:
      - name: code
        mountPath: /code
    resources:
      limits:
        github.com/fuse: 1

Let’s break it down into pieces.

Determine image architecture

The POD uses a Kubernetes node selector to ensure the build happens on a node with the x86_64 architecture. By doing that we will know the architecture of the final image.

Checkout the source code

As said earlier, the Git repository is checked out using an init container:

  initContainers:
  - name: git-sync
    image: k8s.gcr.io/git-sync/git-sync:v3.1.7
    args: [
      "--one-time",
      "--depth", "1",
      "--dest", "checkout",
      "--repo", "https://github.com/flavio/guestbook-go.git",
      "--branch", "master"]
    volumeMounts:
      - name: code
        mountPath: /tmp/git

The Git repository and the branch are currently hard-coded into the POD definition; this will be fixed later on. Right now that’s good enough to see whether things are working (spoiler alert: they won’t 😅).

The git-sync container will run before the main container and it will write the source code of the “container image to be built” inside of a Kubernetes volume named code.

This is what the volume looks like after git-sync has run:

$ ls -lh <root of the volume>
drwxr-xr-x 9 65533 65533 300 Sep 15 09:41 .git
lrwxrwxrwx 1 65533 65533  44 Sep 15 09:41 checkout -> rev-155a69b7f81d5b010c5468a2edfbe9228b758d64
drwxr-xr-x 6 65533 65533 280 Sep 15 09:41 rev-155a69b7f81d5b010c5468a2edfbe9228b758d64

The source code is stored under the rev-<git commit ID> directory. There’s a symlink named checkout that points to it. As you will see later, this will lead to a small twist.

Shared volume

The source code of our application is stored inside of a Kubernetes volume of type emptyDir:

  volumes:
  - name: code
    emptyDir:
      medium: Memory

I’ve also instructed Kubernetes to store the volume in memory. Behind the scenes, the kubelet uses tmpfs to do that.

The buildah container

The POD will have just one container running inside of it. This is called main and its only purpose is to run buildah.

This is the definition of the container:

  containers:
  - name: main
    image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
    command: ["/bin/sh"]
    args: ["-c", "cd /code; cd $(readlink checkout); buildah bud -t guestbook ."]
    volumeMounts:
      - name: code
        mountPath: /code
    resources:
      limits:
        github.com/fuse: 1

As expected, the container is mounting the code Kubernetes volume too. Moreover, the container is requesting one resource of type github.com/fuse; as explained above, this is needed to make /dev/fuse available inside of the container.

The container executes a simple shell one-liner, which can be expanded to this:

cd /code
cd $(readlink checkout)
buildah bud -t guestbook .

There’s one interesting detail in there. As you can see, I’m not “cd-ing” straight into /code/checkout; instead, I move into /code and then resolve the actual target of the checkout symlink.

We can’t move straight into /code/checkout because that would give us an error:

builder:/ # cd /code/checkout
bash: cd: /code/checkout: Permission denied

This happens because /proc/sys/fs/protected_symlinks is turned on by default. As you can read here, this is a way to protect against specific types of exploits. Not even root inside of the container can jump straight into /code/checkout; this is why I’m using this workaround.
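The workaround is easy to reproduce outside of Kubernetes with a throwaway directory layout (the paths below are made up for illustration):

```shell
# Recreate the volume layout that git-sync produces: a rev-<hash>
# directory plus a "checkout" symlink pointing at it.
mkdir -p /tmp/volume-demo/rev-155a69b
ln -sfn rev-155a69b /tmp/volume-demo/checkout

# Resolve the symlink target first, then cd into the real directory,
# instead of cd-ing through the symlink itself.
cd /tmp/volume-demo
cd "$(readlink checkout)"
pwd   # prints /tmp/volume-demo/rev-155a69b
```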

One last note: as you have probably noticed, buildah is just building the container image; it’s not pushing it to any registry. We don’t care about that right now.

An unexpected problem

Our journey is not over yet, there’s one last challenge ahead of us.

Before digging into the issue, let me provide some background. My local cluster was initially made up of one x86_64 node running openSUSE Leap 15.2 and two ARM64 nodes running the beta ARM64 build of Raspberry Pi OS (formerly known as Raspbian).

I used the POD definition shown above to define two PODs:

  • builder-amd64: the nodeSelector constraint targets the amd64 architecture
  • builder-arm64: the nodeSelector constraint targets the arm64 architecture

That led to an interesting finding: the builds on the ARM64 nodes worked fine, while all the builds on the x86_64 node failed.

The failure was always the same and happened straight at the beginning of the process:

$ kubectl logs -f builder-amd64
mount /var/lib/containers/storage/overlay:/var/lib/containers/storage/overlay, flags: 0x1000: permission denied
level=error msg="exit status 125"

To me, that immediately smelled like a security feature blocking buildah.

Finding the offending security check

I needed something faster than kubectl to iterate on this problem. Luckily, I was able to reproduce the same error while running buildah locally using podman:

$ sudo podman run \
    --rm \
    --device /dev/fuse \
    -v <path-to-container-image-sources>:/code \
    registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest \
    /bin/sh -c "cd /code; buildah bud -t foo ."

I was pretty sure the failure happened due to some tight security check. To prove my theory I ran the same container in privileged mode:

$ sudo podman run \
    --rm \
    --device /dev/fuse \
    --privileged \
    -v <path-to-container-image-sources>:/code \
    registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest \
    /bin/sh -c "cd /code; buildah bud -t foo ."

The build completed successfully. Running a container in privileged mode is bad and it pains me; it’s not a long-term solution, but at least it proved the build failure was definitely caused by some security constraint.

The next step was to identify the security measure at the origin of the failure. That could be either something related to seccomp or to AppArmor. I immediately ruled out SELinux as the root cause because it’s not used on openSUSE by default.

I then ran the container again, but this time I instructed podman to not apply any kind of seccomp profile; I basically disabled seccomp for my containerized workload.

This can be done by using the unconfined mode for seccomp:

$ sudo podman run \
    --rm \
    --device /dev/fuse \
    -v <path-to-container-image-sources>:/code \
    --security-opt=seccomp=unconfined \
    registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest \
    /bin/sh -c "cd /code; buildah bud -t foo ."

The build failed again with the same error. That meant seccomp was not causing the failure. AppArmor was left as the main suspect.

Next, I ran the container again, but instructed podman not to apply any kind of AppArmor profile; again, I basically disabled AppArmor for my containerized workload.

This can be done by using the unconfined mode for AppArmor:

$ sudo podman run \
    --rm \
    --device /dev/fuse \
    -v <path-to-container-image-sources>:/code \
    --security-opt=apparmor=unconfined \
    registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest \
    /bin/sh -c "cd /code; buildah bud -t foo ."

This time the build completed successfully. Hence the issue was caused by the default AppArmor profile.

Create an AppArmor profile for buildah

All the container engines (docker, podman, CRI-O, containerd) have an AppArmor profile that is applied to all the containerized workloads by default.

The containerized Buildah is probably doing something that is not allowed by this generic profile. I just had to identify the offending operation and create a new tailor-made AppArmor profile for buildah.

As a first step I had to obtain the default AppArmor profile. This is not as easy as it might seem. The profile is generated at runtime by all the container engines and is loaded into the kernel. Unfortunately there’s no way to dump the information stored into the kernel and have a human-readable AppArmor profile.

After some digging into the source code of podman and some reading on docker’s GitHub issues, I produced a quick PR that allowed me to print the default AppArmor profile on to the stdout.

This is the default AppArmor profile used by podman:

#include <tunables/global>


profile default flags=(attach_disconnected,mediate_deleted) {

  #include <abstractions/base>


  network,
  capability,
  file,
  umount,


  # Allow signals from privileged profiles and from within the same profile
  signal (receive) peer=unconfined,
  signal (send,receive) peer=default,


  deny @{PROC}/* w,   # deny write for all files directly in /proc (not in a subdir)
  # deny write to files not in /proc/<number>/** or /proc/sys/**
  deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
  deny @{PROC}/sys/[^k]** w,  # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
  deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w,  # deny everything except shm* in /proc/sys/kernel/
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/kcore rwklx,

  deny mount,

  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/** rwklx,
  deny /sys/kernel/security/** rwklx,


  # suppress ptrace denials when using using 'ps' inside a container
  ptrace (trace,read) peer=default,

}

A small aside: this AppArmor profile is the same one generated by all the other container engines. Some poor folks keep this file in sync manually, but there’s a discussion upstream to better organize things.

Back to the build failure caused by AppArmor… I saved the default profile into a text file named containerized_buildah and I changed this line

profile default flags=(attach_disconnected,mediate_deleted) {

to look like that:

profile containerized_buildah flags=(attach_disconnected,mediate_deleted,complain) {

This changes the name of the profile and, most importantly, changes the policy mode to complain instead of enforcement.

Quoting the AppArmor man page:

  • enforcement - Profiles loaded in enforcement mode will result in enforcement of the policy defined in the profile as well as reporting policy violation attempts to syslogd.
  • complain - Profiles loaded in “complain” mode will not enforce policy. Instead, it will report policy violation attempts. This mode is convenient for developing profiles.

I then loaded the policy by doing:

$ sudo apparmor_parser -r containerized_buildah

Invoking the aa-status command reports a list of all the profiles loaded, their policy modes, and all the processes confined by AppArmor.

$ sudo aa-status
...
2 profiles are in complain mode.
   containerized_buildah
...

One last operation had to be done before I could start debugging the containerized buildah: turn off “audit quieting”. Again, straight from AppArmor’s man page:

Turn off deny audit quieting

By default, operations that trigger “deny” rules are not logged. This is called deny audit quieting.

To turn off deny audit quieting, run:

echo -n noquiet >/sys/module/apparmor/parameters/audit

Before starting the container, I opened a new terminal to execute this process:

# tail -f /var/log/audit/audit.log | tee apparmor-build.log

On systems where auditd is running (like mine), all the AppArmor logs are sent to /var/log/audit/audit.log. This command allowed me to keep an eye open on the live stream of audit logs and save them into a smaller file named apparmor-build.log.

Finally, I started the container using the custom AppArmor profile shown above:

$ sudo podman run \
    --rm \
    --device /dev/fuse \
    -v <path-to-container-image-sources>:/code \
    --security-opt=apparmor=containerized_buildah \
    registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest \
    /bin/sh -c "cd /code; buildah bud -t foo ."

The build completed successfully. Grepping for ALLOWED inside of the audit file returned a stream of entries like the following ones:

type=AVC msg=audit(1600172410.567:622): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/tmp/containers.o5iLtx" pid=25607 comm="exe" srcname="/usr/bin/buildah" flags="rw, bind"
type=AVC msg=audit(1600172410.567:623): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/tmp/containers.o5iLtx" pid=25607 comm="exe" flags="ro, remount, bind"
type=AVC msg=audit(1600172423.511:624): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/" pid=25629 comm="exe" flags="rw, rprivate"
...

As you can see all these entries are about mount operations, with mount being invoked with quite an assortment of flags.
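A quick way to see that assortment is to extract the unique flag combinations from the captured apparmor-build.log. A small sketch, seeded with the sample audit entries shown above:

```shell
# Seed a sample log with the audit entries captured during the build.
cat > /tmp/apparmor-build.log <<'EOF'
type=AVC msg=audit(1600172410.567:622): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/tmp/containers.o5iLtx" pid=25607 comm="exe" srcname="/usr/bin/buildah" flags="rw, bind"
type=AVC msg=audit(1600172410.567:623): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/tmp/containers.o5iLtx" pid=25607 comm="exe" flags="ro, remount, bind"
type=AVC msg=audit(1600172423.511:624): apparmor="ALLOWED" operation="mount" info="failed mntpnt match" error=-13 profile="containerized_buildah" name="/" pid=25629 comm="exe" flags="rw, rprivate"
EOF

# List the unique mount-flag combinations that hit the profile.
grep ALLOWED /tmp/apparmor-build.log | grep -o 'flags="[^"]*"' | sort -u
```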

The default AppArmor profile explicitly denies mount operations:

...
  deny mount,
...

All I had to do was to change the containerized_buildah AppArmor profile to that:

#include <tunables/global>


profile containerized_buildah flags=(attach_disconnected,mediate_deleted) {

  #include <abstractions/base>


  network,
  capability,
  file,
  umount,
  mount,

  # Allow signals from privileged profiles and from within the same profile
  signal (receive) peer=unconfined,
  signal (send,receive) peer=default,


  deny @{PROC}/* w,   # deny write for all files directly in /proc (not in a subdir)
  # deny write to files not in /proc/<number>/** or /proc/sys/**
  deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
  deny @{PROC}/sys/[^k]** w,  # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
  deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w,  # deny everything except shm* in /proc/sys/kernel/
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/kcore rwklx,

  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/** rwklx,
  deny /sys/kernel/security/** rwklx,


  # suppress ptrace denials when using using 'ps' inside a container
  ptrace (trace,read) peer=default,

}

The profile is now back to enforcement mode and, most important of all, it allows any kind of mount invocation.

I tried to be more granular and allow only the mount flags actually used by buildah, but the list was too long, there were too many combinations and that seemed pretty fragile. The last thing I want to happen is to have AppArmor break buildah in the future if a slightly different mount operation is done.

Reloading the AppArmor profile via sudo apparmor_parser -r containerized_buildah and restarting the build proved that the profile was doing its job also in enforcement mode: the build successfully completed. 🎉🎉🎉

But the journey is not over yet, not quite…

Why was AppArmor blocking only x86_64 builds?

Once I figured out the root cause of the x86_64 build failures, there was one last mystery to be solved: why did the ARM64 builds work just fine? Why didn’t AppArmor cause any issues over there?

The answer was quite simple (and a bit shocking to me): it turned out the Raspberry Pi OS (formerly known as raspbian) ships a kernel that doesn’t have AppArmor enabled. I never realized that!

I didn’t find the idea of running containers without any form of Mandatory Access Control particularly thrilling. Hence, I decided to change the operating system running on my Raspberry Pi nodes.

I initially picked Raspberry Pi OS because I wanted to have my Raspberry Pi 4 boot straight from an external USB disk instead of the internal memory card. At the time of writing, this feature requires a bleeding edge firmware and all the documentation points at Raspberry Pi OS. I just wanted to stick with what the community was using to reduce my chances of failure…

However, if you need AppArmor support, you’re left with two options: openSUSE and Ubuntu.

I installed openSUSE Leap 15.2 for aarch64 (aka ARM64) on one of my Raspberry Pi 4 boards. The process of getting it to boot from USB was pretty straightforward. I added the node back into the Kubernetes cluster, forced some workloads to move on top of it and monitored its behaviour. Everything was great, and I was ready to put openSUSE on my 2nd Raspberry Pi 4 when I noticed something strange: my room was quieter than usual…

My Raspberry Pis are powered using the official PoE HAT. I love this hat, but I hate its built-in fan because it’s notoriously loud (yes, you can tune its thresholds, but it’s still damn noisy when it kicks in).

Well, my room was suddenly quieter because the fan of the PoE HAT was not spinning at all. That led the CPU temperature to reach more than 85 °C 😱

It turns out the PoE HAT needs a driver which is not part of the mainline kernel, and unfortunately nobody has added it to the openSUSE kernel yet. That means openSUSE doesn’t see the PoE HAT fan and never turns it on (not even at full speed).

Unfortunately that was a blocking issue for me. I filed an enhancement bug report against openSUSE Tumbleweed to get the PoE HAT driver added to our kernel, and moved over to Ubuntu in the meantime. What a pity 😢

On the other hand, the kernel of Ubuntu Server supports both the PoE HAT fan and AppArmor. After some testing I switched all my Raspberry Pi nodes to run Ubuntu 20.04 Server.

To prove I hadn’t lost my mind, I ran the builder-arm64 POD against the Ubuntu nodes using the default AppArmor profile. The build failed on ARM64 in the same way as it did on x86_64. What a relief 😅.

Kubernetes and AppArmor profiles

At this point I have a tailor-made AppArmor profile for buildah, and all the nodes of my cluster have AppArmor support. It’s time to put all the pieces together!

The previous POD definition has to be extended to ensure the main container running buildah is using the tailor-made AppArmor profile instead of the default one.

Kubernetes’ AppArmor support is a bit primitive, but effective. The only requirement when using custom profiles is to ensure the profile is already known to AppArmor on each node of the cluster.

This can be done in an easy way: just copy the profile under /etc/apparmor.d and perform a systemctl reload apparmor. This has to be done only once; at the next boot the AppArmor service will automatically load all the profiles found inside of /etc/apparmor.d.

This is what the final POD definition looks like:

apiVersion: v1
kind: Pod
metadata:
  name: builder-amd64
  annotations:
    container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
spec:
  nodeSelector:
    kubernetes.io/arch: "amd64"
  containers:
  - name: main
    image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
    command: ["/bin/sh"]
    args: ["-c", "cd /code; cd $(readlink checkout); buildah bud -t guestbook ."]
    volumeMounts:
      - name: code
        mountPath: /code
    resources:
      limits:
        github.com/fuse: 1
  initContainers:
  - name: git-sync
    image: k8s.gcr.io/git-sync/git-sync:v3.1.7
    args: [
      "--one-time",
      "--depth", "1",
      "--dest", "checkout",
      "--repo", "https://github.com/flavio/guestbook-go.git",
      "--branch", "master"]
    volumeMounts:
      - name: code
        mountPath: /tmp/git
  volumes:
  - name: code
    emptyDir:
      medium: Memory

This time the build will work fine also inside of Kubernetes, regardless of the node architecture! 🥳

What’s next?

First of all, congratulations for having made it to this point. It has been quite a long journey; I hope you enjoyed it.

The next step consists of taking this foundation (a Kubernetes POD that can run buildah to build new container images) and find a way to orchestrate that.

What I’ll show you in the next blog post is how to create a workflow that, given a GitHub repository with a Dockerfile, builds two container images (amd64 and arm64), pushes both of them to a container registry and then creates a multi-architecture manifest referencing them.

As always feedback is welcome, see you soon!

Conference Organizers Announce Schedule, Platform Registration

Organizers of the openSUSE + LibreOffice Conference are pleased to announce that the schedule for the conference has been published.

All times on the schedule are published in Coordinated Universal Time. The conference will take place live Oct. 15 through Oct. 17 using the https://oslo.gonogo.live/ platform.

There are more than 100 talks scheduled that range from talks about the openSUSE and LibreOffice projects to talks about documentation. There are talks about open-source projects, cloud and container technologies, embedded devices, community development, translations, marketing, future technologies, quality assurance and more. 

There will be multiple sessions happening at the same time, so some talks might overlap. Attendees have an option to personalize a schedule so that they are reminded when the live talk they would like to see begins. 

Live talks scheduled for the event will be either 15-minute short talks, 30-minute normal talks or 60-minute workgroup/panel sessions.

Attendees will be able to register on https://oslo.gonogo.live/ before the event on Oct. 14. The https://oslo.gonogo.live/ platform is designed as a social conferencing tool. Users can put in profile information, share their location and interact with other members of the audience. 

All attendees are encouraged to open the upper-left menu and click the Info/Tour button to get familiar with the platform. Presenters will be able to control their videos: pause, rewind, fast-forward and so on are built into the system. The video needs to be an mp4 shared from a URL that everyone can access; this could be Google Drive, NextCloud, ownCloud or another platform that can share video.

Conference t-shirts can be purchased in the platform under the shop button starting from October 13.

Sep 15th, 2020

Akademy Award 2020: the KDE Community’s awards

Even though this year’s Akademy was a special edition, it did not go without the Akademy Awards 2020, the KDE Community’s awards. They recognize the people who have stood out in the development of the KDE Project. Let’s pay them a small tribute.


Traditionally, the talks and hallway mini-meetings of any Akademy weekend end with the announcement of the Akademy Awards, the prizes given to those members of the KDE Community who stand out for one reason or another.


This year has been a bit special: Akademy 2020 will be remembered as an online edition, where the talks and presentations took place as web conferences and the hallway meetings happened via Matrix, IRC or Telegram.

Nevertheless, the Akademy Awards were not missing (this time handed out at the end of Akademy). These awards are granted in a curious way: the recipients are chosen by the previous edition’s winners. This diversifies the winners over the years, as can be seen in the list of awardees.


Thus Volker Krause, Nate Graham and Marco Martin, the 2019 winners, have awarded the Akademy Awards 2020 to:

  • Best application award to Bhushan Shah for creating a new platform, Plasma Mobile, on which new smartphone applications can thrive.
  • Best non-application award to Carl Schwan for his work revamping the KDE websites.
  • Jury award to Luigi Toscano for his work on localization.

And, of course, explicit recognition for the organizers of the online Akademy 2020; while this is customary at in-person events, this year it was vital. In short, special thanks to Kenny Coyle, Kenny Duffus, Allyson Alexandrou and Bhavisha Dhruve for successfully steering an event full of technical difficulties to a safe harbor.

Congratulations to all of them!

More information: KDE.News

Notes on requesting an SSL certificate with certbot on openSUSE in Azure


OS: openSUSE Leap 15.2 in Azure

Nginx: 1.16.1

DNS provider: gandi.net


Today I tested using certbot, an ACME client, to request a Let’s Encrypt certificate.


Let’s Encrypt official Getting Started page


See the Certbot documentation for openSUSE Leap 15 with nginx.




First, install the certbot package.


Install it with zypper:

# zypper  install  python3-certbot


  • I specified python3-certbot here, because installing certbot would pull in the Python 2 version; to run certbot on Python 3 you have to name the package explicitly.


Since today we only want to request an SSL certificate through certbot, we run it in certonly mode.



# certbot  certonly  --manual  --preferred-challenges=dns  -d   ines.tw


Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None

Enter email address (used for urgent renewal and security notices)

 (Enter 'c' to cancel):  sakana@study-area.org (contact email)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Please read the Terms of Service at

https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must

agree in order to register with the ACME server at

https://acme-v02.api.letsencrypt.org/directory

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(A)gree/(C)ancel: A (agree to the terms)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Would you be willing to share your email address with the Electronic Frontier

Foundation, a founding partner of the Let's Encrypt project and the non-profit

organization that develops Certbot? We'd like to send you email about our work

encrypting the web, EFF news, campaigns, and ways to support digital freedom.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(Y)es/(N)o: Y (agree to share the email address; up to you)

Obtaining a new certificate

Performing the following challenges:

dns-01 challenge for ines.tw


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.


Are you OK with your IP being logged?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(Y)es/(N)o: Y (agree to the IP being logged; again, up to you)


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Please deploy a DNS TXT record under the name

_acme-challenge.ines.tw with the following value:


gVIVkBS2LLHzu1HSqOTUwE3LOddA3jhtAgPkDL1wosw


Before continuing, verify the record is deployed 

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Press Enter to Continue (before pressing Enter, make sure the TXT record with the value shown above has been created at your DNS provider)

Waiting for verification...

Cleaning up challenges


IMPORTANT NOTES:

 - Congratulations! Your certificate and chain have been saved at:

   /etc/letsencrypt/live/ines.tw/fullchain.pem

   Your key file has been saved at:

   /etc/letsencrypt/live/ines.tw/privkey.pem

   Your cert will expire on 2020-12-06. To obtain a new or tweaked

   version of this certificate in the future, simply run certbot

   again. To non-interactively renew *all* of your certificates, run

   "certbot renew"

 - Your account credentials have been saved in your Certbot

   configuration directory at /etc/letsencrypt. You should make a

   secure backup of this folder now. This configuration directory will

   also contain certificates and private keys obtained by Certbot so

   making regular backups of this folder is ideal.

 - If you like Certbot, please consider supporting our work by:


   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate

   Donating to EFF:                    https://eff.org/donate-le


  • certonly: only request the certificate

  • --manual: manual mode

  • --preferred-challenges=dns: validate the domain via a DNS challenge

  • -d ines.tw: the domain name to request

  • The certificates are stored under /etc/letsencrypt/live/<your domain>

  • Each certificate is issued for 90 days
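Since certificates only last 90 days, renewal is worth automating. A sketch of a root crontab entry (the schedule is just an example; note that certificates obtained with --manual DNS challenges cannot renew unattended unless an auth hook or a DNS plugin is configured):

```
# Example root crontab entry: attempt renewal twice a day (times arbitrary).
# Certbot only renews certificates that are close to expiry.
17 3,15 * * *  certbot renew --quiet
```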


Inspecting the results


# ls  -lh  /etc/letsencrypt/live/ines.tw/


total 4.0K


-rw-r--r-- 1 root root 692 Sep  7 15:01 README

lrwxrwxrwx 1 root root  31 Sep  7 15:01 cert.pem -> ../../archive/ines.tw/cert1.pem

lrwxrwxrwx 1 root root  32 Sep  7 15:01 chain.pem -> ../../archive/ines.tw/chain1.pem

lrwxrwxrwx 1 root root  36 Sep  7 15:01 fullchain.pem -> ../../archive/ines.tw/fullchain1.pem

lrwxrwxrwx 1 root root  34 Sep  7 15:01 privkey.pem -> ../../archive/ines.tw/privkey1.pem


There are four main files:


cert.pem: your domain’s SSL certificate

  • corresponds to certificate.crt from sslforfree (the public certificate)


chain.pem: the Let’s Encrypt chain certificate

  • corresponds to ca_bundle.crt from sslforfree (the intermediate certificate)

 

fullchain.pem: cert.pem and chain.pem combined

  • this is the file nginx uses when configuring SSL


privkey.pem: your certificate’s private key

  • corresponds to private.key from sslforfree (the private key)
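To peek inside any of these files, openssl is handy. A small sketch, using a throwaway self-signed certificate under /tmp as a stand-in for the real /etc/letsencrypt/live/ines.tw/cert.pem:

```shell
# Generate a throwaway self-signed certificate to inspect
# (stand-in for the real /etc/letsencrypt/live/<domain>/cert.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=ines.tw" \
    -keyout /tmp/privkey.pem -out /tmp/cert.pem 2>/dev/null

# Show the subject and the validity window of the certificate.
openssl x509 -in /tmp/cert.pem -noout -subject -dates
```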


That completes the request, but how do you know which certificates you currently have?

You can list them with the following command:


# certbot  certificates


Saving debug log to /var/log/letsencrypt/letsencrypt.log


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Found the following certs:

  Certificate Name: ines.tw

    Serial Number: 4c5679bc25190a70e2e9072885094771114

    Domains: ines.tw

    Expiry Date: 2020-12-06 14:01:32+00:00 (VALID: 89 days)

    Certificate Path: /etc/letsencrypt/live/ines.tw/fullchain.pem

    Private Key Path: /etc/letsencrypt/live/ines.tw/privkey.pem

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


Next question: what if you want to add other FQDNs, or add *.ines.tw?


Adding *.ines.tw


# certbot  certonly  --manual  --preferred-challenges=dns --cert-name  ines.tw  -d ines.tw,*.ines.tw


Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

You are updating certificate ines.tw to include new domain(s):

+ *.ines.tw


You are also removing previously included domain(s):

(None)


Did you intend to make this change?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(U)pdate cert/(C)ancel: U

Renewing an existing certificate

Performing the following challenges:

dns-01 challenge for ines.tw


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.


Are you OK with your IP being logged?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

(Y)es/(N)o: Y


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Please deploy a DNS TXT record under the name

_acme-challenge.ines.tw with the following value:


AFatx1Qx8ylhYIPmnSFIAFktRQ00GI7SbzUtHqTADJc


Before continuing, verify the record is deployed.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Press Enter to Continue (before pressing Enter, make sure the TXT record with the value shown above has been created at your DNS provider)

Waiting for verification...

Cleaning up challenges


IMPORTANT NOTES:

 - Congratulations! Your certificate and chain have been saved at:

   /etc/letsencrypt/live/ines.tw/fullchain.pem

   Your key file has been saved at:

   /etc/letsencrypt/live/ines.tw/privkey.pem

   Your cert will expire on 2020-12-06. To obtain a new or tweaked

   version of this certificate in the future, simply run certbot

   again. To non-interactively renew *all* of your certificates, run

   "certbot renew"

 - If you like Certbot, please consider supporting our work by:


   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate

   Donating to EFF:                    https://eff.org/donate-le
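One caveat about the "certbot renew" suggestion in the output above: because this certificate was issued with the manual DNS-01 challenge, a plain renew will stop and ask for a new TXT record unless you also configure an authentication hook, so it pays to keep an eye on the expiry date. Below is a minimal sketch of the day-counting logic a reminder cron job could use. The dates are pinned to the values from the output above purely for illustration (GNU date assumed); in real use EXPIRY would come from `openssl x509 -enddate -noout -in /etc/letsencrypt/live/ines.tw/fullchain.pem` and TODAY from `date +%Y-%m-%d`.

```shell
# Day-counting sketch for a renewal reminder (dates pinned for illustration).
EXPIRY="2020-12-06"   # expiry date reported by certbot above
TODAY="2020-09-08"    # pinned here; normally $(date +%Y-%m-%d)

exp_s=$(date -d "$EXPIRY" +%s)       # expiry as epoch seconds
now_s=$(date -d "$TODAY" +%s)        # "today" as epoch seconds
days=$(( (exp_s - now_s) / 86400 ))  # whole days remaining

if [ "$days" -lt 30 ]; then
    status="RENEW"
else
    status="OK"
fi
echo "$status: $days days left"
```

Run from cron, a script like this could mail you once `status` flips to RENEW, leaving enough time to redo the manual DNS challenge.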


Inspecting the certificate information


# certbot  certificates


Saving debug log to /var/log/letsencrypt/letsencrypt.log


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Found the following certs:

  Certificate Name: ines.tw

    Serial Number: 4d2c4a18b7d8f375fca8d127cefc677e152

    Domains: ines.tw *.ines.tw

    Expiry Date: 2020-12-06 14:21:22+00:00 (VALID: 89 days)

    Certificate Path: /etc/letsencrypt/live/ines.tw/fullchain.pem

    Private Key Path: /etc/letsencrypt/live/ines.tw/privkey.pem

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -



Lab 2: Verifying the SSL certificate


So how do we verify that it actually works?

I use nginx to verify it. First, set up a default web page.

You can refer to http://sakananote2.blogspot.com/2020/02/nginx-with-opensuse-leap-151-in-azure.html


For easier management, I create an ssl directory under /etc/nginx

# mkdir  /etc/nginx/ssl


Copy the previously issued certificate files into this directory


# cp  /etc/letsencrypt/live/ines.tw/fullchain.pem  /etc/nginx/ssl/

# cp  /etc/letsencrypt/live/ines.tw/privkey.pem  /etc/nginx/ssl/
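Before pointing nginx at the copied files, it is worth checking that the certificate and private key actually match, since a mismatched pair makes nginx refuse to start. The sketch below demonstrates the check on a throwaway self-signed pair so it can run anywhere; in practice you would set CERT and KEY to /etc/nginx/ssl/fullchain.pem and /etc/nginx/ssl/privkey.pem instead.

```shell
# Sanity check: does the certificate match the private key?
# Demonstrated on a throwaway self-signed pair; in practice point CERT/KEY
# at /etc/nginx/ssl/fullchain.pem and /etc/nginx/ssl/privkey.pem.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=ines.tw" \
    -keyout "$tmp/privkey.pem" -out "$tmp/fullchain.pem" 2>/dev/null
CERT="$tmp/fullchain.pem"
KEY="$tmp/privkey.pem"

# The RSA modulus must be identical in both files.
cert_mod=$(openssl x509 -noout -modulus -in "$CERT")
key_mod=$(openssl rsa -noout -modulus -in "$KEY")

if [ "$cert_mod" = "$key_mod" ]; then
    echo "certificate and key match"
else
    echo "MISMATCH: nginx would fail to start" >&2
fi
rm -rf "$tmp"
```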


Edit the nginx configuration file

# vim  /etc/nginx/nginx.conf


worker_processes  1;

events {

    worker_connections  1024;

    use epoll;

}

http {

    include       mime.types;

    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    include conf.d/*.conf;

    server {

        listen       80;

        listen       443 ssl;

        server_name  ines.tw;

        ssl_certificate /etc/nginx/ssl/fullchain.pem;

        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        location / {

            root   /srv/www/htdocs/;

            index  index.html index.htm;

        }

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {

            root   /srv/www/htdocs/;

        }

    }

    

    include vhosts.d/*.conf;

}



  • Add the highlighted lines above (listen 443 ssl, ssl_certificate, and ssl_certificate_key)


Reload the nginx service


# systemctl   reload   nginx


Since this runs over HTTPS, remember to open port 443


In the VM's network settings in Azure, click "Add inbound port rule" and allow connections on port 443


Open a browser and go to https://YOUR_DOMAIN

You should now see the lovely padlock icon



One more step forward :)



Reference:





Free Software Foundation newsletter roundup – September 2020

A newsletter of free-software-related news published by the Free Software Foundation.

The FSF newsletter is here!

The Free Software Foundation (FSF) is an organization founded in October 1985 by Richard Stallman and other free software enthusiasts with the purpose of spreading this philosophy.

The Free Software Foundation (FSF) is dedicated to removing restrictions on the copying, redistribution, understanding, and modification of computer programs. To that end, it promotes the development and use of free software in all areas of computing, and in particular helps develop the GNU operating system.

Besides spreading the free software philosophy and creating licenses that allow works to be shared while preserving authors' rights, they also run various awareness campaigns to defend users' rights against those who want to impose abusive restrictions in technology.

Every month they publish a newsletter (the Supporter) with news about free software, their campaigns, and events. It is a way to publicize these projects so that people can learn the facts, form their own opinion, and take sides if they believe the cause is just!

You can find all the published issues at this link: http://www.fsf.org/free-software-supporter/free-software-supporter

After many years collaborating on the Spanish translation of the newsletter, I decided at the start of 2020 to take a break from that task.

But behind it there is a small group of people who keep making the Spanish edition of the FSF newsletter possible.

Would you like to help with the translation? Read the following link:

Here is an excerpt of some of the news the FSF highlighted this September 2020.

The University of Costumed Heroes: A video from the FSF

From August 7

The University of Costumed Heroes is an animated video that tells the story of a group of heroes falling into the clutches of the powers of proprietary software in education. The university board acquires cutting-edge remote learning software that makes it easy for them to keep operating online, but (spoiler alert!) it may sow the seeds of their downfall.

The video highlights the importance of resisting proprietary videoconferencing programs in remote education. In addition to sharing the video, we encourage you to sign and share our petition asking schools to protect students' freedom by communicating and teaching about free software.

Geoffrey Knauth elected Free Software Foundation president; Odile Bénassy joins the board

From August 5

Odile Bénassy, a longtime free software activist and developer, known especially for her work promoting free software in France, was elected to the FSF board of directors. Geoffrey Knauth, who has served on the FSF board for nearly 20 years, was elected president.

Statement from the FSF's new president, Geoffrey Knauth

From August 5

The FSF board elected me at this moment as its servant leader to help the community focus on our shared dedication to protecting and growing software that respects our freedoms. It is important to protect and increase the diversity of the community's members. It is through our diversity of backgrounds and opinions that we have creativity, perspective, intellectual strength, and rigor.

Meet the star witness: your smart speaker

From August 23, by Sidney Fussell

Not controlling the software running on your device gives new meaning to the phrase "anything you say can be used against you." In this article, we learn about the frightening rise in the use of these devices in police investigations. Remember: if you don't control your devices, they can control you, and the smartest thing you can do with "smart" devices is to avoid them!

The age of mass surveillance will not last forever

From July 20, by Edward Snowden

Seven years ago, Snowden met with journalists in Hong Kong to reveal classified documents showing how the security organs of several powerful states had conspired to build a global system of mass surveillance. In this article, he discusses some of the implications of that scandal, but also how he finds "more grounds for hope than for despair," seeing how the people of Hong Kong are using ingenious technological workarounds to fight back. Free software is fundamental to the systems he describes, which "hold our secrets, and perhaps our souls; systems created in a world in which owning the means to live a private life seems a crime," and only free software can serve the purposes of people fighting collectively for freedom, instead of letting corporate oligarchs control what we see, what we say, and what we think.

As you may recall, Snowden has been a good friend of the FSF, and he rallied FSF members in his opening keynote at the LibrePlanet 2016 conference!


These are just a few of the news items collected this month, but there are many more interesting ones! If you want to read them all (once they are translated), visit this link:

And all the 2020 issues of the Supporter newsletter here:

Support freedom

—————————————————————

openSUSE Projects Support Hacktoberfest Efforts

The openSUSE community is ready for Hacktoberfest, which is run by DigitalOcean and DEV and encourages people to make their first contributions to open source projects. The openSUSE + LibreOffice Virtual Conference will take place during Hacktoberfest and is listed as an event on the website. The conference will have more than 100 talks about open source projects, ranging from documentation to the technologies within each project.

Some resources available to those who are interested in getting started with openSUSE Projects during Hacktoberfest are:

Open Build Service

The Open Build Service is a generic system to build and distribute binary packages from sources in an automatic, consistent and reproducible way. Contributors can release packages as well as updates, add-ons, appliances and entire distributions for a wide range of operating systems and hardware architectures.

Known by many open source developers simply as OBS, the Open Build Service is a great way to build packages for your home repository to see whether what you have built or changed works. People can always download the latest version of your software as binary packages for their operating system, and packages can be built for different operating systems. Once users are connected to your repository, you can serve them maintenance or security updates and even add-ons for your software.

Some specific items to look at for OBS are the open-build-service-api and the open-build-service-connector.

The OBS community can be found in IRC on the channel #opensuse-buildservice. Or you can join the mailing list opensuse-buildservice@opensuse.org.

Repository Mirroring Tool

This tool allows you to mirror RPM repositories in your own private network. Organization (mirroring) credentials are required to mirror SUSE repositories. There is end-user documentation for RMT. The man page for rmt-cli is located in the file MANUAL.md. Anyone who would like to contribute to RMT can find out how in the contribution guide.

openQA

openQA is an automated test tool for operating systems. It is used by multiple projects to test for quality assurance of software changes and updates. There are multiple resources to get started with to learn how the software works.

Documentation can be found at https://open.qa/docs/ and https://open.qa/api/testapi/. There are tutorial videos on YouTube that go in depth on how to use the software. Quickstarts for setting up your local instance are available at https://open.qa/docs/#bootstrapping and https://youtu.be/lo929gSEtms (livestream recording).

Repositories can be found at:

To reach the community, go to the #opensuse-factory channel on freenode.net IRC.

Uyuni Project

Named after the largest salt flat in the world, Uyuni is a configuration and infrastructure management tool that saves people time and headaches when managing updates of tens, hundreds or even thousands of machines.

There have been many talks about Uyuni, and many of them can be found on the project's YouTube page. Presentation slides can be found on SlideShare. There is also information about getting started for developers and translators at https://github.com/uyuni-project/uyuni/wiki and https://github.com/uyuni-project/uyuni/wiki/Translating-Uyuni-to-your-language.

A quick setup guide is available at https://github.com/uyuni-project/sumaform, and people can pick up Hacktoberfest issues.

Hacktoberfest participants can communicate with members of the Uyuni Project through Gitter chat or the mailing lists.

SAFE VOTE: Minimizing the spread of COVID-19 in elections

There is a lot of controversy about poll workers in the elections because of COVID-19. But from my personal point of view, we cannot ignore the fact that the electronic voting machine is touched by many different people, and the problem that comes with it. Controlling the sanitization of the voting machines is a complex problem, because a human error can be very costly.

Enfim, pensando neste cenário para descontrair um pouco, imaginei utilizar um pouco da tecnologia de visão computacional para evitar o contato com equipamento. Pois podemos atualmente detectar as mão e os dedos assim evitando contato. Uma interface deve ser intuitiva para facilitar a votação, mas a prova de conceito desenvolvida em 3 horas demonstra a viabilidade e talvez solução para vários contextos problemáticos. Veja a seguir um vídeo demonstrativo.