Build multi-architecture container images using argo workflow
Note well: this blog post is part of a series; check out the previous episode about running containerized buildah on top of Kubernetes.
Quick recap
I have a small Kubernetes cluster running at home that is made of ARM64 and x86_64 nodes. I want to build multi-architecture images so that I can run them everywhere on the cluster, regardless of the node architecture. My plan is to leverage the same cluster to build these container images. That leads to an “Inception-style” scenario: building container images from within a container itself.
To achieve that I decided to rely on buildah to build the container images. I’ve shown how to run buildah in a containerized fashion, without using a privileged container and with a tailor-made AppArmor profile to secure it.
The previous blog post also showed the definition of Kubernetes PODs that would build the actual images.
Today’s goals
What I’m going to show today is how to automate the whole building process.
Given the references to the Git repository that provides a container image definition, I want to automate these steps:
- Build the container image on an ARM64 node, push the image to a container registry.
- Build the container image on an x86_64 node, push the image to a container registry.
- Create a multi-architecture container image manifest, push it to a container registry.
Steps #1 and #2 can be done in parallel, while step #3 needs to wait for the previous ones to complete.
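Expressed as plain shell commands, the automation boils down to something like this (a rough sketch; the image name is a placeholder, and each build runs on a node of the matching architecture):

buildah bud -t registry.example.lan/app:1.0-arm64 .     # step 1, on an ARM64 node
buildah push registry.example.lan/app:1.0-arm64
buildah bud -t registry.example.lan/app:1.0-amd64 .     # step 2, on an x86_64 node
buildah push registry.example.lan/app:1.0-amd64
# step 3, once both images are in the registry:
buildah manifest create registry.example.lan/app:1.0
buildah manifest add registry.example.lan/app:1.0 registry.example.lan/app:1.0-arm64
buildah manifest add registry.example.lan/app:1.0 registry.example.lan/app:1.0-amd64
buildah manifest push registry.example.lan/app:1.0 docker://registry.example.lan/app:1.0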
This kind of automation can be done using some pipeline solution.
Kubernetes native pipeline solutions
There are many Continuous Integration and Continuous Delivery solutions available for Kubernetes. If you love to seek enlightenment by staring at beautiful logos, check out this portion of the CNCF landscape dedicated to CI and CD solutions. 🤯
After some research I came up with two potential candidates: Argo and Tekton.
Both are valid projects with active communities. However, I decided to settle on Argo. The main reason that led to this decision was the lack of ARM64 support in Tekton.
Interestingly enough, both Tekton and kaniko (which I discussed in the previous blog post of this series) use the same mechanism to build themselves, a mechanism that can produce only x86_64 container images and is not so easy to extend.
Argo is an umbrella of different projects, each one of them tackling specific problems like:
- Argo Workflows: a container-native workflow engine
- Argo CD: declarative GitOps continuous delivery
- Argo Rollouts: progressive delivery with blue-green and canary strategies
- Argo Events: event-based dependency management
The projects above are just the mature ones; many others can be found under the Argo project labs GitHub organization. These projects are not yet considered production ready, but they are super interesting.
My favourite ones are:
The majority of these projects don’t have ARM64 container images yet, but work is being done, and it is significantly simpler than porting Tekton. Most important of all: the core projects I need have already been ported.
Creating pipelines using Argo Workflow
A pipeline can be created inside Argo by defining a Workflow resource.
Copying from the core concepts documentation page of Argo Workflow, these are the elements I’m going to use:
- Workflow: a Kubernetes resource defining the execution of one or more templates.
- Template: a step, steps or dag.
- Step: a single step of a workflow, typically runs a container based on inputs and captures the outputs.
- Steps: a list of steps.
- Directed Acyclic Graph (DAG): a set of steps (nodes) and the dependencies (edges) between them.
Spoiler alert: I’m going to create multiple Argo Templates, each one of them focusing on one specific part of the problem. Then I’ll use a DAG to make the dependencies between all these Templates explicit. Finally, I’ll define an Argo Workflow to “wrap” all these objects.
I could show you the final result right away, but you would probably be overwhelmed by it. Instead, I’ll go step-by-step, just as I did myself: I’ll start with a small subset of the problem and then keep building on top of it.
Porting our build POD to an Argo Workflow
By the end of the previous blog post, I was able to build a container image by using the following Kubernetes POD definition:
apiVersion: v1
kind: Pod
metadata:
  name: builder
  annotations:
    container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
spec:
  nodeSelector:
    kubernetes.io/arch: "amd64"
  containers:
    - name: main
      image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
      command: ["/bin/sh"]
      args: ["-c", "cd code; cd $(readlink checkout); buildah bud -t guestbook ."]
      volumeMounts:
        - name: code
          mountPath: /code
      resources:
        limits:
          github.com/fuse: 1
  initContainers:
    - name: git-sync
      image: k8s.gcr.io/git-sync/git-sync:v3.1.7
      args: [
        "--one-time",
        "--depth", "1",
        "--dest", "checkout",
        "--repo", "https://github.com/flavio/guestbook-go.git",
        "--branch", "master"]
      volumeMounts:
        - name: code
          mountPath: /tmp/git
  volumes:
    - name: code
      emptyDir:
        medium: Memory

These are the key points of this POD:
- It uses an Init Container to retrieve the source code of the container image from a Git repository.
- A Kubernetes Volume is used to share the source code of the container image to be built between the Init Container and the main one.
- The Git repository details, the image name and other references are all hard-coded.
- The POD just builds the container image, there’s no push action at the end of it.
- The POD is forcefully scheduled on an x86_64 node; hence this will produce only x86_64 container images.
- The POD requires a fuse resource; this is needed to allow buildah to use the performant overlay graph driver.
- The POD uses a specific AppArmor profile, not the default one provided by the container engine.
Starting from something like Argo’s “Hello world Workflow”, we can transpose the POD defined above into something like this:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: simple-build-
spec:
  entrypoint: buildah
  templates:
    - name: buildah
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
      nodeSelector:
        kubernetes.io/arch: "amd64"
      container:
        image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
        command: ["/bin/sh"]
        args: ["-c", "cd code; cd $(readlink checkout); buildah bud -t guestbook ."]
        volumeMounts:
          - name: code
            mountPath: /code
        resources:
          limits:
            github.com/fuse: 1
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync/git-sync:v3.1.7
          args: [
            "--one-time",
            "--depth", "1",
            "--dest", "checkout",
            "--repo", "https://github.com/flavio/guestbook-go.git",
            "--branch", "master"]
          volumeMounts:
            - name: code
              mountPath: /tmp/git
      volumes:
        - name: code
          emptyDir:
            medium: Memory

As you can see the POD definition has been transformed into a Template object. The contents of the POD spec section have basically been copied and pasted under the Template.
The POD annotations have been moved straight under the template.metadata section.
I have to admit this was pretty confusing to me in the beginning, but everything became clear once I started to look at the field documentation of the Argo resources.
The workflow can be submitted using the argo CLI tool:
$ argo submit workflow-simple-build.yaml
Name:                simple-build-qk4t4
Namespace:           argo
ServiceAccount:      default
Status:              Pending
Created:             Wed Sep 30 15:45:20 +0200 (now)

This will also be visible from the Argo Workflow UI:

Refactoring the Argo Workflow
The previous Workflow definition can be cleaned up a bit, leading to the following YAML file:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: simple-build-
spec:
  entrypoint: buildah
  templates:
    - name: buildah
      inputs:
        parameters:
          - name: arch
          - name: repository
          - name: branch
          - name: image_name
          - name: image_tag
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
      nodeSelector:
        kubernetes.io/arch: "amd64"
      script:
        image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
        command: [bash]
        source: |
          set -xe
          cd /code/
          # needed to workaround protected_symlink - we can't just cd into /code/checkout
          cd $(readlink checkout)
          buildah bud -t {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}} .
          buildah push --cert-dir /certs {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}}
          echo Image built and pushed to remote registry
        volumeMounts:
          - name: code
            mountPath: /code
          - name: certs
            mountPath: /certs
            readOnly: true
        resources:
          limits:
            github.com/fuse: 1
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync/git-sync:v3.1.7
          args: [
            "--one-time",
            "--depth", "1",
            "--dest", "checkout",
            "--repo", "{{inputs.parameters.repository}}",
            "--branch", "{{inputs.parameters.branch}}"]
          volumeMounts:
            - name: code
              mountPath: /tmp/git
      volumes:
        - name: code
          emptyDir:
            medium: Memory
        - name: certs
          secret:
            secretName: registry-cert

Compared to the previous definition, this one doesn’t have any hard-coded values inside of it. The details of the Git repository, the image name, the container registry… all of that is now passed dynamically to the template by using the inputs.parameters map.
The main container has also been rewritten to use an Argo Workflow-specific field: script.source. This is really handy because it provides a nice way to write a bash script to be executed inside the container.
The source script has also been extended to perform a push operation at the end of the build process.
As you can see, the architecture of the image is appended to the image tag. This is a common pattern when building multi-architecture container images.
One final note about the push operation. The destination registry is secured using a self-signed certificate. Because of that, either the CA that signed the certificate or the registry’s certificate has to be provided to buildah. This can be done by using the --cert-dir flag and placing the certificates to be loaded under the specified path. Note well: the certificate files must have the .crt file extension, otherwise they won’t be picked up.
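For instance, something along these lines prepares the directory passed via --cert-dir (a sketch; the file names are made up):

# buildah only picks up certificates whose file name ends in .crt
mkdir -p /certs
cp registry-ca.pem /certs/registry-ca.crt   # with a .pem suffix it would be ignored
buildah push --cert-dir /certs registry-testing.svc.lan/guestbook-go:0.0.1-amd64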
I “loaded” the certificate into Kubernetes by using a Kubernetes secret like this one:
apiVersion: v1
kind: Secret
metadata:
  name: registry-cert
  namespace: argo
type: Opaque
data:
  ca.crt: `base64 -w 0 actualcert.crt`

As you can see the main container is now mounting the contents of the registry-cert Kubernetes Secret under /certs.
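By the way, instead of base64-encoding the certificate by hand, the same Secret could be created with kubectl; a minimal sketch, assuming the certificate lives in ./actualcert.crt:

kubectl --namespace argo create secret generic registry-cert \
  --from-file=ca.crt=./actualcert.crt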
This time, when submitting the workflow, we must specify its parameters:
$ argo submit workflow-simple-build-2.yaml \
    -p arch=amd64 \
    -p repository=https://github.com/flavio/guestbook-go.git \
    -p branch=master \
    -p image_name=registry-testing.svc.lan/guestbook-go \
    -p image_tag=0.0.1
Name:                simple-build-npqdw
Namespace:           argo
ServiceAccount:      default
Status:              Pending
Created:             Wed Sep 30 15:52:06 +0200 (now)
Parameters:
  arch: {1 0 amd64}
  repository: {1 0 https://github.com/flavio/guestbook-go.git}
  branch: {1 0 master}
  image_name: {1 0 registry-testing.svc.lan/guestbook-go}
  image_tag: {1 0 0.0.1}

Building on multiple architectures
The Workflow object defined so far is still hard-coded to be scheduled only
on x86_64 nodes (see the nodeSelector constraint).
I could create a new Workflow definition by copying the one shown before and then changing the nodeSelector constraint to reference the ARM64 architecture. However, this would violate the DRY principle.
Instead, I will abstract the Workflow definition by leveraging a feature of
Argo Workflow called
loops.
I will define a parameter for the target architecture and then I will iterate
over two possible values: amd64 and arm64.
This is the resulting Workflow definition:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: simple-build-
spec:
  entrypoint: build-images-arch-loop
  templates:
    - name: build-images-arch-loop
      inputs:
        parameters:
          - name: repository
          - name: branch
          - name: image_name
          - name: image_tag
      steps:
        - - name: build-image
            template: buildah
            arguments:
              parameters:
                - name: arch
                  value: "{{item.arch}}"
                - name: repository
                  value: "{{inputs.parameters.repository}}"
                - name: branch
                  value: "{{inputs.parameters.branch}}"
                - name: image_name
                  value: "{{inputs.parameters.image_name}}"
                - name: image_tag
                  value: "{{inputs.parameters.image_tag}}"
            withItems:
              - { arch: 'amd64' }
              - { arch: 'arm64' }
    - name: buildah
      inputs:
        parameters:
          - name: arch
          - name: repository
          - name: branch
          - name: image_name
          - name: image_tag
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
      nodeSelector:
        kubernetes.io/arch: "{{inputs.parameters.arch}}"
      script:
        image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
        command: [bash]
        source: |
          set -xe
          cd /code/
          # needed to workaround protected_symlink - we can't just cd into /code/checkout
          cd $(readlink checkout)
          buildah bud -t {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}} .
          buildah push --cert-dir /certs {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}}
          echo Image built and pushed to remote registry
        volumeMounts:
          - name: code
            mountPath: /code
          - name: certs
            mountPath: /certs
            readOnly: true
        resources:
          limits:
            github.com/fuse: 1
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync/git-sync:v3.1.7
          args: [
            "--one-time",
            "--depth", "1",
            "--dest", "checkout",
            "--repo", "{{inputs.parameters.repository}}",
            "--branch", "{{inputs.parameters.branch}}"]
          volumeMounts:
            - name: code
              mountPath: /tmp/git
      volumes:
        - name: code
          emptyDir:
            medium: Memory
        - name: certs
          secret:
            secretName: registry-cert

The workflow definition grew a bit. I’ve added a new template called build-images-arch-loop, which is now the entry point of the workflow. This template performs a loop over the [ { arch: 'amd64' }, { arch: 'arm64' } ] array, each time invoking the buildah template with slightly different input parameters. The only parameter that changes across the invocations is the arch one, which is used to define the nodeSelector constraint.
Executing this workflow results in two steps being executed at the same time: one building the image on a random x86_64 node, the other doing the same thing on a random ARM64 node.
This can be clearly seen from the Argo Workflow UI:

When the workflow execution is over, the registry will contain two different images:
- <image-name>:<image-tag>-amd64
- <image-name>:<image-tag>-arm64
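If you want to double-check that both pushes succeeded, a tool like skopeo can query the registry directly; a hedged example, reusing the parameters from the submission above:

skopeo inspect --cert-dir /certs docker://registry-testing.svc.lan/guestbook-go:0.0.1-amd64
skopeo inspect --cert-dir /certs docker://registry-testing.svc.lan/guestbook-go:0.0.1-arm64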
Now there’s just one last step to perform: create a multi-architecture container manifest referencing these two images.
Creating the image manifest
The Image manifest Version 2, Schema 2
specification defines a new type of image manifest called “Manifest list”
(application/vnd.docker.distribution.manifest.list.v2+json).
Quoting the official specification:
The manifest list is the “fat manifest” which points to specific image manifests for one or more platforms. Its use is optional, and relatively few images will use one of these manifests. A client will distinguish a manifest list from an image manifest based on the Content-Type returned in the HTTP response.
The creation of such a manifest is pretty easy and it can be done with docker, podman and buildah in a similar way.
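For instance, with docker the equivalent would look roughly like this (a sketch; docker manifest requires the experimental CLI mode to be enabled):

docker manifest create registry.svc.lan/guestbook-go:v0.1.0 \
  registry.svc.lan/guestbook-go:v0.1.0-amd64 \
  registry.svc.lan/guestbook-go:v0.1.0-arm64
docker manifest push registry.svc.lan/guestbook-go:v0.1.0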
I will still use buildah to create the manifest and push it to the registry where all the images are stored.
This is the Argo Template that takes care of that:
- name: create-manifest
  inputs:
    parameters:
      - name: image_name
      - name: image_tag
      - name: architectures
  metadata:
    annotations:
      container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
  volumes:
    - name: certs
      secret:
        secretName: registry-cert
  script:
    image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
    command: [bash]
    source: |
      set -xe
      image_name="{{inputs.parameters.image_name}}"
      image_tag="{{inputs.parameters.image_tag}}"
      architectures="{{inputs.parameters.architectures}}"
      target="${image_name}:${image_tag}"
      # split the comma-separated architectures string into a bash array
      architectures_list=($(echo $architectures | tr "," "\n"))
      buildah manifest create ${target}
      # pull each architecture-specific image and add it to the manifest
      for arch in "${architectures_list[@]}"
      do
        arch_image="${image_name}:${image_tag}-${arch}"
        buildah pull --cert-dir /certs ${arch_image}
        buildah manifest add ${target} ${arch_image}
      done
      buildah manifest push --cert-dir /certs ${target} docker://${target}
      echo Manifest creation done
    volumeMounts:
      - name: certs
        mountPath: /certs
        readOnly: true
    resources:
      limits:
        github.com/fuse: 1

The template has an input parameter called architectures: a string made of the architecture names joined by commas, e.g. "amd64,arm64".
The script creates a manifest named after the image and then, iterating over the architectures, adds the architecture-specific images to it. Once this is done, the manifest is pushed to the container registry.
To make a simple example, assume the following scenario:
- We are building the guestbook-go application with release v0.1.0
- We want to build the image for the x86_64 and the ARM64 architectures
- We want to push the images to the registry.svc.lan registry
The Argo Template that creates the manifest will pull the following images:
- registry.svc.lan/guestbook-go:v0.1.0-amd64: the x86_64 image
- registry.svc.lan/guestbook-go:v0.1.0-arm64: the ARM64 image
Finally, the Template will create and push a manifest named registry.svc.lan/guestbook-go:v0.1.0.
This image reference will always return the right container image to the node
requesting it.
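One way to verify that is to fetch the raw manifest list and look at the platforms it references; a hedged check using skopeo and jq:

# each entry of the manifest list reports its architecture/os pair
skopeo inspect --raw docker://registry.svc.lan/guestbook-go:v0.1.0 | jq '.manifests[].platform'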
Adding the container image to the manifest is done with the buildah manifest add command. This command doesn’t actually need to have the container image available locally; it’s enough to reach out to the registry hosting it and obtain the manifest digest.
In our case the images are stored on a registry secured with a custom certificate. Unfortunately, the manifest add command was lacking some flags (like --cert-dir); because of that I had to introduce the workaround of pre-pulling all the images referenced by the manifest. This has the side effect of wasting some time, bandwidth and disk space.
I’ve submitted patches to both buildah and podman to enrich their manifest add commands; both pull requests have been merged into the master branches. The next release of buildah will ship with my patch, and the manifest creation Template will become simpler and faster.
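Once a buildah release ships with those patches, the pre-pull loop should become unnecessary. A sketch of how I expect the script to end up (assuming the new --cert-dir flag on manifest add behaves like the one on push):

buildah manifest create "${target}"
for arch in amd64 arm64; do
  # point manifest add straight at the registry; no local pull needed
  buildah manifest add --cert-dir /certs "${target}" "docker://${image_name}:${image_tag}-${arch}"
done
buildah manifest push --cert-dir /certs "${target}" "docker://${target}"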
Making dependencies between Argo templates explicit
Argo allows you to define a workflow sequence with clear dependencies between each step. This is done by defining a DAG.
Our workflow will be made of one Argo Template of type DAG, which will have two tasks:
- Build the multi-architecture images. This is done with the Argo Workflow loop shown above.
- Create the manifest. This task depends on the successful completion of the previous one.
This is the Template definition:
- name: full-process
  dag:
    tasks:
      - name: build-images
        template: build-images-arch-loop
        arguments:
          parameters:
            - name: repository
              value: "{{workflow.parameters.repository}}"
            - name: branch
              value: "{{workflow.parameters.branch}}"
            - name: image_name
              value: "{{workflow.parameters.image_name}}"
            - name: image_tag
              value: "{{workflow.parameters.image_tag}}"
      - name: create-multi-arch-manifest
        dependencies: [build-images]
        template: create-manifest
        arguments:
          parameters:
            - name: image_name
              value: "{{workflow.parameters.image_name}}"
            - name: image_tag
              value: "{{workflow.parameters.image_tag}}"
            - name: architectures
              value: "{{workflow.parameters.architectures_string}}"

As you can see the Template takes the usual series of parameters we’ve already defined, and forwards them to the tasks.
This is the full definition of our Argo Workflow. Hold on… this is really long 🙀
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-multi-arch-image-
spec:
  ttlStrategy:
    secondsAfterCompletion: 60
  entrypoint: full-process
  arguments:
    parameters:
      - name: repository
        value: https://github.com/flavio/guestbook-go.git
      - name: branch
        value: master
      - name: image_name
        value: registry-testing.svc.lan/guestbook
      - name: image_tag
        value: 0.0.1
      - name: architectures_string
        value: "arm64,amd64"
  templates:
    - name: full-process
      dag:
        tasks:
          - name: build-images
            template: build-images-arch-loop
            arguments:
              parameters:
                - name: repository
                  value: "{{workflow.parameters.repository}}"
                - name: branch
                  value: "{{workflow.parameters.branch}}"
                - name: image_name
                  value: "{{workflow.parameters.image_name}}"
                - name: image_tag
                  value: "{{workflow.parameters.image_tag}}"
          - name: create-multi-arch-manifest
            dependencies: [build-images]
            template: create-manifest
            arguments:
              parameters:
                - name: image_name
                  value: "{{workflow.parameters.image_name}}"
                - name: image_tag
                  value: "{{workflow.parameters.image_tag}}"
                - name: architectures
                  value: "{{workflow.parameters.architectures_string}}"
    - name: build-images-arch-loop
      inputs:
        parameters:
          - name: repository
          - name: branch
          - name: image_name
          - name: image_tag
      steps:
        - - name: build-image
            template: buildah
            arguments:
              parameters:
                - name: arch
                  value: "{{item.arch}}"
                - name: repository
                  value: "{{inputs.parameters.repository}}"
                - name: branch
                  value: "{{inputs.parameters.branch}}"
                - name: image_name
                  value: "{{inputs.parameters.image_name}}"
                - name: image_tag
                  value: "{{inputs.parameters.image_tag}}"
            withItems:
              - { arch: 'amd64' }
              - { arch: 'arm64' }
    - name: buildah
      inputs:
        parameters:
          - name: arch
          - name: repository
          - name: branch
          - name: image_name
          - name: image_tag
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
      nodeSelector:
        kubernetes.io/arch: "{{inputs.parameters.arch}}"
      volumes:
        - name: code
          emptyDir:
            medium: Memory
        - name: certs
          secret:
            secretName: registry-cert
      script:
        image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
        command: [bash]
        source: |
          set -xe
          cd /code/
          # needed to workaround protected_symlink - we can't just cd into /code/checkout
          cd $(readlink checkout)
          buildah bud -t {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}} .
          buildah push --cert-dir /certs {{inputs.parameters.image_name}}:{{inputs.parameters.image_tag}}-{{inputs.parameters.arch}}
          echo Image built and pushed to remote registry
        volumeMounts:
          - name: code
            mountPath: /code
          - name: certs
            mountPath: /certs
            readOnly: true
        resources:
          limits:
            github.com/fuse: 1
      initContainers:
        - name: git-sync
          image: k8s.gcr.io/git-sync/git-sync:v3.1.7
          args: [
            "--one-time",
            "--depth", "1",
            "--dest", "checkout",
            "--repo", "{{inputs.parameters.repository}}",
            "--branch", "{{inputs.parameters.branch}}"]
          volumeMounts:
            - name: code
              mountPath: /tmp/git
    - name: create-manifest
      inputs:
        parameters:
          - name: image_name
          - name: image_tag
          - name: architectures
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: localhost/containerized_buildah
      volumes:
        - name: certs
          secret:
            secretName: registry-cert
      script:
        image: registry.opensuse.org/home/flavio_castelli/containers/containers/buildahimage:latest
        command: [bash]
        source: |
          set -xe
          image_name="{{inputs.parameters.image_name}}"
          image_tag="{{inputs.parameters.image_tag}}"
          architectures="{{inputs.parameters.architectures}}"
          target="${image_name}:${image_tag}"
          # split the comma-separated architectures string into a bash array
          architectures_list=($(echo $architectures | tr "," "\n"))
          buildah manifest create ${target}
          # pull each architecture-specific image and add it to the manifest
          for arch in "${architectures_list[@]}"
          do
            arch_image="${image_name}:${image_tag}-${arch}"
            buildah pull --cert-dir /certs ${arch_image}
            buildah manifest add ${target} ${arch_image}
          done
          buildah manifest push --cert-dir /certs ${target} docker://${target}
          echo Manifest creation done
        volumeMounts:
          - name: certs
            mountPath: /certs
            readOnly: true
        resources:
          limits:
            github.com/fuse: 1

That’s how life goes with Kubernetes: sometimes there’s just a lot of YAML…

Now we can submit the workflow to Argo:
$ argo submit build-pipeline-final.yml
Name:                build-multi-arch-image-wndlr
Namespace:           argo
ServiceAccount:      default
Status:              Pending
Created:             Thu Oct 01 16:22:46 +0200 (now)
Parameters:
  repository: {1 0 https://github.com/flavio/guestbook-go.git}
  branch: {1 0 master}
  image_name: {1 0 registry-testing.svc.lan/guestbook}
  image_tag: {1 0 0.0.1}
  architectures_string: {1 0 arm64,amd64}

The visual representation of the workflow is pretty nice:

As you might have noticed, I didn’t provide any parameters to argo submit; the Argo Workflow now has default values for all the input parameters.
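Individual parameters can still be overridden at submission time; for example, to build a different tag:

$ argo submit build-pipeline-final.yml -p image_tag=0.0.2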
Garbage collector
Something worth noting: Argo Workflow leaves behind all the containers it creates. This is good for triaging failures, but I don’t want to clutter my cluster with all these resources.
Argo provides cost optimization parameters to implement cleanup strategies. The one I’ve used above is the Workflow TTL Strategy.
You can see these lines at the top of the full Workflow definition:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-multi-arch-image-
spec:
  ttlStrategy:
    secondsAfterCompletion: 60

This triggers an automatic cleanup of all the PODs spawned by the Workflow 60 seconds after its completion, be it successful or not.
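Roughly one minute after a run finishes, both the Workflow object and its PODs should be gone; one way to check (assuming everything runs in the argo namespace):

kubectl --namespace argo get workflows,pods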
Summary
Today we have seen how to create a pipeline that builds container images for multiple architectures on top of an existing Kubernetes cluster.
Argo Workflow proved to be a good solution for this kind of automation. There’s quite some YAML involved, but I highly doubt other projects would have spared us from that.
What can we do next? Well, to me the answer is pretty clear. The definition of the container image is stored inside of a Git repository; hence I want to connect my Argo Workflow to the events happening inside of the Git repository.
Stay tuned for more updates! In the meantime feedback is always welcome.
Librsvg is accepting interns for Outreachy's December 2020 round
There are two projects in librsvg available for Outreachy applicants in the December 2020 / March 2021 round:
- Revamp the text engine: Do you know about international text layout? Can you read a right-to-left language, or do you write in a language that requires complex shaping? Would you like to implement the SVG 2 text specification in a pleasant Rust code base? This project requires someone who can write Rust comfortably; it will require reading and refactoring some existing code. You don't need to be an expert in exotic lifetimes and trait bounds and such; the code doesn't use them.
- Implement SVG2/CSS3 features: Are you excited by all the SVG2 features in Inkscape, and would like to add support for them in librsvg? Would you like to do small changes to many parts of the code to implement small features, one at a time? Do you like test-driven development? This project requires someone who can write Rust code at a medium level; you'll learn a lot by cutting&pasting from existing code and refactoring things to implement SVG2 features.
Important: Outreachy's December 2020 / March 2021 round is available only for students in the Southern hemisphere. People in the Northern hemisphere can wait until the 2021 mid-year round.
You can see GNOME's projects in Outreachy for this round. The deadline for initial contributions and project applications is October 31, 2020 at 16:00 UTC.
VisualBoy Advance | Gameboy Emulation on Linux
Sealed Java State Machines
A few years back I posted about how to implement state machines that only permit valid transitions at compile time in Java.
This used interfaces instead of enums, which had a big drawback—you couldn’t guarantee that you know all the states involved. Someone could add another state elsewhere in your codebase by implementing the interface.
Java 15 brings a preview feature of sealed classes. Sealed classes enable us to solve this downside. Now our interface based state machines can not only prevent invalid transitions but also be enumerable like enums.
If you’re using JDK 15 with preview features enabled you can try out the code. This is how it looks to define a state machine with interfaces.
sealed interface TrafficLight
    extends State<TrafficLight>
    permits Green, SolidAmber, FlashingAmber, Red {}
static final class Green implements TrafficLight, TransitionTo<SolidAmber> {}
static final class SolidAmber implements TrafficLight, TransitionTo<Red> {}
static final class Red implements TrafficLight, TransitionTo<FlashingAmber> {}
static final class FlashingAmber implements TrafficLight, TransitionTo<Green> {}
The new parts are “sealed” and “permits”. Now it becomes a compile-time failure to define a new implementation of TrafficLight, in addition to the existing behaviour where it’s a compile-time failure to perform a transition that traffic lights do not allow.
n.b. you can also skip the compile time checked version and still use the type definitions to runtime check the transitions.
Multiple transitions are possible from a state too:
static final class Pending
    implements OrderStatus, BiTransitionTo<CheckingOut, Cancelled> {}
Thanks to sealed classes we can also now do enum style enumeration and lookups on our interface based state machines.
sealed interface OrderStatus
    extends State<OrderStatus>
    permits Pending, CheckingOut, Purchased, Shipped, Cancelled, Failed, Refunded {}
@Test public void enumerable() {
assertArrayEquals(
array(Pending.class, CheckingOut.class, Purchased.class, Shipped.class, Cancelled.class, Failed.class, Refunded.class),
State.values(OrderStatus.class)
);
assertEquals(0, new Pending().ordinal());
assertEquals(3, new Shipped().ordinal());
assertEquals(Purchased.class, State.valueOf(OrderStatus.class, "Purchased"));
assertEquals(Cancelled.class, State.valueOf(OrderStatus.class, "Cancelled"));
}
These are possible because JEP 360 provides a reflection API with which one can enumerate the permitted subclasses of an interface. (Side note: the JEP says getPermittedSubclasses() but the implementation seems to use permittedSubclasses().)
We can use this to add the above convenience methods to our State interface to allow the values(), ordinal(), and valueOf() lookups.
static <T extends State<T>> List<Class<? extends T>> valuesList(Class<T> stateMachineType) {
    assertSealed(stateMachineType);
    // enumerate the permitted subclasses of the sealed interface
    return Stream.of(stateMachineType.permittedSubclasses())
        .map(State::classFromDesc)
        .collect(toList());
}
static <T extends State<T>> Class<? extends T> valueOf(Class<T> stateMachineType, String name) {
    assertSealed(stateMachineType);
    // look up a state by its simple class name, enum-style
    return valuesList(stateMachineType)
        .stream()
        .filter(c -> Objects.equals(c.getSimpleName(), name))
        .findFirst()
        .orElseThrow(IllegalArgumentException::new);
}
static <T extends State<T>, U extends T> int ordinal(Class<T> stateMachineType, Class<U> instanceType) {
    return valuesList(stateMachineType).indexOf(instanceType);
}
There are more details on how the transition checking works and more examples of where this might be useful in the original post. Code is on github.
The post Sealed Java State Machines appeared first on Benji's Blog.
openSUSE Tumbleweed – Review of the week 2020/40
Dear Tumbleweed users and hackers,
Week 40 marked the beginning of autumn – and at least where I am located, the weather seems to agree. Days are getting shorter, leaving more time to sit in front of the computer screen. What can we all do together to move openSUSE Tumbleweed forward? A lot, as it seems. During the last week, 4 snapshots have been published (0925, 0928, 0929 and 0930). Some larger, some smaller, some were tested but then discarded by openQA – all in all, an average week.
The changes shipped with those 4 snapshots included:
- GNOME 3.36.6
- dracut 50+suse.226 & 50+suse.227; version +suse.226 has shown some very negative side effects, with segfaults while creating the initrd. A fix was thus made available as quickly as possible in the TW update channel too
- Samba 4.13.0
- Tracker 3, parallel installable with tracker 2. Preparations for GNOME 3.38
Many changes are still being prepared in the staging areas:
- Linux kernel 5.8.12
- Mesa 20.2
- Mozilla Thunderbird 78.3.1
- Mozilla Firefox 81.0
- openssl 1.1.1h (one build failure left: neon, gh#notroj/neon#38)
- KDE Plasma 5.20 (currently beta being tested)
- openssh packaging layout change: ‘openssh’ will be a meta-package, pulling in openssh-server and openssh-clients. The first snapshot with this change was discarded: we have seen the service transparently being disabled (boo#1177039)
- glibc 2.32 – one more build failure, installation images (boo#1176972)
- gettext 0.21
- bison 3.7.1
- SELinux 3.1
- binutils 2.35
- openssl 3.0
Digest of YaST Development Sprint 109
For the third sprint in a row, the YaST Team has been focusing on enhancing both AutoYaST and the management of storage devices, together with some improvements in our development infrastructure. Let’s take a quick glance at some of the results.
- New YaST test client to check AutoYaST dynamic profiles, including support for pre-scripts that modify the profile, ERB, rules and classes.
- Improved detection of which YaST package is needed to process each section of the profile, relying on RPM’s supplement information instead of the old method based on desktop files.
- First steps to annotate the documentation of AutoYaST with information about when each profile element was introduced or deprecated.
- Final design of the new Partitioner user interface. The adopted solution is described in the corresponding section of our design document and is already implemented to a large extent, so we are confident we can release a revamped Partitioner during the next sprint.
- Improved automatic submission of translations to openSUSE Leap and SUSE Linux Enterprise, since only the Tumbleweed process was fully automated so far.
As we usually remind our readers, these blog posts only show a very small part of all the work, improvements and bug fixes we put into YaST on every sprint. So don’t forget to keep your systems updated and to stay tuned to this blog and all other openSUSE channels for more information!
Collabora is Diamond Sponsor for openSUSE + LibreOffice Conference 2020
The joint openSUSE + LibreOffice Conference 2020 will run from October 15 – 17, and Collabora has joined as a Diamond Sponsor.
Collabora is a major contributor to the LibreOffice project: 37% of commits to the LibreOffice source code in the last two years were made by the company.
In addition, Collabora has worked on LibreOffice Online and mobile applications, and offers various products and services around LibreOffice such as Collabora Online, Collabora Office and CODE.
We’re grateful for the support, and look forward to the conference. Register now at https://events.opensuse.org/conferences/oSLO/register/new and take part!
DAPS in a Container
DAPS is openSUSE’s “DocBook Authoring and Publishing Suite”, which is used to build documentation for SUSE and openSUSE. It requires a lot of dependencies when installed, and for that reason alone it’s better to run it in a container. This is my image and how I use it.
docker run -v ~/myproject/:/home/user jsevans/daps:latest daps -d DC-project epub
Command Breakdown:
docker run – Run the command in the container:
-v ~/myproject/:/home/user – Maps a local directory called ~/myproject to a directory in the container called /home/user. /home/user is the default directory that is used by the daps command, so it is best to map this directory rather than needing any extra command line components.
jsevans/daps:latest – This is the image that I’ve created. It is based on openSUSE Tumbleweed, but it is stable enough for this use. However, it is a large image (~1.2 GB) due to the number of dependencies.
daps -d DC-project epub – This is the actual command-line argument for creating an EPUB ebook using DAPS. I use AsciiDoc as my markup language since I don’t really want to learn DocBook.
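Other output formats work the same way; hedged examples, assuming the same project layout and DC file:

docker run -v ~/myproject/:/home/user jsevans/daps:latest daps -d DC-project pdf
docker run -v ~/myproject/:/home/user jsevans/daps:latest daps -d DC-project html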
My Dockerfile:
FROM opensuse/tumbleweed
MAINTAINER Jason Evans <jsevans@opensuse.com>
RUN zypper refresh
RUN zypper --non-interactive in daps git
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
&& chown -R user $HOME
WORKDIR $HOME
USER user
CMD [ "/usr/bin/daps" ]
SLES 11 - upgrade Suse Linux Enterprise Server 11 SP1 to SP4
As I mentioned previously, SLES 11 is obsolete and not supported anymore. Even most of the official tutorials have been wiped from the SUSE website.
I will describe how I updated my SLES 11 to SP4. I don’t have SP1, but upgrading from it should work the same way as the SP2-to-SP3 and SP3-to-SP4 upgrades shown below.
So this is my machine
VM-b0x:convey69:/convey69> uname -a
Linux VM-b0x 3.0.42-0.7-default #1 SMP Tue Oct 9 11:58:45 UTC 2012 (a8dc443) x86_64 x86_64 x86_64 GNU/Linux
VM-b0x:convey69:/convey69> cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
Why update? Because it is better to get recent patches, fix OS vulnerabilities and update the kernel (to anything as recent as we can get! So please do not ask me why we need to update).
On the first run, you will hit a problem like this:
VM-b0x:convey69:/convey69> sudo zypper ref -s
root's password:
Refreshing service 'nu_novell_com'.
Adding repository 'SLE11-SP4-Debuginfo-Updates' [done]
Adding repository 'SLE11-Public-Cloud-Module' [done]
Adding repository 'SLES11-SP4-Pool' [done]
Adding repository 'SLES11-SP4-Updates' [done]
Adding repository 'SLE11-SP4-Debuginfo-Pool' [done]
Refreshing service 'novell'.
Unexpected exception.
Parse error: repoindex.xml[1] Extra content at the end of the document
Please file a bug report about this.
See http://en.opensuse.org/Zypper/Troubleshooting for instructions.
To fix this, edit /etc/zypp/services.d/service.service and change enabled=1 to enabled=0:
VM-b0x:convey69:/convey69> cat /etc/zypp/services.d/service.service
[service]
name=novell
enabled=0
autorefresh=1
url = http://nu.novell.com/
type = ris
After fixing the previous problem, you can refresh the repositories without zypper complaining (run as root):
VM-b0x:convey69:/convey69> sudo zypper ref -s
Refreshing service 'nu_novell_com'.
All services have been refreshed.
Retrieving repository 'SLES11-Extras' metadata [done]
Building repository 'SLES11-Extras' cache [done]
Repository 'SLES11-Pool' is up to date.
Retrieving repository 'SLES11-SP1-Pool' metadata [done]
Building repository 'SLES11-SP1-Pool' cache [done]
Retrieving repository 'SLES11-SP1-Updates' metadata [done]
Building repository 'SLES11-SP1-Updates' cache [done]
Repository 'SLES11-SP2-Core' is up to date.
Repository 'SLES11-SP2-Extension-Store' is up to date.
Retrieving repository 'SLES11-SP2-Updates' metadata [done]
Building repository 'SLES11-SP2-Updates' cache [done]
Retrieving repository 'SLES11-Updates' metadata [done]
Building repository 'SLES11-Updates' cache [done]
All repositories have been refreshed.
Patching SLES 11 SP2 to SP3
For information and guidelines I previously referred to a page on the SUSE site, but it is not available anymore (the link is dead now, even on the Wayback Machine!). No biggie… just follow my steps below.
Run an Online Update
Make sure the currently installed version has the latest patches installed. Run an Online Update prior to the Online Migration. When using a graphical interface, start the YaST Online Update or the updater applet. On the command line, run the following commands (the last command needs to be run twice):
# zypper ref -s
# zypper update -t patch
# zypper update -t patch
Reboot the system if needed.
Get a list of the available migration products by running the following command:
VM-b0x:convey69:/convey69> sudo zypper se -t product | grep -h -- "-migration" | cut -d'|' -f2
root's password:
SUSE_SLES-SP1-migration
SUSE_SLES-SP2-migration
SUSE_SLES-SP3-migration
Install the migration products retrieved in the previous step with the command zypper in -t product <LIST_OF_PRODUCTS>
VM-b0x:convey69:/convey69> sudo zypper in -t product SUSE_SLES-SP3-migration
Refreshing service 'nu_novell_com'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following NEW package is going to be installed:
SUSE_SLES-SP3-migration
The following NEW product is going to be installed:
SUSE_SLES Service Pack 3 Migration Product
1 new package to install.
Overall download size: 4.0 KiB. After the operation, additional 3.0 KiB will be used.
Continue? [y/n/?] (y):
Register the products installed in the previous step in order to get the respective update channels:
VM-b0x:convey69:/convey69> sudo suse_register -d 2 -L /root/.suse_register.log
Execute command: /usr/bin/zypper --non-interactive ref --service
Execute command exit(0):
GUID:xxxyyyzzz
Execute command: /usr/bin/zypper --no-refresh --quiet --xmlout --non-interactive products --installed-only
Execute command exit(0):
installed products: $VAR1 = [
[
'SUSE_SLES-SP3-migration',
'11.2',
'',
'x86_64'
],
[
'SUSE_SLES',
'11.2',
'DVD',
'x86_64'
]
];
Execute command: /usr/bin/lscpu
Execute command exit(0):
Execute command: /usr/bin/zypper --non-interactive targetos
Execute command exit(0):
list-parameters: 0
xml-output: 0
no-optional: 0
batch: 0
forcereg: 0
no-hw-data: 0
log: /root/.suse_register.log
locale: undef
no-proxy: 0
yastcall: 0
arg: $VAR1 = {
'timezone' => {
'kind' => 'mandatory',
'value' => 'Europe/Vienna',
'flag' => 'i',
'description' => 'Timezone'
},
'ostarget' => {
'kind' => 'mandatory',
'value' => 'sle-11-x86_64',
'flag' => 'i',
'description' => 'Target operating system identifier'
},
'processor' => {
'kind' => 'mandatory',
'value' => 'x86_64',
'flag' => 'i',
'description' => 'Processor type'
},
'platform' => {
'kind' => 'mandatory',
'value' => 'x86_64',
'flag' => 'i',
'description' => 'Hardware platform type'
}
};
extra-curl-option:$VAR1 = [];
URL: https://secure-www.novell.com/center/regsvc
listParams: command=listparams
register: command=register
lang: english
initialDomain: .novell.com
SEND DATA to URI: https://secure-www.novell.com/center/regsvc?command=listproducts&lang=en-US&version=1.0:
About to connect() to secure-www.novell.com port 443 (#0)
Trying 130.57.66.5...
connected
Connected to secure-www.novell.com (130.57.66.5) port 443 (#0)
successfully set certificate verify locations:
CAfile: none
CApath: /etc/ssl/certs/
SSLv3, TLS handshake, Client hello (1):
SSLv3, TLS handshake, Server hello (2):
SSLv3, TLS handshake, CERT (11):
SSLv3, TLS handshake, Server finished (14):
SSLv3, TLS handshake, Client key exchange (16):
SSLv3, TLS change cipher, Client hello (1):
SSLv3, TLS handshake, Finished (20):
SSLv3, TLS change cipher, Client hello (1):
SSLv3, TLS handshake, Finished (20):
SSL connection using AES256-SHA
Server certificate:
subject: C=US; L=Provo; ST=Utah; O=Novell, Inc.; CN=*.novell.com
start date: 2015-02-23 00:00:00 GMT
expire date: 2018-05-31 12:00:00 GMT
subjectAltName: secure-www.novell.com matched
issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert SHA2 High Assurance Server CA
SSL certificate verify ok.
Connection #0 to host secure-www.novell.com left intact
CODE: 302 MESSAGE: Moved Temporarily
RECEIVED DATA:
HTTP/1.1 302 Moved Temporarily
Date: Wed, 25 Apr 2018 02:51:49 GMT
Server: Apache/2.2.34 (Linux/SUSE)
Strict-Transport-Security: max-age=31536000;includeSubDomains
Location: http://secure-www.novell.com/center/regsvc/?command=listproducts&lang=en-US&version=1.0
Fill: wwwfill3
Content-Length: 0
Content-Type: text/plain
X-Mag: xxxxxx;yyyyyyy;zzzzzz;usrLkup->0;usrBase->0;getPRBefFind->0;getPRBefFind->0;PRAfterFind->0;swww_root;publicURL->0;swww;RwDis;FF1End->0;FP2->0;WS=zzzzzz;FP4->4;
Set-Cookie: xxxxxxx-yyyyyyy=zzzzzz; Path=/; Domain=.novell.com
Via: 1.1 secure-www.novell.com (Access Gateway-ag-xxxxxxxx-yyyyyyyyy)
Set-Cookie: lb_novell=xxxxxxx; Domain=.novell.com; Path=/
). https is required.://secure-www.novell.com/center/regsvc/?command=listproducts&lang=en-US&version=1.0
(15)
). https is required.://secure-www.novell.com/center/regsvc/?command=listproducts&lang=en-US&version=1.0
(15)
Closing connection #0
SSLv3, TLS alert, Client hello (1):
Refresh the repositories and services:
VM-b0x:convey69:/convey69> sudo zypper ref -s
Refreshing service 'nu_novell_com'.
All services have been refreshed.
Repository 'SLES11-Extras' is up to date.
Repository 'SLES11-Pool' is up to date.
Repository 'SLES11-SP1-Pool' is up to date.
Repository 'SLES11-SP1-Updates' is up to date.
Repository 'SLES11-SP2-Core' is up to date.
Repository 'SLES11-SP2-Extension-Store' is up to date.
Repository 'SLES11-SP2-Updates' is up to date.
Repository 'SLES11-Updates' is up to date.
All repositories have been refreshed.
Check the list of repositories you can retrieve with zypper lr.
If any of these repositories is not enabled (the SP3 ones are not enabled by default when following this workflow), enable them with zypper modifyrepo --enable REPOSITORY ALIAS, for example:
VM-b0x:convey69:/convey69> sudo zypper modifyrepo --enable SLES11-SP3-Pool SLES11-SP3-Updates
Repository 'nu_novell_com:SLES11-SP3-Pool' has been successfully enabled.
Repository 'nu_novell_com:SLES11-SP3-Updates' has been successfully enabled.
If your setup contains third-party repositories that may not be compatible with SP3, disable them with zypper modifyrepo --disable REPOSITORY ALIAS.
Now everything is in place to perform the distribution upgrade with zypper dup --from REPO1 --from REPO2 … Make sure to list all needed repositories with --from, for example:
sudo zypper dup --from SLES11-SP3-Pool --from SLES11-SP3-Updates
Confirm with y to start the upgrade.
Upon completion of the distribution upgrade from the previous step, run the following command:
sudo zypper update -t patch
Now that the upgrade to SP3 has been completed, you need to re-register your product:
sudo suse_register -d 2 -L /root/.suse_register.log
Lastly, reboot your system with sudo /sbin/shutdown -r now
Your system has been successfully updated to Service Pack 3.
VM-b0x:convey69:/convey69> uname -a
Linux VM-b0x 3.0.101-0.47.71-default #1 SMP Thu Nov 12 12:22:22 UTC 2015 (b5b212e) x86_64 x86_64 x86_64 GNU/Linux
VM-b0x:convey69:/convey69> cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 3
For patching SLES 11 SP3 to SP4, the same steps apply; just do them properly in order: SP1 to SP2, SP2 to SP3, and lastly SP3 to SP4.
You may face a problem like this:
$ zypper dup --from SLES11-SP4-Pool --from SLES11-SP4-Updates
$ sudo zypper update -t patch
Refreshing service 'nu_novell_com'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Problem: openssh-askpass-6.2p2-0.24.1.x86_64 requires openssh = 6.2p2, but this requirement cannot be provided
uninstallable providers: openssh-6.2p2-0.9.1.x86_64[nu_novell_com:SLES11-SP3-Pool]
openssh-6.2p2-0.13.1.x86_64[nu_novell_com:SLES11-SP3-Updates]
openssh-6.2p2-0.21.1.x86_64[nu_novell_com:SLES11-SP3-Updates]
openssh-6.2p2-0.24.1.x86_64[nu_novell_com:SLES11-SP3-Updates]
Solution 1: Following actions will be done:
do not install patch:slessp3-openssh-2016011301-12325-1.noarch
do not install patch:slessp3-openssh-9357.noarch
Solution 2: Following actions will be done:
downgrade of openssh-6.6p1-36.15.1.x86_64 to openssh-6.2p2-0.24.1.x86_64
deinstallation of openssh-helpers-6.6p1-36.15.1.x86_64
Solution 3: deinstallation of openssh-askpass-1.2.4.1-1.46.x86_64
Solution 4: break openssh-askpass-6.2p2-0.24.1.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/3/4/c] (c): 1
Well, please choose whichever solution you prefer. The openssl, openssh and curl versions available here are too old anyway; we will take care of them manually (fetch the source code, configure, compile and install). I will write that tutorial later. Adios!
openSUSE Tumbleweed – Review of the week 2020/39
Dear Tumbleweed users and hackers,
During this week we have released ‘only’ three snapshots (0919, 0922 and 0923). But some of you might have noticed that we are finally sending the ‘build fail notification mails’ again, helping you be more laid back: you don’t have to look at your packages all the time, as the bot does that for you. Unfortunately, due to an OBS issue, this feature was broken for a little while.
The 3 snapshots we published contained these updates for you:
- KDE Frameworks 5.74.0
- Mesa 20.1.8
- firewalld 0.9.0
- Linux kernel 5.8.10
- Perl 5.30.3
Not as much change as in previous weeks and the list of staged changes is thus largely unchanged. Currently there are:
- GNOME 3.36.6
- openssl 1.1.1h
- openssh packaging layout change: ‘openssh’ will be a meta-package, pulling in openssh-server and openssh-clients. This will allow for finer-grained installation/removal of the server part where needed
- KDE Plasma 5.20 (currently beta being tested)
- glibc 2.32
- binutils 2.35
- gettext 0.21
- bison 3.7.1
- SELinux 3.1
- openssl 3.0
