
Nathan Wolf (Kubic Project)

Kubic with Kubernetes 1.19.0 released

Announcement

The Kubic Project is proud to announce that Snapshot 20200907 has been released containing Kubernetes 1.19.0.

Release Notes are available HERE.

Upgrade Steps

All newly deployed Kubic clusters will automatically use Kubernetes 1.19.0 from this point on.

For existing clusters, please follow our new documentation on our wiki HERE

Thanks and have a lot of fun!

The Kubic Team

YaST Team

Digest of YaST Development Sprint 107

During the last two weeks of August, the YaST team kept the same modus operandi as during the rest of the month, focusing on fixing bugs and polishing several internal aspects. But we also found some time to start working on some mid-term goals in the areas of AutoYaST and storage management. Find below a summary of the most interesting work addressed during the sprint that finished a week ago (sorry for the delay).

Although it may not look like much, the bright side is that we are already deep into the next sprint, so you will not have to wait long for more news from us. Meanwhile, stay safe and have fun!

Klaas Freitag

Screensharing with MS Teams and KDE: Black Screen

In the day job we use Microsoft Teams. The good news is that it runs on the Linux desktop, and specifically on KDE. So far, so good; however, I had a problem with screen sharing.

Whenever I tried to share my KDE screen, the screen turned black, surrounded by a red rectangle indicating the shared area. The people I shared with saw just the black area and the mouse pointer, the same as me.

The problem is described in a bug report, and there are two ways of solving it:

  1. Enable compositing: The red indicator rectangle requires that the window manager supports compositing. KWin can of course do that, and with compositing enabled, sharing works fine, including the red rectangle.
  2. If compositing cannot or should not be used, there is another workaround: as the bug report shows, renaming the file /usr/share/teams/resources/app.asar.unpacked/node_modules/slimcore/bin/rect-overlay so that it is not found by Teams fixes it as well. Obviously you won’t have the red rectangle with this solution; a sketch of the rename follows below.
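
The rename boils down to a single command. This is only a sketch: the path comes from the bug report above, and the .disabled suffix is an arbitrary choice.

# Move rect-overlay out of Teams' reach; any name Teams won't look for will do
sudo mv /usr/share/teams/resources/app.asar.unpacked/node_modules/slimcore/bin/rect-overlay \
        /usr/share/teams/resources/app.asar.unpacked/node_modules/slimcore/bin/rect-overlay.disabled

Note that a Teams update may restore the file, in which case the rename has to be repeated.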

That said, it is of course preferable to use an open source alternative to Teams to help these evolve.

Duncan Mac-Vicar

Raspberry Pi cluster with k3s & Salt (Part 1)

I have been running some workloads on Raspberry Pis with Leap for some time. I manage them using salt-ssh from a Pine64 running OpenBSD. You can read more about using Salt this way in my Using Salt like Ansible post.
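
For context, salt-ssh needs no agent on the Pis; a minimal roster is enough to target them over SSH. A sketch, with hypothetical ids and addresses:

# /etc/salt/roster
rpi01:
  host: 192.168.178.101
  user: root
rpi02:
  host: 192.168.178.102
  user: root

With that in place, salt-ssh 'rpi*' state.apply applies the configured states to all matching hosts.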

[Photo: the Raspberry Pi cluster]

The workloads ran in containers, which were managed with systemd and podman. Salt managed the systemd service files in /etc/systemd, which start, monitor, and stop the containers. For example, the homeassistant.sls state managed the service file for mosquitto:

homeassistant.mosquitto.deploy:
  file.managed:
    - name: /root/config/eclipse-mosquitto/mosquitto.conf
    - source: salt://homeassistant/files/mosquitto/mosquitto.conf

homeassistant.eclipse-mosquitto.container.service:
  file.managed:
    - name: /etc/systemd/system/eclipse-mosquitto.service
    - contents: |
        [Unit]
        Description=%N Podman Container
        After=network.target

        [Service]
        Type=simple
        TimeoutStartSec=5m
        ExecStartPre=-/usr/bin/podman rm -f "%N"
        ExecStart=/usr/bin/podman run -ti --rm --name="%N" -p 1883:1883 -p 9001:9001 -v /root/config/eclipse-mosquitto:/mosquitto/config -v /etc/localtime:/etc/localtime:ro --net=host docker.io/library/eclipse-mosquitto
        ExecReload=-/usr/bin/podman stop "%N"
        ExecReload=-/usr/bin/podman rm "%N"
        ExecStop=-/usr/bin/podman stop "%N"
        Restart=on-failure
        RestartSec=30

        [Install]
        WantedBy=multi-user.target
  service.running:
    - name: eclipse-mosquitto
    - enable: True
    - require:
      - pkg: homeassistant.podman.pkgs
      - file: /etc/systemd/system/eclipse-mosquitto.service
      - file: /root/config/eclipse-mosquitto/mosquitto.conf
    - watch:
      - file: /root/config/eclipse-mosquitto/mosquitto.conf

The Salt state also made sure the right packages and other details were ready before the service was started.
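
The homeassistant.podman.pkgs id required by the service above is not shown in the post; a minimal sketch of that state could look like:

homeassistant.podman.pkgs:
  pkg.installed:
    - pkgs:
      - podman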

This was very simple and worked well so far. One disadvantage is that the workloads are tied to a particular Pi. I was not going to make the setup more complex by building my own orchestrator.

Another disadvantage is that I was pulling the containers onto the SD card, and I was not expecting those cards to live long. After the card died, I took it as a good opportunity to redo this setup.

My long-term goal would be to netboot the Pis and have the storage mounted over the network. I am not very familiar with the whole procedure, so I will go step by step.

For the initial iteration, I decided on:

  • k3s (Lightweight Kubernetes) on the Pi’s
  • The k3s server to use a USB disk/SSD with btrfs as storage
  • The worker nodes to mount /var/lib/rancher/k3s from USB storage
  • Applying the states over almost stock Leap 15.2 images should result in a working cluster

All of the above managed with a salt-ssh tree in a git repository, just as I was used to.
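
To tie it together, the top file only needs to assign the states to the right minions. A sketch, with hypothetical minion ids matching the pillar below:

# top.sls
base:
  'rpi*':
    - k3s
  'rpi03':
    - homeassistant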

k3s installation

We start by creating k3s/init.sls. For the k3s state, I defined a minimal pillar with the server name and the shared token:

k3s:
  token: xxxxxxxxx
  server: rpi03

The first part of the k3s state ensures cgroups are configured correctly and disables swap:

k3s.boot.cmdline:
  file.managed:
    - name: /boot/cmdline.txt
    - contents: |
        cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

k3s.disable.swap:
  cmd.run:
    - name: swapoff -a
    - onlyif: swapon --noheadings --show=name,type | grep .

As the goal was to avoid using the SD card, the next state makes sure /var/lib/rancher/k3s is a mount. I have to admit I wasted quite some time getting the state for the storage mount right. Using mount.mounted did not work because it is buggy: it treats different btrfs subvolume mounts from the same device as the same mount.

k3s.volume.mount:
  mount.mounted:
    - name: /var/lib/rancher/k3s
    - device: /dev/sda1
    - mkmnt: True
    - fstype: btrfs
    - persist: False
    - opts: "subvol=/@k3s"

I then resorted to writing my own state. I discovered the awesome findmnt command, and my workaround looked like this:

k3s.volume.mount:
  cmd.run:
    - name: mount -t btrfs -o subvol=/@{{ grains['id'] }}-data /dev/sda1 /data
    - unless: findmnt --mountpoint /data --noheadings | grep '/dev/sda1[/@k3s]'
    - require:
      - file: k3s.volume.mntpoint

This later turned out to be a pain, as the k3s installer started k3s without caring whether this volume was mounted or not. Then I remembered: systemd does exactly this, managing mounts and their dependencies. That simplified the mount state to:

k3s.volume.mount:
  file.managed:
    - name: /etc/systemd/system/var-lib-rancher-k3s.mount
    - contents: |
        [Unit]

        [Install]
        RequiredBy=k3s.service
        RequiredBy=k3s-agent.service

        [Mount]
        What=/dev/sda1
        Where=/var/lib/rancher/k3s
        Options=subvol=/@k3s
        Type=btrfs
  cmd.run:
    - name: systemctl daemon-reload
    - onchanges:
      - file: k3s.volume.mount
  service.running:
    - name: var-lib-rancher-k3s.mount
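
One detail worth noting: the unit file name must be the systemd-escaped form of the mount point, which systemd-escape can generate for you:

systemd-escape -p --suffix=mount /var/lib/rancher/k3s
# prints: var-lib-rancher-k3s.mount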

The k3s state works as follows: it runs the installation script in server or agent mode, depending on whether the pillar entry k3s:server matches the node where the state is applied.

{%- set k3s_server = salt['pillar.get']('k3s:server') -%}
{%- if grains['id'] == k3s_server %}
{%- set k3s_role = 'server' -%}
{%- set k3s_suffix = "" -%}
{%- else %}
{%- set k3s_role = 'agent' -%}
{%- set k3s_suffix = '-agent' -%}
{%- endif %}

k3s.{{ k3s_role }}.install:
  cmd.run:
    - name: curl -sfL https://get.k3s.io | sh -s -
    - env:
      - INSTALL_K3S_TYPE: {{ k3s_role }}
{%- if k3s_role == 'agent' %}
      - K3S_URL: "https://{{ k3s_server }}:6443"
{%- endif %}
      - INSTALL_K3S_SKIP_ENABLE: "true"
      - INSTALL_K3S_SKIP_START: "true"
      - K3S_TOKEN: {{ salt['pillar.get']('k3s:token', {}) }}
    - unless:
      # Run install on these failed conditions
      # No binary
      - ls /usr/local/bin/k3s
      # Token changed/missing
      - grep '{{ salt['pillar.get']('k3s:token', {}) }}' /etc/systemd/system/k3s{{ k3s_suffix }}.service.env
      # Changed/missing server
{%- if k3s_role == 'agent' %}
      - grep 'K3S_URL=https://{{ k3s_server }}:6443' /etc/systemd/system/k3s{{ k3s_suffix }}.service.env
{%- endif %}
    - require:
      - service: k3s.volume.mount
      - service: k3s.kubelet.volume.mount

k3s.{{ k3s_role }}.running:
  service.running:
    - name: k3s{{ k3s_suffix }}
    - enable: True
    - require:
      - cmd: k3s.{{ k3s_role }}.install
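
Once the state has been applied to all nodes, a quick sanity check from the server node should show the agents joining; k3s ships its own kubectl:

sudo k3s kubectl get nodes -o wide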

Workloads

The next step is to move workloads like homeassistant into this setup.

k3s can automatically deploy manifests located in /var/lib/rancher/k3s/server/manifests. We can deploy, e.g., mosquitto with the following:

homeassistant.mosquitto:
  file.managed:
    - name: /var/lib/rancher/k3s/server/manifests/mosquitto.yml
    - source: salt://homeassistant/files/mosquitto.yml
    - require:
      - service: k3s.volume.mount

With mosquitto.yml being:

---
apiVersion: v1
kind: Namespace
metadata:
  name: homeassistant
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
  namespace: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: docker.io/library/eclipse-mosquitto
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 1883
          imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
  namespace: homeassistant
spec:
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
      protocol: TCP
  selector:
    app: mosquitto
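
If k3s picked up the manifest, the deployment and service should show up. A quick check, run on the server node:

sudo k3s kubectl -n homeassistant get deployments,pods,services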

Homeassistant is no different, except that we use a ConfigMap resource to store the configuration and define an Ingress resource to access it from the LAN:

---
apiVersion: v1
kind: Namespace
metadata:
  name: homeassistant
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: homeassistant-config
  namespace: homeassistant
data:
  configuration.yaml: |
    homeassistant:
      auth_providers:
        - type: homeassistant
        - type: trusted_networks
          trusted_networks:
            - 192.168.178.0/24
            - 10.0.0.0/8
            - fd00::/8
          allow_bypass_login: true
      name: Home
      latitude: xx.xxxx
      longitude: xx.xxxx
      elevation: xxx
      unit_system: metric
      time_zone: Europe/Berlin
    frontend:
    config:
    http:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homeassistant
  namespace: homeassistant
spec:
  replicas: 1
  selector:
    matchLabels:
      app: homeassistant
  template:
    metadata:
      labels:
        app: homeassistant
    spec:
      containers:
        - name: homeassistant
          image: homeassistant/raspberrypi3-64-homeassistant:stable
          volumeMounts:
            - name: config-volume-configuration
              mountPath: /config/configuration.yaml
              subPath: configuration.yaml
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8123
            initialDelaySeconds: 30
            timeoutSeconds: 30
          resources:
            requests:
              memory: "512Mi"
              cpu: "100m"
            limits:
              memory: "1024Mi"
              cpu: "500m"
          ports:
            - containerPort: 8123
              protocol: TCP
          imagePullPolicy: Always
      volumes:
        - name: config-volume-configuration
          configMap:
            name: homeassistant-config
            items:
              - key: configuration.yaml
                path: configuration.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: homeassistant
  namespace: homeassistant
spec:
  selector:
    app: homeassistant
  ports:
    - port: 8123
      targetPort: 8123
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: homeassistant
  namespace: homeassistant
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
    - host: homeassistant.int.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: homeassistant
              servicePort: 8123

Setting up Ingress was the most time-consuming part. It took me a while to figure out how it was supposed to work, and customizing the Traefik Helm chart is not intuitive to me. While homeassistant was rather straightforward, as it is a simple HTTP service behind an SSL proxy, the Kubernetes dashboard is already deployed with SSL inside the cluster. I am still figuring out how ingress.kubernetes.io/protocol: https, traefik.ingress.kubernetes.io/pass-tls-cert: "true" (oh, don’t forget the quotes!) and insecureSkipVerify work together, and what the best way is to expose it to the LAN.

In a future post, I will describe the dashboards setup, and other improvements.

Benji’s Blog

Reasons to hire inexperienced engineers

There are many reasons to consider hiring inexperienced software engineers into your team, beyond the commonly discussed factors of cost and social responsibility.

Hire to maximise team effectiveness; not to maximise team size. Adding more people increases the communication and synchronisation overhead in the team. Growing a team has rapidly diminishing returns.

However, adding the right people, perspectives, skills, and knowledge into a team can transform that team’s impact. Instantly unblocking problems that would have taken days of research. Resolving debates that would have paralysed. The right balance between planning and action.

It’s easy to undervalue inexperienced software engineers as part of a healthy team mix. While teams made up entirely of senior software engineers can be highly effective, there are many benefits beyond cost and social responsibility to hiring entry-level and junior software engineers onto your team.

Fresh Perspectives

Experienced engineers have learned lots of so-called “best practices”, or dogma. Mostly these are good habits that are safer ways of working, save time, and aid learning. On the other hand, sometimes the context has changed and these practices are no longer useful, but we carry on doing them anyway out of habit. Sometimes there’s a better way now that tech has moved on, and we haven’t even stopped to consider it.

There’s a lot of value in having people on the team who’ve yet to develop the same biases. People who’ll force you to think through and articulate why you do the things you’ve come to take for granted. The reflection may help you spot a better way.

To take advantage you need sufficient psychological safety that anyone can ask a question without fear of ridicule. This also benefits everyone.

Incentive for Simplicity and Safety

A team of experienced engineers may be able to tolerate a certain amount of accidental code complexity. Their expertise may enable them to work relatively safely without good test safety nets and with gaps in their monitoring. I’m sure you know better ;)

Needing to make our code simple enough for a new software engineer to understand and change exerts positive pressure on our code quality.

Having to make it safe to fail, protecting everyone on the team from making a change that takes down production or corrupts data, helps us all. We’re all human.

Don’t have any junior engineers? What would you do differently if you knew someone new to programming was joining your team next week? Which of those things should you be doing anyway? How many would pay back their investment even with experienced engineers? How much risk and complexity are you tolerating? What’s its cost?

Growth opportunity for others

Teaching, advising, mentoring, coaching less experienced people on the team can be a good development opportunity for others. Teaching helps deepen your own understanding of a topic. Practising your ability to lift others up will serve you well.

Level up fast

It can be humbling how swiftly new developers can get up to speed and become highly productive. Particularly in an environment that really values learning. Pair programming can be a tremendous accelerator for learning through doing. True pairing, i.e. solving problems together, rather than spoonfeeding or observing.

Tenure

The amount of software engineering experience is one indicator of the impact an individual can have. The amount of experience within your organisation is also relevant. If you only hire senior people and your org is not growing fast enough to provide them with further career development opportunities, they are more likely to leave to find those opportunities elsewhere. It can be easier to find growth opportunities for people earlier in their career.

A mix of seniorities can help increase the average tenure of developers in your organisation—assuming you will indeed support them with their career development.

Action over Analysis

Junior engineers often bring a healthy bias towards getting on with doing things over excessive analysis. Senior engineers sometimes get stuck evaluating foreseen possibilities, finding “the best tool for the job”, or debating minutiae ad nauseam. Balancing the desire to do the right things right with the desire to do something, anything, quickly can be transformational for a team.

Hire Faster

There are more inexperienced people than experienced ones. It’s quicker to find people if we relax our experience and skill requirements. Some underrepresented minorities may be less underrepresented at more junior levels.

The inexperienced engineer you hire today could be the senior engineer you need in years to come.

To ponder

What other reasons have I missed? In what contexts is the opposite true? When would you only hire senior engineers?



What would you like to see most in minix?

I’m working on a couple of presentations and I wanted to share this nugget of joy with anyone who hasn’t actually read it.

Path: gmdzi!unido!fauern!ira.uka.de!sol.ctr.columbia.edu!zaphod.mps.
ohio-state.edu!wupost!uunet!mcsun!news.funet.fi!hydra!klaava!torvalds
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Keywords: 386, preferences
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki
Lines: 20

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones.  This has been brewing
since april, and is starting to get ready.  I'd like any feedback on
things people like/dislike in minix, as my OS resembles it somewhat
(same physical layout of the file-system (due to practical reasons)
among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work.
This implies that I'll get something practical within a few months, and
I'd like to know what features most people would want.  Any suggestions
are welcome, but I won't promise I'll implement them :-)

Linus (torvalds@kruuna.helsinki.fi)

PS.  Yes - it's free of any minix code, and it has a multi-threaded fs.
It is NOT protable (uses 386 task switching etc), and it probably never
will support anything other than AT-harddisks, as that's all I have :-(.


openSUSE Tumbleweed – Review of the week 2020/36

Dear Tumbleweed users and hackers,

During the last week, we have shipped a massive snapshot due to a full rebuild (RPM libexecdir change, rpm payload compression change, and symlink changes to /etc/alternatives). The libexecdir change was not exactly as smooth as we wished it to be; on the other hand, it was also not disastrous. The biggest issue seen by users was likely the no-longer-working ‘man’ program. This was a trap by update-alternatives, which does not like changing target paths, paired with a packaging bug. In total, we have shipped 5 snapshots (0826, 0829, 0831, 0901 and 0902).

The most noteworthy changes were:

  • Mesa 20.1.6
  • Linux kernel 5.8.2 & 5.8.4
  • rpm: %{_libexecdir} is now /usr/libexec, and the payload compression changed from xz to zstd. This results in faster decompression of the RPMs at roughly the same size (on average even a bit smaller); a quick check is sketched after this list
  • VirtualBox 6.1.13: working support for kernel 5.8
  • Mozilla Firefox 80.0
  • Kubernetes 1.19
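
A quick way to verify both rpm changes on an updated system; the package file name is a placeholder, and the commented output is what we expect:

rpm --eval '%{_libexecdir}'
# /usr/libexec
rpm -qp --queryformat '%{PAYLOADCOMPRESSOR}\n' some-package.rpm
# zstd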

Currently, the developers are busy bringing you these changes, which are being tested in the staging areas:

  • KDE Applications 20.08.1
  • KDE Plasma 5.19.5
  • systemd 246
  • glibc 2.32
  • binutils 2.35
  • gettext 0.21
  • bison 3.7.1
  • SELinux 3.1

openSUSE News

Tumbleweed Rises from Rebuilt Packages

With “literally all 15,000” packages being rebuilt in snapshot 20200826, openSUSE Tumbleweed roared back from a stability rating of 36 in the rebuild snapshot to a 95 rating in snapshot 20200901, according to the snapshot reviewer.

Each snapshot progressively increased in stability this week.

Snapshot 20200901 brought ImageMagick 7.0.10.28, which provided a patch for correct colorspace and fixed paths for conversion of Photoshop EPS files. VirtualBox 6.1.13 arrived in the snapshot and updated the sources to run with versions above the 5.8 Linux Kernel with no modifications needed to the kernel. The library for rendering PostScript documents, libspectre 0.2.9, now requires Ghostscript 9.24 and fixed memory leaks and crashes caused by malformed documents. One major-version update, to the game freecell-solver, was made in the snapshot; version 6.0.1 had some code cleanup, minor bug fixes and the addition of a compile-time option. openSUSE’s snapper package updated to 0.8.13 and fixed the Logical Volume Manager setup for volume groups and logical volumes with one-character-long names. Other notable packages updated in the snapshot were xapian-core 1.4.17, openldap2 2.4.52 and qalculate 3.12.1.

Trending at an 87 rating, snapshot 20200831 brought less than a handful of updates. The packages updated in the snapshot were bind 9.16.6, libverto 0.3.1, permissions 1550_20200826, and suse-module-tools 15.3.4. The bind package, which implements the Domain Name System (DNS) protocols for the Internet, fixed several Common Vulnerabilities and Exposures, including one that made it possible to trigger an assertion failure by sending a specially crafted large TCP DNS message.

Snapshot 20200829, which is likely to record a rating of 69, updated the Linux Kernel to 5.8.4. Some Advanced Linux Sound Architecture additions arrived in the kernel update. VLC player 3.0.11.1 fixed the handling of subtitles and fixed the HLS playlist that was unable to start. Fetchmail 6.4.8 plugged a memory leak where parts of the configuration overrode one another and the configuration now supports Python 3. Flatpak 1.8.2 provided a fix for seccomp filters on s390. GNU Compiler Collection 10 made some fixes and nano 5.2 made some changes that prevent a crash after a large paste in the text editor. Packages pango 1.46.1, lvm2 2.03.10 and mariadb 10.4.14 were also updated in the snapshot along with several other packages.

The 20200826 snapshot that rebuilt all the packages had a few updated versions and an update to the 5.8.2 Linux Kernel. The updates included Mesa 20.1.6, which dropped some dependencies and provided a note in the spec file to enable the dependencies when changing the defines. Several Python packages, including python-setuptools, were updated in the snapshot; python-setuptools 44.1.1 had a notable change referencing a fix for Python 4 in the changelog. Squid, which optimises web delivery, improved Transfer-Encoding handling in version 4.13 and enforces token characters for field names.

Just Another Tech Blog

How to write a Samba Winbind Group Policy Extension

This tutorial will explain how to write a Group Policy Extension for Samba’s Winbind. Group Policy is a delivery mechanism for distributing system settings and company policies to machines joined to an Active Directory domain. Unix/Linux machines running Samba’s Winbind can also deploy these policies. Read my previous post about which Client Side Extensions and Server Side Extensions currently exist in Winbind.

Creating the Server Side Extension

The first step to deploying Group Policy is to create a Server Side Extension (SSE). There are multiple ways to create an SSE, but here we’ll only discuss Administrative Templates (ADMX). The purpose of the SSE is to deploy policies to the SYSVOL share. Theoretically, you could manually deploy any file (even plain text) to the SYSVOL and then write a Client Side Extension that parses it, but ADMX can be read and modified by the Group Policy Management Editor, which makes administration of policies simpler.

ADMX files are simply XML files which explain to the Group Policy Management Console how to display and store a policy in the SYSVOL. ADMX files always store policies in Registry.pol files. Samba provides a mechanism for parsing these, which we’ll discuss later.

Below is a simple example of an ADMX template and its corresponding ADML file.

samba.admx:

<policyDefinitions revision="1.0" schemaVersion="1.0">
  <policyNamespaces>
    <target prefix="fullarmor" namespace="FullArmor.Policies.98BB16AF_01EE_4D17_870D_A3311A44D6C2" />
    <using prefix="windows" namespace="Microsoft.Policies.Windows" />
  </policyNamespaces>
  <supersededAdm fileName="" />
  <resources minRequiredRevision="1.0" />
  <categories>
    <category name="CAT_3338C1DD_8A00_4273_8547_158D8B8C19E9" displayName="$(string.CAT_3338C1DD_8A00_4273_8547_158D8B8C19E9)" />
    <category name="CAT_7D8D7DC8_5A9D_4BE1_8227_F09CDD5AFFC6" displayName="$(string.CAT_7D8D7DC8_5A9D_4BE1_8227_F09CDD5AFFC6)">
      <parentCategory ref="CAT_3338C1DD_8A00_4273_8547_158D8B8C19E9" />
    </category>
  </categories>
  <policies>
    <policy name="POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061" class="Machine" displayName="$(string.POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061)" explainText="$(string.POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061_Help)" presentation="$(presentation.POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061)" key="Software\Policies\Samba\Unix Settings">
      <parentCategory ref="CAT_7D8D7DC8_5A9D_4BE1_8227_F09CDD5AFFC6" />
      <supportedOn ref="windows:SUPPORTED_WindowsVista" />
      <elements>
        <list id="LST_2E9A4684_3C0E_415B_8FD6_D4AF68BC8AC6" key="Software\Policies\Samba\Unix Settings\Daily Scripts" valueName="Daily Scripts" />
      </elements>
    </policy>
  </policies>
</policyDefinitions>

en-US/samba.adml:

<policyDefinitionResources revision="1.0" schemaVersion="1.0">
  <displayName>
  </displayName>
  <description>
  </description>
  <resources>
    <stringTable>
      <string id="CAT_3338C1DD_8A00_4273_8547_158D8B8C19E9">Samba</string>
      <string id="CAT_7D8D7DC8_5A9D_4BE1_8227_F09CDD5AFFC6">Unix Settings</string>
      <string id="POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061">Daily Scripts</string>
      <string id="POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061_Help">This policy setting allows you to execute commands, 
either local or on remote storage, daily.</string>
    </stringTable>
    <presentationTable>
      <presentation id="POL_9320E11F_AC80_4A7D_A5C8_1C0F3F727061">
        <listBox refId="LST_2E9A4684_3C0E_415B_8FD6_D4AF68BC8AC6">Script and arguments</listBox>
      </presentation>
    </presentationTable>
  </resources>
</policyDefinitionResources>

The meanings of the various tags are explained in Microsoft’s Group Policy documentation. Before the endless documentation and confusing XML scare you away, be aware there is an easier way!

ADMX Migrator

FullArmor created the ADMX Migrator to simplify the shift for system administrators from the old ADM policy templates to ADMX. Fortunately, this tool also serves our purpose of easily creating these templates for our SSE. Unfortunately, the tool hasn’t seen any development in the past 8 years, and won’t run on Windows 10 (or any Unix/Linux platform, for that matter). I had to dredge up a Windows 7 VM in order to install and use the tool (supposedly Vista works as well, but who wants to install that?).

Creating the Administrative Template

  1. Open ADMX Migrator
  2. Right click on ADMX Templates in the left tree view, and click New Template.
  3. Give your template a name, and click OK.
  4. Right click on the new template in the left tree view, and click New Category.
  5. Give the Category a name. This name will be displayed in the Group Policy Management Editor under Administrative Templates. You can choose to nest the template under an existing category, or simply add it as a new root.
    Note: You can also add sub-categories under this category. After clicking OK, right click the category you created and select New Category.
  6. Next, create your policy by right clicking on your new category, and selecting New Policy Setting.
  7. Because we’ll be applying these settings to a Linux machine, the Registry fields are mostly meaningless, but they are required. Your policies will be stored under these keys on the SYSVOL in the Registry.pol file. Choose some sensible Registry Key, such as ‘Software\Policies\Samba\Unix Settings’, and a Registry Value Name, such as ‘Daily Scripts’ (these are the values used for Samba’s cron.daily policy). The Display Name is the name that will be displayed for this policy in the Group Policy Management Editor. I usually make this the same as the Registry Value Name, but it doesn’t need to be.
  8. Select whether this policy will be applied to a Machine, a User, or to Both in the Class field. In our example, we could potentially set Both, then our Client Side Extension would need to handle both cron.daily scripts (the Machine) and also User crontab entries. Click OK for your policy to be created.
  9. Your new policy will appear in the middle list view. Highlight it, and you will see a number of tabs below for configuring the policy.
  10. Select the Values tab and set the Enabled Value Type. In this case, we’ll use String, since our cron commands will be saved to the Registry.pol as a string. In the Value field, you can set a default enabled value (this is optional).
  11. Select the Presentation tab, right click in the Elements view, and click New Element > ListBox (or a different presentation, depending on the policy). If you look at the samba.adml file from the previous section, you’ll notice that the presentationTable contains a listBox item. That’s what we’re creating here.
  12. Choose an element Label, this will be the name for the list displayed in the Group Policy Management Editor.
  13. Choose a Registry Key. This will be pre-populated with the parent Registry Key you gave when creating the policy. Append something to the key to make it unique. We’ll use ‘Software\Policies\Samba\Unix Settings\Daily Scripts’ for our cron.daily policy.
  14. Navigate to the Explain tab, and add an explanation of what this policy is and what it does. This will be displayed to users in the Group Policy Management Editor.
  15. Now right click on your template name in the left tree, and select Save As.
  16. Finally, you’ll need to deploy your new policy definition to the SYSVOL. It should be saved to the Policies\PolicyDefinitions directory (the Group Policy Central Store). These instructions from Microsoft can assist you in setting up your Group Policy Central Store.
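
Assuming a central store for a domain called example.com, the deployed files end up at paths like these (illustrative only):

\\example.com\SYSVOL\example.com\Policies\PolicyDefinitions\samba.admx
\\example.com\SYSVOL\example.com\Policies\PolicyDefinitions\en-US\samba.adml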

Creating the Client Side Extension

The Client Side Extension (CSE) is the code that will be called by Samba’s Winbind to apply our newly created policy on a client machine. CSEs must be written in Python 3, and are registered and unregistered using Samba’s register_gp_extension() and unregister_gp_extension() functions.

#!/usr/bin/python3
# gp_scripts_ext samba gpo policy
# Copyright (C) David Mulder <dmulder@suse.com> 2020
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import os, re
from samba.gpclass import gp_pol_ext, register_gp_extension, \
    unregister_gp_extension, list_gp_extensions
from base64 import b64encode
from tempfile import NamedTemporaryFile
from samba import getopt as options
import optparse

intro = '''
### autogenerated by samba
#
# This file is generated by the gp_scripts_ext Group Policy
# Client Side Extension. To modify the contents of this file,
# modify the appropriate Group Policy objects which apply
# to this machine. DO NOT MODIFY THIS FILE DIRECTLY.
#

'''

class gp_scripts_ext(gp_pol_ext):
    def __str__(self):
        return 'Unix Settings/Scripts'

    def process_group_policy(self, deleted_gpo_list, changed_gpo_list):

        # Iterate over GPO guids and their previous settings, reverting
        # changes made by this GPO.
        for guid, settings in deleted_gpo_list:

            # Tell the Group Policy database which GPO we're working on
            self.gp_db.set_guid(guid)

            if str(self) in settings:
                for attribute, script in settings[str(self)].items():
                    # Delete the applied policy
                    if os.path.exists(script):
                        os.unlink(script)

                    # Remove the stored removal information
                    self.gp_db.delete(str(self), attribute)

            # Commit the changes to the Group Policy database
            self.gp_db.commit()

        # Iterate over GPO objects, applying new policies found in the SYSVOL
        for gpo in changed_gpo_list:
            if gpo.file_sys_path:
                reg_key = 'Software\\Policies\\Samba\\Unix Settings'
                sections = { '%s\\Daily Scripts' % reg_key : '/etc/cron.daily',
                             '%s\\Monthly Scripts' % reg_key : '/etc/cron.monthly',
                             '%s\\Weekly Scripts' % reg_key : '/etc/cron.weekly',
                             '%s\\Hourly Scripts' % reg_key : '/etc/cron.hourly' }

                # Tell the Group Policy database which GPO we're working on
                self.gp_db.set_guid(gpo.name)

                # Load the contents of the Registry.pol from the SYSVOL
                pol_file = 'MACHINE/Registry.pol'
                path = os.path.join(gpo.file_sys_path, pol_file)
                pol_conf = self.parse(path)
                if not pol_conf:
                    continue

                # For each policy in the Registry.pol, apply the settings
                for e in pol_conf.entries:
                    if e.keyname in sections.keys() and e.data.strip():
                        cron_dir = sections[e.keyname]
                        attribute = '%s:%s' % (e.keyname,
                                b64encode(e.data.encode()).decode())

                        # Check if this policy has already been applied in the
                        # Group Policy database. Skip the apply if it's already
                        # been installed.
                        old_val = self.gp_db.retrieve(str(self), attribute)

                        if not old_val:

                            # Create a temporary file and store policy in it,
                            # then move the file to the cron directory for
                            # execution.
                            with NamedTemporaryFile(prefix='gp_', mode="w+",
                                    delete=False, dir=cron_dir) as f:
                                contents = '#!/bin/sh\n%s' % intro
                                contents += '%s\n' % e.data
                                f.write(contents)
                                os.chmod(f.name, 0o700)

                                # Store the name of the applied file, so that
                                # we can remove it later on unapply (see the
                                # first loop in this function).
                                self.gp_db.store(str(self), attribute, f.name)

                        # Commit the changes to the Group Policy database
                        self.gp_db.commit()

    def rsop(self, gpo):
        output = {}
        pol_file = 'MACHINE/Registry.pol'
        if gpo.file_sys_path:
            path = os.path.join(gpo.file_sys_path, pol_file)
            pol_conf = self.parse(path)
            if not pol_conf:
                return output
            for e in pol_conf.entries:
                key = e.keyname.split('\\')[-1]
                if key.endswith('Scripts') and e.data.strip():
                    if key not in output.keys():
                        output[key] = []
                    output[key].append(e.data)
        return output

if __name__ == "__main__":
    parser = optparse.OptionParser('gp_scripts_ext.py [options]')
    sambaopts = options.SambaOptions(parser)
    parser.add_option_group(sambaopts)

    parser.add_option('--register', help='Register extension to Samba',
                      action='store_true')
    parser.add_option('--unregister', help='Unregister extension from Samba',
                      action='store_true')

    (opts, args) = parser.parse_args()

    # We're collecting the Samba loadparm simply to find our smb.conf file
    lp = sambaopts.get_loadparm()

    # This is a random unique GUID, which identifies this CSE.
    # Any random GUID will do.
    ext_guid = '{5930022C-94FF-4ED5-A403-CFB4549DB6F0}'
    if opts.register:
        # The extension path is the location of this file. This script
        # should be executed from a permanent location.
        ext_path = os.path.realpath(__file__)
        # The machine and user parameters tell Samba whether to apply this
        # extension to the computer, to individual users, or to both.
        register_gp_extension(ext_guid, 'gp_scripts_ext', ext_path,
                              smb_conf=lp.configfile, machine=True, user=False)
    elif opts.unregister:
        # Remove the extension and do not apply policy.
        unregister_gp_extension(ext_guid)

    # List the currently installed Group Policy Client Side Extensions
    exts = list_gp_extensions(lp.configfile)
    for guid, data in exts.items():
        print(guid)
        for k, v in data.items():
            print('\t%s: %s' % (k, v))

The gp_pol_ext/gp_inf_ext Python Classes

Your CSE must be a class that inherits from a subclass of gp_ext. The gp_pol_ext is a subclass of gp_ext that provides simplified parsing of Registry.pol files. If you choose to store your policies in ini/inf files in the SYSVOL (instead of using Administrative Templates), then you can inherit from the gp_inf_ext instead.

If your class inherits from either gp_pol_ext or gp_inf_ext, it has a parse() function defined, which takes a single filename. The parse() function will parse the contents of the policy file and return it in a sensible format.

If for some reason you choose to store data on the SYSVOL in some other format (such as XML), you’ll need to subclass gp_ext, then implement a read() function, like this:

import xml.etree.ElementTree
class gp_xml_ext(gp_ext):
    def read(self, data_file):
        return xml.etree.ElementTree.parse(data_file)

The read() function is called by parse(), which passes it a local filename tied to the system’s SYSVOL cache. Then, within process_group_policy(), you can call parse() to fetch the parsed data from the SYSVOL.

Process Group Policy

The process_group_policy() function serves two primary purposes: it applies new policy, and it removes old policy. In the example, you’ll notice there are two main loops. The first loop iterates over the deleted_gpo_list, which contains all the policies that should be removed. The second loop iterates over the changed_gpo_list, which contains the new policies that must be applied.

Deleted GPOs

for guid, settings in deleted_gpo_list:
    self.gp_db.set_guid(guid)
    if str(self) in settings:
        for attribute, script in settings[str(self)].items():
            if os.path.exists(script):
                os.unlink(script)
            self.gp_db.delete(str(self), attribute)
    self.gp_db.commit()

The deleted_gpo_list is a dictionary which contains the guids of Group Policy Objects, with associated settings which were previously applied. This list of applied settings is generated by the second loop (changed_gpo_list) while it is applying policy. The self.gp_db object is a database handle which allows you to manipulate the Group Policy Database.

The Group Policy Database keeps track of all settings that have been applied, and info on how to revert those applied settings. For example, in the smb.conf CSE, the attribute stored in settings could be client max protocol and the stored value could be NT1. If our GPO has applied a client max protocol set to SMB3_11, the Group Policy database keeps track of the previous value of NT1 for when it’s time to remove the SMB3_11 policy.

In our example cron script policy above, you can see that the stored value is a filename (the script variable). This is the name of the script we applied, which allows us to delete it later (thereby removing the policy). The scripts CSE is a special case where we don’t need to keep track of an old value to restore; instead we keep track of files to delete (since the ‘old value’ is that the script didn’t exist).

Changed GPOs

for gpo in changed_gpo_list:
    if gpo.file_sys_path:
        reg_key = 'Software\\Policies\\Samba\\Unix Settings'
        sections = { '%s\\Daily Scripts' % reg_key : '/etc/cron.daily',
                     '%s\\Monthly Scripts' % reg_key : '/etc/cron.monthly',
                     '%s\\Weekly Scripts' % reg_key : '/etc/cron.weekly',
                     '%s\\Hourly Scripts' % reg_key : '/etc/cron.hourly' }
        self.gp_db.set_guid(gpo.name)
        pol_file = 'MACHINE/Registry.pol'
        path = os.path.join(gpo.file_sys_path, pol_file)
        pol_conf = self.parse(path)
        if not pol_conf:
            continue
        for e in pol_conf.entries:
            if e.keyname in sections.keys() and e.data.strip():
                cron_dir = sections[e.keyname]
                attribute = '%s:%s' % (e.keyname,
                        b64encode(e.data.encode()).decode())
                old_val = self.gp_db.retrieve(str(self), attribute)
                if not old_val:
                    with NamedTemporaryFile(prefix='gp_', mode="w+",
                            delete=False, dir=cron_dir) as f:
                        contents = '#!/bin/sh\n%s' % intro 
                        contents += '%s\n' % e.data
                        f.write(contents)
                        os.chmod(f.name, 0o700)
                        self.gp_db.store(str(self), attribute, f.name) 
                self.gp_db.commit()

The second loop is a little more involved. When we iterate over changed_gpo_list, we’re actually iterating over a list of GPO objects. The attributes of each object are:

  • gpo.name: The GUID of the GPO.
  • gpo.file_sys_path: A physical path to a cache of GPO on the local filesystem.

There are other methods and attributes, but these are the only ones important to a CSE.

The primary purpose of this loop is to iterate over the GPOs, read their policy in the SYSVOL, then check the sections for the Registry Key we created in our Server Side Extension. If our policy Registry Key exists, then we read the entry and apply the policy.

In our example, we find the ‘Software\Policies\Samba\Unix Settings\Daily Scripts’ policy, read the script contents from the Registry.pol entry, and write the script to a local file. In the example code, we create a uniquely named file directly in its permanent location, /etc/cron.daily.

We also have to make sure we store the name of the new script in our self.gp_db (Group Policy Database). This ensures that we can remove the policy when the GPO is deleted. At the beginning of the loop, we also need to make sure we call set_guid() on our Group Policy Database, so it knows which GPO we’re operating on.

Resultant Set of Policy

The rsop() function in the extension is optional. It should return a dictionary containing key/value pairs describing what our policy will apply or has applied. The function is passed a single GPO object (like the entries of our changed_gpo_list), and we parse it much as we did in the changed_gpo_list loop. The difference is that rsop() does not apply any policy; it only returns what would be applied. This function enables the samba-gpupdate --rsop command to report on applied, or soon-to-be-applied, policy.
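
With the extension registered, the report can be generated from the command line:

# Print the Resultant Set of Policy without applying anything
sudo samba-gpupdate --rsop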

Registering/Unregistering a Client Side Extension

The final step is to register an extension on the host. While the example code provides a detailed example of how to register an extension, the basic requirement is simply to call register_gp_extension().

ext_guid = '{5930022C-94FF-4ED5-A403-CFB4549DB6F0}'
ext_path = os.path.realpath(__file__)
register_gp_extension(ext_guid, 'gp_scripts_ext', ext_path,
    smb_conf='/etc/samba/smb.conf', machine=True, user=False)

The extension guid can be any random guid. It simply must be unique among all extensions that you register to the host. The extension path is literally just the path to the source file containing your CSE.

You must pass your smb.conf file to the extension, so it knows where to store the list of registered extensions. You also must specify whether to apply this extension to the machine, or to individual users (or to both).

Unregistering the extension is simple: call unregister_gp_extension() and pass it the unique guid you previously chose to represent this CSE.
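
Using the example script’s own command-line options, registration and removal look like this; the install path is hypothetical, but remember the script must live somewhere permanent:

sudo python3 /usr/local/sbin/gp_scripts_ext.py --register
sudo python3 /usr/local/sbin/gp_scripts_ext.py --unregister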

Conclusion

I hope this tutorial proves useful. Please comment below if you have questions! You can also reach me on freenode in the samba-technical channel (nick: dmulder).