
Horde trustr – A new Horde CA app step by step

Trustr is my current project to create a simple certificate management app.
I decided that it is just about the right scope to demonstrate a few things about application development in Horde 5.

I have not done any research into whether the name is already taken by some other software. Should any problems arise, please contact me and we will find a good solution. I just wanted to start without losing much time on unrelated issues.

My goals as of now:

– Keep everything neat, testable, fairly decoupled and reusable. The core logic should be exportable to a separate library without much change. There won’t be any class of static shortcut methods pulling stuff out of nowhere. Config and registry are only accessed at select points, never in the deeper layers.
– Provide a CLI using Horde_Cli and Horde_Cli_Application (modeled after the backup tool in horde base git)
– Store to relational database using Horde_Db and Horde_Rdo for abstraction
– Use the PHP openssl extension for certificate actions, but design with future options in mind
– Rely on magic openssl defaults as little as possible
– Use conf.xml / conf.php for any global defaults
– Show how to use the inter-app API (reusable for xml-rpc and json-rpc)
– Showcase an approach to REST in Horde (experimental)

The app is intended as a resource provider. The UI is NOT a top priority. However, I am currently toying around with a Flux-like design in some unrelated larger project and I may or may not try some ideas later on.

Initial Steps: Creating the working environment

I set up a new Horde development container using the Horde Tumbleweed image from Open Build Service and a docker compose file from my colleague Florian Frank. Please note that both are works in progress and improve-as-needed projects.


git clone https://github.com/FrankFlorian/hordeOnTumbelweed.git
cd hordeOnTumbelweed
docker-compose -f docker-compose.yml up


This yields a running Horde instance on localhost and a database container.
I needed to perform a little manual setup in the web admin UI to get the DB running and create all the default Horde schemas.

Next I entered the developer container with a shell:
docker exec -it hordeOnTumbelWeed_php_1 bash

There are other ways to work with a container but that’s what I did.

 

Creating a  skeleton app

The container comes with a fairly complete Horde git checkout in /srv/git/horde and a clone of the Horde git tools in /srv/git/git-tools.

A new skeleton app can be created using

horde-git-tools dev new --author "Ralf Lang <lastname@b1-systems.de>" --app-name trustr

The new app needs to be linked to the web directory using

horde-git-tools dev install

Also, a registry entry needs to be created by putting a little file into /srv/git/horde/base/config/registry.d

cat trustr-registry.d.php

<?php
// Copy this example snippet to horde/registry.d
$this->applications['trustr'] = array(
    'name' => _('Certificates'),
    'provides' => array('certificates')
);

This makes the new app show up in the admin menu. To actually use it and make it appear in the topbar, you also need to go to /admin/config and create the config file for this app. Even though the settings don’t actually mean anything yet, the file must be present.
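
As a side note, the 'provides' entry in the registry snippet above is what later lets other Horde applications reach trustr through the inter-app API. A hypothetical consumer call might look like this (a sketch; 'listCertificates' is an assumed method name, nothing is implemented yet):

<?php
// Hypothetical call from another Horde app; 'listCertificates' is an
// assumed method name, not something the skeleton provides yet.
$certificates = $GLOBALS['registry']->call('certificates/listCertificates');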

I hope to follow up soon with articles on the architecture and sub systems of the little app.


Travel Notes: CGK-HKG-TPE

(photo: 2018-08-09 11:41)

This is not my first trip on this route, but it does no harm to write these notes.

My flight left Thursday morning at 8:15 on Cathay Pacific CX-718. I had already checked in online through the website and picked a seat number. Cathay only allows online check-in starting 48 hours before departure. I then chose Muslim Food from Cathay's check-in menu. The Muslim Food option is only available on the HKG-TPE leg, while on CGK-HKG all the food is halal. For the return trip, the Muslim Food option is only available on the TPE-HKG leg, while on HKG-CGK all the food is declared halal, so there is no need to choose again. Always bring a refillable water bottle.

At Terminal 2F, I went straight to check-in and bag drop. Even though the three of us were traveling together, this is best sorted out individually rather than waiting on each other. Then on to immigration, since this part takes quite a while. At the immigration counter my guess proved right: the queue snaked on. It took 15 minutes from joining the queue until my inspection was done. Phew. In the gate area I looked for Mr. Edwin and Mr. Haris, who turned out to be already sitting over coffee and breakfast at Old Town Coffee. Oh, and on international flights there are plenty of fussy rules about carry-on items, among them:

  1. The liquids rule: containers of at most 100 ml, placed in a clear zipper bag.
  2. Before the final X-ray, make sure your water bottle is empty. You can refill it at a water refill station after that X-ray.

Boarding started 40 minutes before departure. The flight takes 5 hours 10 minutes. There is a time difference between Jakarta and Hong Kong: HK is 1 hour ahead, at GMT+8. I immediately set my watch and phone clock to match. I spent the journey sleeping, reading a novel and listening to music. The in-cabin entertainment is adequate.

Oh, and I forget which aircraft type Cathay flies on these routes, but both planes had a universal power socket that can be used to charge a phone. There are two possible spots for it: the first is in the fold of the meal tray; the second is under the seat, where the seats join. Feel around for it gently. A green indicator light shows that the charger can be used; if the light is off, no current is flowing. The light went out a few times, usually before take-off, during turbulence, and before landing.

I get a bit sensitive after a long time in air conditioning, especially on a plane; it irritates my nose and then I get nosebleeds. That is why I always wear a mask on flights longer than 2 hours, plus a thick hooded jacket.

We landed in HK at 14:20 local time. After grabbing my cabin luggage I headed for the transit check-in counter, found a water refill point and drank my fill, then continued to the transit gate. That means going up one floor, followed by another round of screening and cabin-baggage X-ray. Before entering the checkpoint, empty your water bottle to speed up the scanning.

The internet connection at the HK airport gets a thumbs up from me. Just enable Wi-Fi, pick the airport's access point, and accept the terms and conditions. After that you can surf away with no time limit. The speed needs no comment: fast enough.

On to the next departure gate, but first I needed a prayer room. In the HK terminal the prayer room is near Gates 42-43. Prayer equipment, mukena (prayer garments) and a Quran are laid out neatly, and the qibla direction is marked. A place for ablutions is available inside. Note that the prayer room is not exclusively for Muslims, so you may end up quietly sharing the space with followers of other faiths.

After prayers we went looking for food. Halal food is available in the food court around Gate 40 at Old Town Coffee, which serves Malay-peninsula and Indonesian dishes.
I picked the chicken curry at 92 Hong Kong dollars, iced sweet tea included.

(photo: 2018-08-09 16:28)

After eating we moved on to Gate 62, from where we continued the journey to TPE. On the plane we were handed an arrival information form. I had already filled it in online, but writing it again did no harm. So always keep a ballpoint pen in your cabin bag to make filling in declaration forms easy; completing the form while still on the plane speeds you through the immigration queue.

We landed, then hurried to the immigration queue. It took about 20 minutes to get through. Then on to the baggage belt to collect the suitcase. While waiting, I stopped by the toilet and activated the Mi Roaming Taiwan package. What is Mi Roaming? I will tell that story in the next post.

Leaving the airport, we were picked up by Franklin Weng, and the story continues in this post.

Regards,
Estu


A Simple List Render Optimization For React

Photo by George Brynzan on Unsplash

Yesterday I was watching a talk by Ben Ilegbodu at React Alicante called Help! My React App is Slowwwww!, in which Ben discussed some optimizations developers can make to help improve the performance of React applications. He goes over many of the potential bottlenecks that may arise, such as unnecessary DOM updates, reconciliation and unnecessary object creation. It’s a really interesting talk and I encourage you to watch it (link below), but what I found most interesting was his first point about unnecessary DOM updates.

When trying to optimize the performance of web apps, we can look for actions that are bottlenecks and try to minimize the number of times the application performs these actions. It turns out updating the DOM is a very time-consuming operation. It is in fact so time-consuming that React has a process called reconciliation that exists to try and avoid unnecessary updates.

Unfortunately as Ben shows in his talk — and as I will show in this post — there are still situations where reconciliation will not be able to help us. However we don’t need to lose hope because there are some simple tweaks we can make to address the issue.

The 🔑 to Lists

This is a really handy trick you can use to optimize the rendering of list items in React. Suppose you have a page that displays a list of items and is defined as follows:
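
The code embed from the original post is not reproduced here, so here is a minimal sketch of what such a page might look like (an assumed reconstruction; component and item names are illustrative):

class List extends React.Component {
  render() {
    return (
      <ul>
        {this.props.items.map((item, index) => (
          <li key={index}>{item.text}</li>
        ))}
      </ul>
    );
  }
}

class Page extends React.Component {
  state = { items: [] };

  addItem = () => {
    const id = Date.now();
    // insert the new item at the START of the list
    this.setState(({ items }) => ({
      items: [{ id, text: `Item ${id}` }, ...items],
    }));
  };

  render() {
    return (
      <div>
        <button onClick={this.addItem}>Add item</button>
        <List items={this.state.items} />
      </div>
    );
  }
}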

When the button is clicked, it will add an item to the list. This will then trigger an update to the DOM to display our new item along with all the old items. If we look at the DOM inspector while clicking the button we see the following (orange indicates the node is updating):

See how all the list items are updated? If we think about this for a moment this doesn’t actually seem like an ideal update. Why can’t we just insert the new node without having to update all the other nodes? The reason for this has to do with how we are using the map function in the List component.

See how we are setting the key for each list item as the index? The problem here is that React uses the key to determine if the item has actually changed. Unfortunately since the insertion we are doing happens at the start of the list, the indexes of all items in the list are increased by one. This causes React to think there has been a change to all the nodes and so it updates them all.

To work around this we need to modify the map function to use the unique id of each item instead of the index in the array:
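
Continuing the assumed sketch from above, only the key changes:

class List extends React.Component {
  render() {
    return (
      <ul>
        {this.props.items.map(item => (
          // key by the item’s unique id instead of its array position
          <li key={item.id}>{item.text}</li>
        ))}
      </ul>
    );
  }
}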

And now when we click the button we see that the new nodes are being added without updating the old ones:

So what’s the lesson?

Always use a unique key when creating lists in React (and the index is not considered unique)!

The Exception ✋

There can be situations where you do not have a truly unique id in your arrays. The ideal solution is to find some unique key, which may be derived by combining some values in the object. However, in certain cases, like an array of plain strings, this may not be possible or guaranteed. In those cases you may have to fall back on the index as the key.

So there you have it, a simple trick to optimize list rendering in React! 🎉🏎

If you liked this post be sure to follow this blog, follow me on twitter and my blog on dev.to.

P.S. Looking to contribute to an open source project? Come contribute to Saka, we could use the help! You can find the project here: https://github.com/lusakasa/saka


A Simple List Render Optimization For React 🏎 was originally published in Information & Technology on Medium, where people are continuing the conversation by highlighting and responding to this story.


Russian KDE community

At long last, the new engine of the https://kde.ru portal, dedicated to Russian-speaking KDE users and the KDE project in particular, has seen the light of day. And although I myself have not used KDE for a long time, a lot still connects me to this project. My path into the world of free software began with KDE development. While still a student I got acquainted with the GNU project, Linux and Qt3, and later, in Germany, when I started working at SUSE, I developed and ported the KDE4 stack to openSUSE (most of the posts in this blog are about KDE and openSUSE). Back then we were still on svn; remember what that was? 🙂
After SUSE I dropped the KDE project, tried out various window managers and settled on them, until I took a job as a LiMux developer (Kubuntu on top of a custom LDAP-based management system) here in Munich. And although I mostly worked on security projects, it was a pleasure to revisit almost the entire KDE stack that we had created more than 10 years earlier.

It is a very interesting project. Yes, it is big, very big, and that is not always a good thing; nevertheless it has what is, in my view, the most well-thought-out architecture. Programming KDE was always great, not only because of the power of Qt and the very latest tools used in the project, but also thanks to the ideas underlying the structure of its component stack. The community of people developing it has always been very important to the KDE project. It attracted many young engineers, open to new and sometimes even outright crazy ideas. It was so refreshing, and it fit the nature of free software so well. I will never forget those hippie gatherings all over Europe, which we reached at night by the craziest means, and the talks prepared and read to each other just for the fun of it.

I am glad that in Russia, too, there are people who continue developing and promoting this project. I want to wish you luck, folks. Try to preserve this atmosphere that drives you mad and carries you along. I think it is the biggest incentive for software development: passion for the process, the pleasure of a finally-found solution, admiration for its elegance and simplicity. All of this is KDE; all of this is the community of people advancing it.


Announcing Tumbleweed Snapshots Official Hosting

Adapted from announcement to opensuse-factory mailing list:

Tumbleweed Snapshots, fixed repositories containing previously released versions of Tumbleweed, are now officially hosted on download.opensuse.org! For those not familiar with the Tumbleweed Snapshots concept, please see the short introduction video and the presentation I gave at oSC18. The overall concept is to allow Tumbleweed to be consumed as short-lived distributions that can be utilized after a new snapshot is published. The two primary benefits are not having to update just to install new packages, and being able to sit on an older snapshot to avoid problematic package updates. The latter works best with snapper rollbacks, since they allow normal system operation after rolling back.

Given that the proper method for updating between snapshots is zypper dup (the command normally reserved for updating between major releases of a distribution like Leap) rather than zypper up, keeping the previous repositories and switching between them with dup is quite natural. Since the latest snapshot remains accessible and zypper operates normally, there are no downsides to this approach, while it provides some important advantages. If you have not already tried snapshots I would encourage you to do so. Many users already update Tumbleweed once a week and thus align perfectly with the design of snapshots.

The installation step and basic usage are documented in the tumbleweed-cli README. No additional resources are utilized on the local machine. If for whatever reason you want to go back, just run tumbleweed uninit to restore the default repository setup.

If you are already using Tumbleweed Snapshots and would like to switch to the official hosting a migration command is provided in tumbleweed-cli 0.3.0 which will be included in Tumbleweed shortly. As a side-note, bash completion is also provided!

Once running version 0.3.0 (run tumbleweed --version to see which is installed), simply run tumbleweed migrate. Keep in mind the official hosting only provides 10 snapshots, while my personal hosting on S3 provides 50. The count will hopefully be increased in the future, but given how long it has taken to get this far it may be some time. Additionally, my AWS hosting has a CDN set up and will likely remain faster until mirrors decide to host the snapshots. Based on feedback and how things progress I will decide when to stop hosting on AWS, but it will be announced, and tumbleweed-cli will be changed to auto-migrate.
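
For illustration, the whole migration on an already-initialised system boils down to something like this (a sketch based on the commands mentioned above; the status command is my assumption from the README, and output is omitted):

tumbleweed --version   # make sure 0.3.0 or later is installed
tumbleweed migrate     # point the repositories at the official hosting
tumbleweed status      # check which snapshot the system is on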

For those interested, the difference in storage usage can be compared on metrics.opensuse.org.

Making Tumbleweed Snapshots the default is likely worth considering once everything settles and an appropriate level of adoption by mirrors is reached.

It is encouraging to see the enthusiastic discussions about Tumbleweed Snapshots in IRC and e-mails.

Enjoy!


CRI-O is now our default container runtime interface

CRI-O

We’re really excited to announce that, as of today, we officially support the CRI-O Container Runtime Interface as our default way of interfacing with containers on your Kubic systems!

Um that’s great, but what is a Container Runtime Interface?

Contrary to what you might have heard, there are more ways to run containers than just the docker tool. In fact there are an increasing number of options, such as runc, rkt, frakti, cri-containerd and more. Most of these follow the OCI standard defining how the runtimes start and run your containers, but they lack a standard way of interfacing with an orchestrator. This makes things complicated for tools like Kubernetes, which run on top of a container runtime to provide you with orchestration, high availability, and management.

Kubernetes therefore introduced a standard API to be able to talk to and manage a container runtime. This API is called the Container Runtime Interface (CRI).

Existing container runtimes like Docker use a “shim” to interface between Kubernetes and the runtime, but there is another way, using an interface that was designed to work with CRI natively. And that is where CRI-O comes into the picture.

Introduction to CRI-O

Started a little over a year ago, CRI-O began as a Kubernetes incubator project implementing the CRI interface for OCI-compliant runtimes. Using the lightweight runc runtime to actually run the containers, the simplest way of describing CRI-O would be as a lightweight alternative to the Docker engine, especially designed for running with Kubernetes.

As of 6th Sept 2018 CRI-O is no longer an incubator project, but now an official part of the Kubernetes family of tools.

Why CRI-O?

There are a lot of reasons the Kubic project loves CRI-O, but to give a Top 4, some of the largest include:

  • A Truly Open Project: As already mentioned, CRI-O is operated as part of the broader Kubernetes community. There is a broad collection of contributors from companies including Red Hat, SUSE, Intel, Google, Alibaba, IBM and more. The project is run in a way that all these different stakeholders can actively propose changes and can expect to see them merged, or at least spur steps in that direction. The same is harder to say of other similar projects.
  • Lightweight: CRI-O is made of lots of small components, each with specific roles, working together with other pieces to give you a fully functional container experience. In comparison, the Docker Engine is a heavyweight daemon which is communicated to using the docker CLI tool in a client/server fashion. You need to have the Daemon running before you can use the CLI, and if that daemon is dead, so is your ability to do anything with your containers.
  • More Securable: Every container run using the Docker CLI is a ‘child’ of that large Docker Daemon. This complicates or outright prevents the use of tooling like cgroups & security constraints to provide an extra layer of protection to your containers. As CRI-O containers are children of the process that spawned them (not the daemon), they’re fully compatible with these tools without complication. This is not only cool for Kubernetes, but also when using CRI-O with Podman, but more about that later…
  • Aligned with Kubernetes: As an official Kubernetes project, CRI-O releases in lock step with Kubernetes, with similar version numbers, i.e. CRI-O 1.11.x works with Kubernetes 1.11.x. This is hugely beneficial for a project like Kubic where we’re rolling and want to keep everything working together at the latest versions. On the other side of the fence, Kubernetes currently only officially supports Docker 17.03.x, now well over 1 year old and far behind the 18.06.x version we currently have in Kubic.

CRI-O and Kubernetes

Given one of the main roles of Kubic is to run Kubernetes, as of today, Kubic’s kubeadm system role is now designed to use CRI-O by default.

Our documentation has been updated to reflect the new CRI-O way of doing things.

The simplest way of describing it would be that we now have fewer steps than before.
You can now initialise your master node with a single command immediately after installation.
But you need to remember to add --cri-socket=/var/run/crio/crio.sock to your kubeadm init and join commands. (We’re looking into ways to streamline this).
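
For example, a sketch of the two commands (the address, token and hash are placeholders, not real values):

# on the master node
kubeadm init --cri-socket=/var/run/crio/crio.sock

# on each node joining the cluster, based on the join line printed by init
kubeadm join <master-ip>:6443 --cri-socket=/var/run/crio/crio.sock \
    --token <token> --discovery-token-ca-cert-hash sha256:<hash>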

CRI-O and MicroOS

Kubic is about more than Kubernetes, and our MicroOS system role is a perfect platform for running containers on stand-alone machines. That too now includes CRI-O as its default runtime interface.

In order to make use of CRI-O without Kubernetes, you need a command-line tool, and that tool is known as podman. It is now installed by default on Kubic MicroOS.

Podman

Podman has been available in Tumbleweed & Kubic for some time. Put simply, it is to CRI-O what the Docker CLI tool is to the Docker Engine daemon. It even has a very similar syntax.

  • Use podman run to run containers in the same way you’d expect from docker run
  • podman pull pulls containers from registries, just like docker pull, and by default our podman is configured to use the same Docker Hub as many users would expect.
  • Some podman commands have additional functionality compared to their docker equivalents, such as podman rm --all and podman rmi --all which will remove all of your containers and their images respectively.
  • A full crib-sheet of podman commands and their docker equivalents is available

Podman also benefits from CRI-O’s more lightweight architecture. Because every Podman container is a direct child of the podman command that created it, it’s trivial to use podman as part of systemd services. This can be combined with systemd features like socket activation to do really cool things, like starting your container only when users try to access it!
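
As a rough sketch of the idea (this unit is illustrative, not a file shipped by Kubic; the container name, image and port are placeholders):

[Unit]
Description=Example web server container run via podman
Wants=network-online.target
After=network-online.target

[Service]
# remove any stale container with the same name, then run in the foreground
ExecStartPre=-/usr/bin/podman rm -f example-web
ExecStart=/usr/bin/podman run --name example-web -p 8080:80 nginx
ExecStop=/usr/bin/podman stop example-web

[Install]
WantedBy=multi-user.target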

What about Docker?

As excited as we are about CRI-O and Podman, we’re not blind to the reality that many users just won’t care and will be more comfortable running the well known docker tool.

For the basic use case of running containers, both docker and podman can co-exist on a system safely. Therefore it will still be available and installed by default on Kubic MicroOS.
If you wish to remove it, just run transactional-update pkg rm -u docker-kubic and reboot.

The Docker Engine doesn’t co-exist with CRI-O quite so well in the Kubernetes scenario, so we do not install both by default on the kubeadm system role.
We still wish to support users wishing to use the Docker Engine with Kubernetes. Therefore to swap from CRI-O to the Docker Engine just run transactional-update pkg in patterns-caasp-alt-container-runtime -cri-o-kubeadm-criconfig and reboot.

Alternatively if you’re installing Kubic from the installation media you can deselect the “Container Runtime” and instead choose the “Alternative Container Runtime” pattern from the “Software” option as part of the installation.

Regardless of which runtime you choose to use, thanks for using Kubic. Please join in, send us your feedback, code, and other contributions, and remember: have a lot of fun!


Learning from Pain

Pain is something we generally try to avoid; pain is unpleasant, but it also serves an important purpose.

Acute pain can be feedback that we need to avoid doing something harmful to our body, or protect something while it heals. Pain helps us remember the cause of injuries and adapt our behaviour to avoid a repeat.

As a cyclist I occasionally get joint pain that indicates I need to adjust my riding position. If I just took painkillers and ignored the pain I’d permanently injure myself over time.

I’m currently recovering from a fracture after an abrupt encounter with a pothole. The pain is helping me rest and allow time for the healing process. The memory of the pain will also encourage me to consider the risk of potholes when riding with poor visibility in the future.

We have similar feedback mechanisms when planning, building, and running software; we often find things painful.

Alas, rather than learn from pain and let it guide us, we all too often stock up on painkillers in the form of tooling or practices that let us press on obstinately doing the same thing that caused the pain in the first place.

Here are some examples…

Painful Tests

Automated tests can be a fantastic source of feedback that helps us improve our software and learn to write better software in the future. Tests that are hard to write are a sign something could be better.

The tests only help us if we listen to the pain we feel when tests are hard to write and read. If we reach for increasingly sophisticated tooling to allow us to continue doing the painful things, then we won’t realise the benefits. Or worse, if we avoid unit testing in favour of higher level tests, we’ll miss out on this valuable feedback altogether.

Here’s an example of a test that was painful to write and read, testing the sending of a booking confirmation email.

@Test // Full code in link above
public void sendsBookingConfirmationEmail() {
    var emailSender = new EmailSender() {
        String message;
        String to;

        public void sendEmail(String to, String message) {
            this.to = to;
            this.message = message;
        }

        public void sendHtmlEmail(String to, String message) {

        }

        public int queueSize() {
            return 0;
        }
    };

    var support = new Support() {
        @Override
        public AccountManager accountManagerFor(Customer customer) {
            return new AccountManager("Bob Smith");
        }

        @Override
        public void calculateSupportRota() {

        }

        @Override
        public AccountManager superviserFor(AccountManager accountManager) {
            return null;
        }
    };


    BookingNotifier bookingNotifier = new BookingNotifier(emailSender, support);

    Customer customer = new Customer("jane@example.com", "Jane", "Jones");
    bookingNotifier.sendBookingConfirmation(customer, new Service("Best Service Ever"));

    assertEquals("Should send email to customer", customer.email, emailSender.to);
    assertEquals(
        "Should compose correct email",
        emailSender.message,
        "Dear Jane Jones, you have successfully booked Best Service Ever on " + LocalDate.now() + ". Your account manager is Bob Smith"
    );

}
  • The test method is very long at around 50 lines of code
  • We have boilerplate setting up stubbing for things irrelevant to the test such as queue sizes and supervisors
  • We’ve got flakiness from assuming the current date will be the same in two places—the test might not pass if run at midnight, or when changing the time
  • There are multiple assertions for multiple responsibilities
  • We’ve had to work hard to capture side effects

Feeling this pain, one response would be to reach for painkillers in the form of more powerful mocking tools. If we do so we end up with something like this. Note that we haven’t improved the implementation at all (it’s unchanged), but now we’re feeling a lot less pain from the test.

@Test // Full code in link above
public void sendsBookingConfirmationEmail() throws Exception {
    var emailSender = mock(EmailSender.class);
    var support = mock(Support.class);

    BookingNotifier bookingNotifier = new BookingNotifier(emailSender, support);

    LocalDate expectedDate = LocalDate.parse("2000-01-01");
    Customer customer = new Customer("jane@example.com", "Jane", "Jones");
    when(support.accountManagerFor(customer)).thenReturn(new AccountManager("Bob Smith"));
    mockStatic(LocalDate.class, args -> expectedDate);

    bookingNotifier.sendBookingConfirmation(customer, new Service("Best Service Ever"));

    verify(emailSender).sendEmail(
        customer.email,
        "Dear Jane Jones, you have successfully booked Best Service Ever on 2000-01-01. Your account manager is Bob Smith"
    );

}
  • The test method is a quarter the length—but the implementation is as complex
  • The flakiness is gone as the date is mocked to a constant value—but the implementation still has a hard dependency on the system time.
  • We’re no longer forced to stub irrelevant detail—but the implementation still has dependencies on collaborators with too many responsibilities.
  • We only have a single assertion—but there are still as many responsibilities in the implementation
  • It’s easier to capture the side effects—but they’re still there

A better response would be to reflect on the underlying causes of the pain. Here’s one direction we could go that removes much of the pain and doesn’t need complex frameworks:

@Test // Full code in link above
public void composesBookingConfirmationEmail() {

    AccountManagers dummyAllocation = customer -> new AccountManager("Bob Smith");
    Clock stoppedClock = () -> LocalDate.parse("2000-01-01");

    BookingNotificationTemplate bookingNotifier = new BookingNotificationTemplate(dummyAllocation, stoppedClock);

    Customer customer = new Customer("jane@example.com", "Jane", "Jones");

    assertEquals(
        "Should compose correct email",
        bookingNotifier.composeBookingEmail(customer, new Service("Best Service Ever")),
        "Dear Jane Jones, you have successfully booked Best Service Ever on 2000-01-01. Your account manager is Bob Smith"
    );

}
  • The test method is shorter, and the implementation does less
  • The flakiness is gone as the implementation no longer has a hard dependency on the system time
  • We’re no longer forced to stub irrelevant detail because the implementation only depends on what it needs
  • We only have a single assertion, because we’ve reduced the scope of the implementation to merely composing the email. We’ve factored out the responsibility of sending the email.
  • We’ve factored out the side effects so we can test them separately
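
For context, the lambdas in the final test imply single-method collaborator interfaces roughly like these (an assumed sketch; the full code is in the link above):

// assumed shapes of the collaborators, inferred from the lambdas in the test
interface AccountManagers {
    AccountManager accountManagerFor(Customer customer);
}

interface Clock {
    LocalDate today();
}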

My point is not that the third example is perfect (it’s quickly thrown together), nor am I arguing that mocking frameworks are bad. My point is that by learning from the pain (rather than rushing to hide it with tooling before we’ve learnt anything) we can end up with something better.

The pain we feel when writing tests can also be a prompt to reflect on our development process—do we spend enough time refactoring when writing the tests, or do we move onto the next thing as soon as they go green? Are we working in excessively large steps that let us get into messes like the above that are painful to clean up?

N.B. there are lots of better examples of learning from test feedback in chapter 20 of the GOOS book.

Painful Dependency Injection

Dependency injection seems to have become synonymous with frameworks like Spring, Guice, and Dagger, as opposed to the relatively simple idea of “passing stuff in”. Often people reach for dependency injection frameworks out of habit, but sometimes they’re used as a way of avoiding design feedback.

If you start building a trivial application from scratch you’ll likely not feel the need for a dependency injection framework at the outset. You can wire up your few dependencies yourself, passing them to constructors or function calls.

As complexity increases this can become unwieldy, tedious, even painful. It’s easy to reach for a dependency injection framework to magically wire all your dependencies together to remove that boilerplate.

However, doing so prematurely can deprive you of the opportunity to listen to the design feedback that this pain is communicating.

Could you reduce the wiring pain through increased modularity—adding, removing, or finding better abstractions?

Does the wiring code have more detail than you’d include in a document explaining how it works? How can you align the code with how you’d naturally explain it? Is the wiring code understandable to a domain expert? How can you make it more so?

Here’s a little example of some manual wiring of dependencies. While short, it’s quite painful:

// Full code in link above
public static void main(String... args) {
    var credentialStore = new CredentialStore();

    var eventStore = new InfluxDbEventStore(credentialStore);

    var probeStatusReporter = new ProbeStatusReporter(eventStore);

    var probeExecutor = new ProbeExecutor(new ScheduledThreadPoolExecutor(2), probeStatusReporter, credentialStore, new ProbeConfiguration(new File("/etc/probes.conf")));

    var alertingRules = new AlertingRules(new OnCallRota(new PostgresRotaPersistence(), LocalDateTime::now), eventStore, probeStatusReporter);

    var pager = new Pager(new SMSGateway(), new EmailGateway(), alertingRules, probeStatusReporter);

    var dashboard = new Dashboard(alertingRules, probeExecutor, new HttpsServer());
}
  • There’s a lot of components to wire together
  • There’s a mixture of domain concepts and details like database choices
  • The ordering is difficult to get right to resolve dependencies, and it obscures intent

At this point we could reach for a DI framework and @Autowire or @Inject these dependencies and the wiring pain would disappear almost completely.

However, if instead we listen to the pain, we can spot some opportunities to improve the design. Here’s an example of one direction we could go:

// Full code in link above
public static void main(String... args) {

    var probeStatus = probeExecutor();
    var probeVisibility = visibilityOf(probeStatus);
    var dashboard = dashboardFor(probeVisibility);
    var pager = pagerFor(probeVisibility);

}

private static ProbeVisibility visibilityOf(ProbeStatusReporter probeStatus) {
    var credentialStore = new CredentialStore();
    var eventStore = new InfluxDbEventStore(credentialStore);
    AlertingRules alertingRules = new AlertingRules(new OnCallRota(new PostgresRotaPersistence(), LocalDateTime::now), eventStore, probeStatus);
    return new ProbeVisibility(alertingRules, probeStatus);
}

static class ProbeVisibility {
    AlertingRules alertingRules;
    ProbeStatusReporter probeStatus;

    public ProbeVisibility(AlertingRules alertingRules, ProbeStatusReporter probeStatus) {
        this.alertingRules = alertingRules;
        this.probeStatus = probeStatus;
    }
}

private static Pager pagerFor(ProbeVisibility probeVisibility) {
    return new Pager(new SMSGateway(), new EmailGateway(), probeVisibility.alertingRules, probeVisibility.probeStatus);
}

private static Dashboard dashboardFor(ProbeVisibility probeVisibility) {
    return new Dashboard(probeVisibility.alertingRules, probeVisibility.probeStatus, new HttpsServer());
}

private static ProbeStatusReporter probeExecutor() {
    var credentialStore = new CredentialStore();
    var eventStore = new InfluxDbEventStore(credentialStore);

    var probeStatusReporter = new ProbeStatusReporter(eventStore);
    var executor = new ProbeExecutor(new ScheduledThreadPoolExecutor(2), probeStatusReporter, credentialStore, new ProbeConfiguration(new File("/etc/probes.conf")));
    executor.start();
    return probeStatusReporter;
}
  • We’ve spotted and fixed the dashboard’s direct dependency on the probe executor, it now uses the status reporter like the pager.
  • The dashboard and pager shared a lot of wiring as they had a common purpose in providing visibility on the status of probes. There was a missing concept here, adding it has simplified the wiring considerably.
  • We’ve separated the wiring of the probe executor from the rest.

After applying these refactorings the top level wiring reads more like a description of our intent.

Clearly this is just a toy example, and the refactoring is far from complete, but I hope it illustrates the point: dependency injection frameworks are useful, but be aware of the valuable design feedback they may be hiding from you.

Painful Integration

It’s common to experience “merge pain” when trying to integrate long lived branches of code and big changesets to create a releasable build. Sometimes the large changesets don’t even pass tests, sometimes your changes conflict with changes others on the team have made.

One response to this pain is to reach for increasingly sophisticated build infrastructure to hide some of the pain. Infrastructure that continually runs tests against branched code, or continually checks merges between branches can alert you to problems early. Sadly, by making the pain more bearable, we risk depriving ourselves of valuable feedback.

Ironically continuous-integration tooling often seems to be used to reduce the pain felt when working on large, long lived changesets; a practice I like to call “continuous isolation”.

You can’t automate away the human feedback available when integrating your changes with the rest of the team—without continuous integration you miss out on others noticing that they’re working in the same area, or spotting problems with your approach early.

You also can’t replace the production feedback possible from integrating small changes all the way to production (or a canary deployment) frequently.

Sophisticated build infrastructure can give you the illusion of safety by hiding the pain from your un-integrated code. By continuing to work in isolation you risk more substantial pain later when you integrate and deploy your larger, riskier changeset. You’ll have a higher risk of breaking production, a higher risk of merge conflicts, as well as a higher risk of feedback from colleagues being late, and thus requiring substantial re-work.

Painful Alerting

Over-alerting is a serious problem; paging people spuriously for non-existent problems or issues that do not require immediate attention undermines confidence, just like flaky test suites.

It’s easy to respond to overalerting by paying less and less attention to production alerts until they are all but ignored. Learning to ignore the pain rather than listening to its feedback.

Another popular reaction is to desire increasingly sophisticated tooling to handle the flakiness—from flap detection algorithms, to machine learning, to people doing triage. These often work for a while—tools can assuage some of the pain, but they don’t address the underlying causes.

The situation won’t significantly improve without a feedback mechanism in place, where you improve both your production infrastructure and approach to alerting based on reality.

The only effective strategy for reducing alerting noise that I’ve seen is: every alert results in somebody taking action to remediate it and stop it happening again—even if that action is to delete the offending alerting rule or amend it. Analyse the factors that resulted in the alert firing, and make a change to improve the reliability of the system.

Yes, this sometimes does mean more sophisticated tooling when it’s not possible to prevent the alert firing in similar spurious circumstances with the tooling available.

However it also means considering the alerts themselves. Did the alert go off because there was an impact to users, the business, or a threat to our error budget that we consider unacceptable? If not, how can we make it more reliable or relevant?

Are we alerting on symptoms and causes rather than things that people actually care about?
Who cares about a server dying if no users are affected? Who cares about a traffic spike if our systems handle it with ease?

We can also consider the reliability of the production system itself. Was the alert legitimate? Maybe our production system isn’t reliable enough to run (without constant human supervision) at the level of service we desire? If improving the sophistication of our monitoring is challenging, maybe we can make the system being monitored simpler instead?

Getting alerted or paged is painful, particularly if it’s in the middle of the night. It’ll only get less painful long-term if you address the factors causing the pain rather than trying hard to ignore it.

Painful Deployments

If you’ve been developing software for a while you can probably regale us with tales of breaking production. These anecdotes are usually entertaining, and people enjoy telling them once enough time has passed that it’s not painful to re-live the situation. It’s fantastic to learn from other people’s painful experiences without having to live through them ourselves.

It’s often painful when you personally make a change and it results in a production problem, at least at the time—not something you want to repeat.

Making a change to a production system is a risky activity. It’s easy to associate the pain felt when something goes wrong, with the activity of deploying to production, and seek to avoid the risk by deploying less frequently.

It’s also common to indulge in risk-management theatre: adding rules, processes, signoff and other bureaucracy—either because we mistakenly believe it reduces the risk, or because it helps us look better to stakeholders or customers. If there’s someone else to blame when things go wrong, the pain feels less acute.

Unfortunately, deploying less frequently results in bigger changes that we understand less well; inadvertently increasing risk in the long run.

Risk-management theatre can even threaten the ability of the organisation to respond quickly to the kind of unavoidable incidents it seeks to protect against.

Yes, most production issues are caused by an intentional change made to the system, but not all are. Production issues get caused by leap second bugs, changes in user behaviour, spikes in traffic, hardware failures and more. Being able to rapidly respond to these issues and make changes to production systems at short notice reduces the impact of such incidents.

Responding to the pain of deployments that break production by changing production less often, is pain avoidance rather than addressing the cause.

Deploying to production is like bike maintenance. If you do it infrequently it’s a difficult job each time and you’re liable to break something. Components seize together, the procedures are unfamiliar, and if you don’t test-ride it when you’re done then it’s unlikely to work when you want to ride. If this pain leads you to postpone maintenance, then you increase the risk of an accident from a worn chain or ineffective brakes.

A better response with both bikes and production systems is to keep them in good working order through regular, small, safe changes.

With production software changes we should think about how we can make it a safe and boring activity—how can we reduce the risk of deploying changes to production, or how can we reduce the impact of deploying bad changes to production.

Could the production failure have been prevented through better tests?

Would the problem have been less severe if our production monitoring had caught it sooner?

Might we have spotted the problem ourselves if we had a culture of testing in production and were actually checking that our stuff worked once in production?

Perhaps canary deploys would reduce the risk of a business-impacting breakage?

Would blue-green deployments reduce the risk by enabling swift recovery?

Can we improve our architecture to reduce the risk of data damage from bad deployments?

There are many many ways to reduce the risk of deployments, we can channel the pain of bad deployments into improvements to our working practices, tooling, and architecture.

Painful Change

After spending days or weeks building a new product or feature, it’s quite painful to finally demo it to the person who asked for it and discover that it’s no longer what they want. It’s also painful to release a change into production and discover it doesn’t achieve the desired result, maybe no-one uses it, or it’s not resulting in an uptick to your KPI.

It’s tempting to react to this by trying to nail down requirements first before we build. If we agree exactly what we’re building up front and nail down the acceptance criteria then we’ll eliminate the pain, won’t we?

Doing so may reduce our own personal pain—we can feel satisfied that we’ve consistently delivered what was asked of us. Unfortunately, reducing our own pain has not reduced the damage to our organisation. We’re still wasting time and money by building valueless things. Moreover, we’re liable to waste even more of our time now that we’re not feeling the pain.

Again, we need to listen to what the pain’s telling us; what are the underlying factors that are leading to us building the wrong things?

Fundamentally, we’re never going to have perfect knowledge about what to build, unless we’re building low value things that have been built many times before. So instead let’s try to create an environment where it’s safe to be wrong in small ways. Let’s listen to the feedback from small pain signals that encourage us to adapt, and act on it, rather than building up a big risky bet that could result in a serious injury to the organisation if we’re wrong.

If we’re frequently finding we’re building the wrong things, maybe there are things we can change about how we work, to see if it reduces the pain.

Do we need to understand the domain better? We could spend time with domain experts, and explore the domain using a cheaper mechanism than software development, such as eventstorming.

Perhaps we’re not having frequent and quality discussions with our stakeholders? Sometimes minutes of conversation can save weeks of coding.

Are we not close enough to our customers or users? Could we increase empathy using personas, or attending sales meetings, or getting out of the building and doing some user testing?

Perhaps having a mechanism to experiment and test our hypotheses in production cheaply would help?

Are there lighter-weight ways we can learn that don’t involve building software? Could we try selling the capabilities optimistically, get feedback from paper prototypes, or hack together a UI facade and put it in front of some real users?

We can listen to the pain we feel when we’ve built something that doesn’t deliver value, and feed it into improving not just the product, but also our working practices and habits. Let’s make it more likely that we’ll build things of value in the future.

Acute Pain

Many people do not have the privilege of living pain-free most of the time, sadly we have imperfect bodies and many live with chronic pain. Acute pain, however, can be a useful feedback mechanism.

When we find experiences and day to day work painful, it’s often helpful to think about what’s causing that pain and, what we can do to eliminate the underlying causes, before we reach for tools and processes to work around or hide the pain.

Listening to small amounts of acute pain, looking for the cause and taking action sets up feedback loops that help us improve over time; ignoring the pain leads to escalating risks that build until something far more painful happens.

What examples do you have of people treating the pain rather than the underlying causes?

The post Learning from Pain appeared first on Benji's Blog.


Nextcloud 14 and Video Verification

Today the Nextcloud community released Nextcloud 14. This release comes with a ton of improvements in the areas of User Experience, Accessibility, Speed, GDPR compliance, 2 Factor Authentication, Collaboration, Security and many other things. You can find an overview here.

But there is one feature I want to highlight because I find it especially noteworthy and interesting. Some people ask us why we are doing more than the classic file sync and share. Why do we care about Calendar, Contacts, Email, Notes, RSS Reader, Deck, Chat, Video and audio calls and so on?

It all fits together

The reason is that I believe a lot of these features belong together. There is a huge benefit in an integrated solution. This doesn’t mean that everyone needs and wants all features, which is why we make it possible to switch each of them off, so that you and your users only have the functionality available that you really want. But there are huge advantages to deep integration. This is very similar to the way KDE and GNOME integrate all applications together on the desktop, or how Office 365 and Google Suite integrate cloud applications.

The why of Video Verification

The example I want to talk about for this release is Video Verification. It is a solution for a problem that was unsolved until now.

Let’s imagine you have a very confidential document that you want to share with one specific person and only this person. This can be important for lawyers, doctors or bank advisors. You can send the sharing link to the email address you have for this person, but you can’t be sure that it reaches this person and only exactly this person. You don’t know if the email is seen by the mail server admin, the kid who plays with the smartphone, the spouse, or the hacker who has hijacked the PC or the mail account of the recipient. The document is transmitted via encrypted HTTPS of course, but you don’t know who is on the other side. Even if you send the password via another channel, you can’t have 100% certainty.

Let’s see how this is done in other cases.

TLS solves two problems for HTTPS. The data transfer is encrypted with strong encryption algorithms, but this is not enough. Additionally, certificates are used to make sure that you are actually talking to the right endpoint on the other side of the HTTPS connection. It doesn’t help to securely communicate with what you think is your bank but is actually an attacker!

In GPG-encrypted email the encryption is done with strong and proven algorithms. But additional key signing is needed to make sure that the key is owned by the right person.

This second part, the verification of the identity of the recipient, has been missing in file sync and share until now. Video Verification solves that.

How it works

I want to share a confidential document with someone. In the Nextcloud sharing dialog I type in the email of the person and activate the option ‘Password via Talk’; then I can set a password to protect the document.

The recipient gets a link to the document by email. Once the person clicks on the link, they see a screen that asks for a password. They can click on the ‘request password’ button, and a sidebar opens which initiates a Nextcloud Talk call to me. I get a notification about this call in my web interface and via my Nextcloud desktop client, or, most likely to get my attention, my phone will ring because the Nextcloud app on my phone got a push notification. I answer my phone and have an end-to-end encrypted, peer-to-peer video call with the recipient of the link. I can verify that this is indeed the right person, maybe because I know the person or because the person holds up a personal picture ID. Once I’m sure it is the right person, I tell them the password to the document. They type in the password and have access to the document.

This procedure is of course over the top for normal, standard shares. But if you are dealing with very confidential documents, because you are a doctor, a lawyer, a bank or a whistleblower, then this is the only way to make sure that the document reaches the right person.

I’m happy and a bit proud that the Nextcloud community is able to produce such innovative features that don’t even exist in proprietary solutions.

As always all the server software, the mobile apps and the desktop clients are 100% open source, free software and can be self hosted by everyone. Nextcloud is a fully open source community driven project without a contributor agreement or closed source and proprietary extensions or an enterprise edition.

You can get more information on nextcloud.com or contribute at github.com/nextcloud

darix posted in English

PHP-FPM AppArmor

I haven’t been using Apache for a few years now … oh wow, maybe more like 15 years now.

There was one feature I really liked with Apache and AppArmor though … mod_changehat. The module lets me assign different AppArmor hats to different Apache scopes, so you could ensure that your WordPress vhost cannot access the files of your Nextcloud vhost even though they are running in the same Apache.