the avatar of Martin de Boer

5 best features of the Dolphin file manager

I like Dolphin, the default KDE file manager, a lot. I have used it for over 9 years. There are many features that I use regularly and wouldn’t want to live without. In this post, I’d like to share my personal top 5. Some features are more obvious, others are more hidden.

1) Split windows

The split window view can be opened with the F3 key or by clicking on the Split icon. This is great for copying files from one location to another. Just drag and drop the files from left to right (or vice versa).

2) Integrated terminal

The built-in terminal can be opened by pressing the F4 key or by clicking on Control –> Panels –> Terminal. This is handy for the few Linux applications that need to be downloaded from an external website, for example the Oneplay Codec pack and Codeweavers Crossover Linux. The terminal automatically opens in the location that you are visiting, allowing you to directly enter zypper CLI commands.
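
For example, after downloading such a package into the folder you are viewing, installing it from the integrated terminal is a single command (the file name below is just a placeholder):

sudo zypper install ./downloaded-package.rpm

Since the terminal already opened in the download location, there is no need to change directories first.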

3) Show hidden files

Sometimes you want to edit or remove a configuration file. These files are often located in hidden folders in your Home directory. By showing the hidden files, you are able to navigate to these locations with ease.

4) Open With

The ability to open a file with the application of your choice is very handy. I like to open images with Gwenview by default. But for editing, I like to open them with Kolourpaint or GIMP. This feature allows me to do this with minimal effort.

5) Previews

You would think that this feature would be no. 1, but unfortunately Preview didn’t always work as advertised. This was mainly a problem during the Qt 4 days (2009 – 2014); the Preview function has been working without issues since the Qt 5 version of Dolphin. You can enable previews by clicking on the associated button. You can use the slider in the bottom right to increase or decrease the size of the previews.

Published on: 21 October 2018

the avatar of Klaas Freitag

Kraft Version 0.82

A new release of Kraft, the Qt- and KDE-based software that helps small companies organize their business documents, has arrived.

A couple of days ago version 0.82 was released. It is mainly a bugfix release, but it also comes with a few new features. Users had been asking for functions they needed in order to switch their business communication over to Kraft, and I always try to make that a priority.

The most visible feature is a light rework of the calculation dialog that allows users to do price calculations for templates. It was cleaned up, superfluous elements were finally removed and the remaining ones now work as expected. The distinction between manual price and calculated price should be even clearer now. Time calculations are no longer limited to a granularity of minutes, as that was too coarse for certain use cases: the unit for a time slice can now be seconds, minutes or hours.

New calculation dialog in Kraft 0.82

Apart from that, sending documents by email was fixed, and in addition to doing it through Thunderbird, Kraft can now also use the xdg-email tool to work with the desktop's standard mail client, such as KMail.
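
To give a rough idea of what the xdg-email route looks like, here is a hand-written call along the same lines (the subject, file name and address are made up for illustration):

xdg-email --subject "Invoice 2018-042" --body "Please find the invoice attached." --attach ~/invoice-2018-042.pdf customer@example.com

Whatever mail client the desktop has configured, KMail or otherwise, then opens a pre-filled compose window.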

Quite a few more bugfixes make this a nice release. Check the full Changelog! Updating is recommended.

Thanks for your comments or suggestions about Kraft!

the avatar of Efstathios Iosifidis

FOSSCOMM 2018 aftermath

FOSSCOMM 2018

FOSSCOMM (Free and Open Source Software Communities Meeting) is the pan-Hellenic conference of free and open source software communities. It is aimed at programmers, students and anyone else interested in the open source movement, regardless of their background. This year it was hosted by our friends in Heraklion, Crete.

Personally, I've been to many conferences during the year and I wanted to present the projects I'm rooting for.

I applied for 3 talks, all on Saturday.

My first talk was about GNOME. It was based on a talk from FOSSCOMM 2013, but with new information, and of course the main reason was to promote GUADEC 2019. The files of the talk are on GitHub.

My friend Kyriakos interviewed me about GNOME. Here you can find the video:


My second talk was about Nextcloud. Usually I give a general talk about Nextcloud and then Panteleimon Sarantos follows with Nextcloud Pi. This year, I preferred to use Frank's keynote from the Nextcloud conference. I would like to thank him for providing me with his files. I made some minor changes. You can find the file on GitHub.

My friend Kyriakos interviewed me about Nextcloud. Here you can find the video:


Finally, a very cool project that I've been fired up about lately is GNU Health, so my third and final talk was about it. The reason I'm so enthusiastic is that openSUSE sponsors the project, and also that it's closer to my profession. I would like to thank Dr Luis Falcon and Axel Braun. I met them at openSUSE Conference 2018 and I liked the project. I wanted to get involved, so I started with promotion and translation. I asked them for their presentation files and made a mix of them. The files I used are on GitHub.

This conference isn't only about presenting; it's also about socializing. I met my FOSS friends, whom I talk to over the internet and usually get to see in person once a year.

I had a booth with promo materials from all 3 projects, plus some that I got from my trips to other conferences abroad. Since I was there to promote GUADEC, I invited as many people as I could to volunteer and also tried to find as many sponsors as possible.

Here are some pictures...

FOSSCOMM day 1, openSUSE-GNOME-GNUHealth booth
FOSSCOMM Day 1 at the booth

FOSSCOMM GNOME presentation
FOSSCOMM minutes before my GNOME presentation.

FOSSCOMM GNUHealth presentation
GNUHealth presentation

FOSSCOMM day 2, openSUSE-GNOME-GNUHealth booth
FOSSCOMM Day 2 at the booth

FOSSCOMM, openSUSE-GNOME-GNUHealth booth with promo materials
FOSSCOMM booth, promo materials


When the talks are uploaded, I will add them here.

I would like to thank the GNOME Foundation for sponsoring my trip to Heraklion.

Sponsored by GNOME Foundation


Pleased to flash you, hope you change my name...

Remember that time when ATI employees tried to re-market their atomBIOS bytecode as scripts?

You probably don't, it was a decade ago.

It was in the middle of the RadeonHD versus -ati shitstorm. One was the original, written in actual C poking display registers directly, and depending on ATI's atomBIOS as little as practical. The other was the fork, implementing everything the fglrx way, and they stopped at nothing to smear the real code (including vandalizing the radeonhd repo, _after_ radeonhd had died). It was AMD teamed with SUSE versus ATI teamed with Red Hat. From a software and community point of view, it was real open source code and the goal of technical excellence, versus spite and grandstanding. From an AMD versus ATI point of view, it was trying to gain control over and clean up a broken company, and limiting the damage to server sales, versus fighting the new "corporate" overlord and fighting to keep the old (ostensibly wrong) ways of working.

The RadeonHD project started in April 2007, when we started working on a proposal for an open source driver. AMD management loved it, and supported innovations such as fully free docs (the first time since 3dfx went bust in 1999), and we started coding in July 2007. This is also when we were introduced to John Bridgman. At SUSE, we were told that John Bridgman was there to provide us with the information we needed, and to make sure that the required information would go through legal and be documented in public register documents. As an ATI employee, he had previously been tasked by AMD to provide working documentation infrastructure inside ATI (or bring one into existence?). From the very start, John Bridgman was underhandedly working on slowly killing the RadeonHD project. First by endlessly stalling and telling a different lie every week about why he did not manage to get us information this time round either. Later, when the RadeonHD driver did make it out to the public, by playing a very clear double game, specifically by supporting a competing driver project (which did things the ATI way) and publicly deriding or understating the role of the AMD-backed SUSE project.

In November 2007, John Bridgman hired Alex Deucher, a supposed open source developer and x.org community member. While the level of support Bridgman had from his own ATI management is unclear to me, to AMD management he claimed that Alex was only there to help out the AMD sponsored project (again, the one with real code, and public docs), and that Alex was only working on the competing driver in his spare time (yeah right!). Let's just say that John slowly "softened" his claim there over time, as this is how one cooks a frog, and Mr Bridgman is an expert on cooking frogs.

One particularly curious instance occurred in January 2008, when John and Alex started to "communicate" differently about atomBIOS, specifically by consistently referring to it as a set of "scripts". You can see one shameful display here on Alex's (now defunct) blog. He did this as part of a half-arsed series of write-ups trying to educate everyone about graphics drivers... Starting of course with... those "scripts" called atomBIOS...

I of course responded with a blog entry myself. Here is a quote from it: "At no point do AtomBIOS functions come close to fitting the definition of script, at least not as we get them. It might start life as "scripts", but what we get is the bytecode, stuck into the ROM of our graphics cards or our mainboard." (libv, 2008-01-28)

At the same time, Alex and John were busy renaming the C code for r100-r4xx to "legacy", implying both that these old graphics cards were too old to support, and that actual C code is the legacy way of doing things. "The warping of these two words make it impossible to deny: it is something wrong, that somehow has to be made to appear right." (libv, 2008-01-29) Rewriting an obvious truth... Amazing behaviour from someone who was supposed to be part of the open source community.

At no point did ATI provide us with any tools to alter these so-called scripts. They provided only the interpreter for the bytecode, the bare minimum of what it took for the rest of the world to actually use atomBIOS. There never was atomBIOS language documentation. There was no tooling for converting to and from the bytecode. There never was tooling to alter or create PCI BIOS images from said atomBIOS scripts. And no open source tool to flash said BIOSes to the hardware was available. There was an obscure atiflash utility doing the rounds on the internet, and that tool still exists today, but it is not even clear who the author is. It has to be ATI, but it is all very clandestine; i think it is safe to assume that some individuals at some card makers sometimes break their NDAs and release it.

The only tool for looking at atomBIOS is Matthias Hopf's excellent atomdis. He wrote this in the first few weeks of the RadeonHD project. This became a central tool for RadeonHD development, as it gave us the insight into how things fit together. Yes, we did have register documentation, but Mr. Bridgman had twice given us 500 pages of "this bit does that" (the same ones made public a few months later), in the hope that we would not see the forest for the trees. Atomdis, register docs, and the (then) most experienced display driver developer on the planet (by quite a margin) made RadeonHD a viable driver in record time, and it gave ATI no way back from an open source driver. When we showed Mr. Bridgman the output of atomdis in September 2007, he was amazed just how readable its output was compared to ATI internal tools. So much for scripts, eh?

I have to confess though that i convinced Matthias to hold off making atomdis public, as i knew that people like Dave Airlie would use it against us otherwise (as they ended up doing with everything else, they just did not get to use atomdis for this purpose as well). In Q2 2009, after the RadeonHD project was well and truly dead, Matthias brought up the topic again, and i wholeheartedly agreed to throw it out. ATI and the forkers had succeeded anyway, and this way we would give others a tool to potentially help them move on from atomBIOS. Sadly, not much has happened with this code since.

One major advantage of full openness is the ability to support future use cases that were unforeseeable at the time of hardware, code or documentation release. One such use case is the (still) rather common use of GPUs for cryptocurrency "mining" today. This was not a thing back in 2007-2009 when we were doing RadeonHD. It was not a thing when AMD created a GPGPU department out of a mix of ATI and new employees (this is the department that has been pushing out 3D ISA information ever since). It was also not considered when ATI stopped the flow of (non shader ISA) documentation back in Q1 2009 (coinciding with RadeonHD dying). We had only just gotten to the point of providing a 3d driver then, and never got near considering pushing for open firmware for Radeon GPUs. Fully open power management and fully open 3d engine firmware would mean that both could be optimised to provide a measurable boost to a very specific use case, which cryptocurrency mining usually is. By retreating into its proprietary shell with the death of the RadeonHD driver, and by working on the blob based fig-leaf driver to keep the public from hating the fglrx driver as much, ATI has denied us the ability to gain those few tenths of a percent, or even whole percents, that could likely be gained from optimised support.

Today though, miners are mostly just altering the GPU and memory frequencies and voltages in their atomBIOS based data tables (not touching the function tables). They tend to use a binary-only GUI tool to edit those values. Miners also extensively use the rather horrible atiflash. That all seems very limited compared to what could have been if AMD had not lost the internal battle.

So much for history, as giving ATI the two-fingered salute is actually more of an added bonus for me. I primarily wanted to play around with some newer ideas for tracing registers, and i wanted to see how much more effective working with Capstone would make me. Since SPI chips are simple hardware, and SPI engines are just as trivial, ATI SPI engines seemed like an easy target. The fact that there is one version of atiflash (that made it out) that runs under Linux made this an obvious and fun project. So i spent the last month, on and off, instrumenting this binary, flexing my REing muscle.

The information i gleaned is now stuck into flashrom, a tool to which i have been contributing since 2007, from right before i joined SUSE. I even held a coreboot devroom at FOSDEM in 2010, and had a talk on how to RE BIOSes to figure out board-specific flash enables. I then was on a long flashrom hiatus from 2010 until earlier this year, when i was solely focused on ARM. But a request for a quick board enable got me into flashrom again. Artificial barriers are made to be breached, and it is fun and rewarding to breach them, and board enables always were a quick fix.

The current changes are still in review at review.coreboot.org, but i have a github clone for those who want immediate and simple access to the complete tree.

To use the ati_spi programmer in flashrom you need to use:
./flashrom -pati_spi
and then specify the usual flashrom arguments and commands.
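
As a minimal sketch of a typical session (the file names are only examples), you would first read back the current flash contents before writing anything:

./flashrom -pati_spi -r bios-backup.rom
./flashrom -pati_spi -w bios-modified.rom

The first command probes the card's SPI controller and dumps the chip contents to a file; the second writes a modified image back, with flashrom verifying the result as it does by default.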

Current support is limited to the hardware i have in my hands today, namely Rx6xx, and only spans the hardware that was obviously directly compatible with the Rx6xx SPI engine and GPIO block. This list of a whopping 182 devices includes:
* rx6xx: the Radeon HD2xxx and HD3xxx series, released April 2007
* rx7xx: or HD4xxx, released June 2008.
* evergreen: or HD5xxx, released September 2009.
* northern island: HD6xxx series, released October 2010
* Lombok, part of the Southern Island family, released in January 2012.

The other Southern Islanders have some weird IO read hack that i need to go test.

I have just ordered 250 EUR worth of used cards that should extend support across all PCIe devices all the way through to Polaris. Vega is still out of reach, as that is prohibitively expensive for a project that is not going to generate any revenue for yours truly anyway (FYI, i am self-employed these days, and i now need to find a balance between fun, progress and actual revenue, and i can only do this sort of thing during downtime between customer projects). According to pci.ids, Vega 12 and 20 are not out yet, but they too will need specific changes when they come out. Having spent all that time instrumenting atiflash, I do have enough info to quickly cover the entire range.

One good thing about targeting AMD/ATI again is that I have been blackballed there since 2007 anyway, so i am not burning any bridges that were not already burned a decade ago \o/

If anyone wants to support this or related work, either as a donation, or as actually invoiced time, drop me an email. If any miners are interested in specifically targeting Vega, soon, or doing other interesting things with ATI hardware, I will be happy to set up a Coinbase wallet. Get in touch.

Oh, and test out flashrom, and report back so we can update the device table to "Tested, OK" status, as there are currently 180 entries ranked as "Not-Tested", a number that is probably going to grow quite a bit in the next few weeks as hardware trickles in :)

the avatar of Martin de Boer

Basic photo editing with Darktable

When I was starting in the world of amateur digital photography, I learned about 2 software programs that everybody was using: Adobe Photoshop and Adobe Lightroom. In the Free and Open Source world, there are 2 amazing alternatives: GIMP and Darktable. GIMP is aimed at destructive (the image is changed when saved) photo editing. It can be used to change photos in many ways, including cutting things out of the picture or pasting things into the picture. Darktable is aimed at non-destructive photo editing. I like to compare it to developing a photo. The negative is kept, and the photo (the positive) is developed from the negative. For many people, getting started can be daunting. Therefore I’d like to present the absolute basics of photo editing with Darktable.

Tutorial

First open Darktable. When the program is opened, you need to load a photo collection. You can do this for many photos at once by clicking on ‘folder’ in the left sidebar in the ‘import’ section.

Select the folder in which you have stored the RAW files of your digital camera. Click on open.

Note: If you only take JPG pictures with your digital camera, Darktable is not for you! You should look into GIMP.

Now your photos will load in the Lighttable section of Darktable. In the top right of the program, you can see (highlighted) in which section of Darktable you are currently working. To start editing a photo, double click on the image.

Now you are automatically moved into the darkroom section of Darktable. Underneath the sections you see a histogram (the colored chart) and below that a row of buttons. These buttons represent the groups into which the functions of Darktable are organized. Now click on the ‘basic group’ button. Then click on ‘white balance’.

The easiest way to get the white balance right is to find a white spot on your photo and select it. To be able to do this, you first need to go into the white balance presets and change the setting from ‘camera’ to ‘spot’.

Now you can zoom into the photo by using the mouse wheel (if you have one) or by using the navigation functions (located in the top of the left sidebar). Then select the part of the picture that represents pure white and Darktable will automatically determine the white balance. When I am not 100% happy with the result, I simply try again on another spot.

You can also use one of the presets that are available by default such as: camera, daylight, shade, cloudy or flash. Or you can manually drag the temperature left-right to select the white balance that you prefer.

After having selected the right white balance, it is time to get the light and dark areas of your photo right. Go into the tone group. Now select levels.

What you want to do is look at the histogram and see where it drops off. Most histograms look like a mountain range with an upward slope on the left, a rugged middle part and a downward slope on the right. If you don’t see a slope on the left, that means that your dark areas are fine. If you don’t see a slope on the right, that means that your light areas are fine. What you want to do is drag the line on the left or on the right towards the middle, so that the slopes are shorter or even gone. Try to experiment with this!

The last group in this tutorial is the correction group. Select the lens correction.

Now you want to enable the lens correction with the ‘on’ button of the module. In most cases, the camera and lens are already selected correctly by Darktable. Otherwise adjust the camera model and lens to the ones that you have used.

The only thing remaining is to export the photo(s) to the JPG format. Click on lighttable to change the Darktable mode. Then, in the right sidebar, go to the ‘export selected’ part and click on the folder icon to select the folder where you want the pictures to be saved. Now select all the images that you want to export (in the middle section) and then click on the export button.

Conclusion

Using Darktable is not as difficult as it might seem. You just need to know your way around. There are a lot more functions available, but the ones in this tutorial are enough 95% of the time. In future blog posts, I will explore some of the more advanced features.

Published on: 15 October 2018


I’m going to LinuxDay in Dornbirn, Austria

This weekend I am going to LinuxDay, the German-speaking Free Software one-day event. It takes place at the higher technical school in Dornbirn, Austria.
Organized by the Linux User Group Vorarlberg, it features two parallel tracks of presentations/talks and stands of many Free Software projects, including openSUSE, Debian, Devuan, etc. It looks a bit like a LinuxTag or FOSDEM. Here you can find photos from previous years.

This time it celebrates 20 years, which I think is a good reason to visit the event now. And not just that: it is very beautiful there at this time of the year. I will certainly take my family along and continue to enjoy the traveling on Sunday 🙂


Libre Linux (GNU Kernel) on openSUSE

As we know, the openSUSE project doesn't provide official packages for the Linux-libre kernel. There is a simple reason for that: the default openSUSE kernel doesn't include any proprietary modules; it's free. All proprietary parts can be found in a separate package, kernel-firmware. But anyway, there are users who want to use exactly the GNU version. So, why not? This short tutorial describes how to build and install Linux-libre on openSUSE Leap 15.1 (the same instructions apply to openSUSE Tumbleweed).

Right now in the Leap 15.1 repository the kernel version is 4.12.14.

> uname -r
4.12.14-lp151.16-default

Let's check the latest available 4.x kernel on the FSF server. Right now the latest available kernel there is version 4.18. Its size is less than 100 MB. Download it:

> wget -c \
https://linux-libre.fsfla.org/pub/linux-libre/releases/LATEST-4.N/linux-libre-4.18-gnu.tar.xz

Before we continue, I recommend verifying the file integrity. The .sign file can be used to verify that the downloaded files were not corrupted or tampered with. The steps shown here are adapted from the Linux Kernel Archives; see the linked page for more details about the process.

> wget -c \
https://linux-libre.fsfla.org/pub/linux-libre/releases/LATEST-4.N/linux-libre-4.18-gnu.tar.xz.sign

Having downloaded the signature file, you can now verify the sources using gpg2. The first attempt will complain about a missing public key; after importing that key, the signature verifies correctly:

> gpg2 --verify linux-libre-4.18-gnu.tar.xz.sign
gpg: assuming signed data in 'linux-libre-4.18-gnu.tar.xz'
gpg: Signature made Mon 13 Aug 2018 01:25:14 AM CEST
gpg:                using DSA key 474402C8C582DAFBE389C427BCB7CF877E7D47A7
gpg: Can't check signature: No public key

> gpg2  --keyserver hkp://keys.gnupg.net --recv-keys \
474402C8C582DAFBE389C427BCB7CF877E7D47A7
key BCB7CF877E7D47A7:
12 signatures not checked due to missing keys
gpg: key BCB7CF877E7D47A7: \
public key "linux-libre (Alexandre Oliva) " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1

> gpg2 --verify linux-libre-4.18-gnu.tar.xz.sign 
gpg: assuming signed data in 'linux-libre-4.18-gnu.tar.xz'
gpg: Signature made Mon 13 Aug 2018 01:25:14 AM CEST
gpg:                using DSA key 474402C8C582DAFBE389C427BCB7CF877E7D47A7
gpg: Good signature from "linux-libre (Alexandre Oliva) " [unknown]
gpg:                 aka "[jpeg image of size 5511]" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 4744 02C8 C582 DAFB E389  C427 BCB7 CF87 7E7D 47A7

The primary key fingerprint looks good.

If everything went well, untar the downloaded kernel:

> tar xfv linux-libre-4.18-gnu.tar.xz
> cd linux-4.18

Well… now comes the personal part of the installation process, i.e. you know best what you should care about when creating the config file, what hardware you have that the kernel should support, what kind of optimizations you want, etc. That's the most important step of this entire tutorial. For example, a well-configured kernel can save a few seconds of boot time, while a badly configured kernel won't boot at all 🙂
To prepare the configuration, you will need a base kernel configuration: a plain text file called .config. There are many ways to create the .config file. It's the same as for the official Linux kernel.
Before we can configure our new kernel, we need to install all needed dependencies:

# zypper in gcc make ncurses-devel bison flex libelf-devel libopenssl-devel bc
# make menuconfig
# make -j4
# make modules_install
# make install

If you have never built a Linux kernel before and it scares you, you can just run make menuconfig and close it without changing anything; it will generate a default configuration. This configuration will include much more than you will really need, but it practically guarantees that the new kernel will boot.
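
Another common shortcut, assuming your distribution ships the config of the running kernel under /boot (openSUSE does), is to reuse that as a base and let the build pick defaults for options that are new in 4.18:

> cp /boot/config-$(uname -r) .config
> make olddefconfig

This keeps the hardware support you already rely on and accepts the defaults for everything else.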

After installing, we can still find the native openSUSE default kernel in the GRUB menu. I think this is the default behavior today in most GNU/Linux systems. Thus, if something goes wrong and, for example, your new self-configured kernel will not boot, don't worry.

> uname -r
4.18.0-gnu-lp151.16-default

I think that if this is your first experience with the kernel compilation process and you end up with a new kernel that boots and is smaller than the default openSUSE kernel, you can be proud of yourself.
Whatever you end up with, don't forget to have a lot of fun 🙂
More info about the Linux kernel for beginners can be found at https://kernelnewbies.org/. More info about GNU Linux-libre can be found at https://www.fsfla.org/ikiwiki/selibre/linux-libre/index.en.html. And, finally, if you are interested in the openSUSE Linux kernel development process, you are always welcome to visit the openSUSE wiki portal 😉


Some Thoughts On Blogging

Photo by Camille Orgel on Unsplash

In the spring of 2014 I was a college student in an IT program applying to co-op job postings and needed something to differentiate myself from the other applicants. I decided that I would start a blog where I would document random things I found interesting. Sounds really broad? That’s because I didn’t really have any particular focus in mind and wanted to experiment with different topics. I think in the long run this was a good decision as it let me see what I was comfortable with, what worked and what didn’t. This is the kind of thing you learn best from experience, but I will go into more detail about this later. For now, let’s get back to my life story.

blog --init 💥

The first post came in April of 2014, titled Programming Paradigms. And yes, I used WordPress back then. The post was short — 25 words — with a link to a YouTube series by Dr. Jerry Cain. If a picture is a thousand words I guess this video is 31.05 million (17.25 min * 60 s/min * 30 fps * 1000 words)? Does it work like that? 😅

My first blog post: https://ushamim.wordpress.com/2014/04/05/programming-paradigms/

Projects

After a few months I started working on a C++/Qt project called TLE. I used my blog to post screenshots and mockups (helpfully drawn by a close friend) whenever I got something interesting working. I can confidently say that no one read it, but it was still nice just being able to get my thoughts onto “paper”. It helped me think about what I was doing and how I would explain it to someone who had never worked on the project before.

An early mockup of TLE

Getting Traction

As this was all going on, I started participating in the openSUSE community. I would post on the forums, the mailing lists and even in the IRC channel. I began to notice there were common questions about new releases and the direction of the project. So what did I do? I wrote blog posts explaining it for people who didn't have the time to watch the conference videos.

Snippet from my post: https://ushamim.wordpress.com/2014/06/29/opensuse-the-way-forward/

At this point my posts became less frequent, partly due to the posts getting longer and partly due to being busy with school. To keep my blog alive I started blogging about things I was doing in school. This is how I started blogging about technical content (how-tos, beginner guides, etc.). The very first post like this was about configuring a DNS server on CentOS 6.6:

Snippet from https://ushamim.wordpress.com/2015/01/25/configuring-a-dns-server-on-centos-6-6/

When I started posting these I began to notice that the traffic to my blog was increasing significantly. This gave me the confidence to try and write some bigger guides.

Moving To Medium

I discovered the existence of Medium around July 2016. I was really intrigued by the website since it looked really nice compared to my blog and seemed to be very popular with writers. After a bit of debating I decided to switch to Medium. A major reason I switched was in hopes that I would get better exposure. I wrote a reflection about using Medium that you can read here.

The Comprehensive Guide To AppArmor

In July 2016 I also released a guide to AppArmor called The Comprehensive Guide To AppArmor. The desire to create the post came from wanting to understand how AppArmor worked but having difficulty finding documentation that explained everything. So I decided to work with the developers of AppArmor to write a guide which explains all the basics you need to know to use AppArmor. The guide can be found here. This is one of the most unique posts I have ever written as it went through several revisions and I had constant feedback from the actual developers of AppArmor — something I don't usually get from tools I write about.

Becoming A Dev

McMaster University Campus from Wikipedia

In 2016 I graduated from college and felt that I would still like to learn more about computers. So I decided to sign up for a transfer program that lets college students transfer into year 3 at McMaster University. At the end of my first year (April 2017) at McMaster we were allowed to apply to internship jobs, and, thinking it would be good to get some work experience, I applied. I ended up getting a position as a Software Developer Intern at Ontario Teachers’ Pension Plan (OTPP).

During my time at OTPP I worked with both JavaScript and Java. At the same time I was working on personal projects in my free time. I started by trying to re-implement a project I had created in school — a podcast feed. For our school project we had implemented it using Qt, but the code was a mess and we had no tests. I decided to build the app again, but this time using JavaFX — because it would help me get better at Java for work — and to test it, something I had learned the importance of at work. While developing this application I learned a lot about testing philosophies and how to test JavaFX. Noticing that there was almost no documentation for TestFX — a library used to test JavaFX applications — I wrote some (which you can find here).

This post turned out to be quite popular as there seems to be demand for testing JavaFX but a lack of documentation.

Saka

Screenshot of Saka

In the spring of 2018 I ended up becoming the maintainer of a browser extension called Saka. As I took over for the previous maintainer I was forced to learn a lot of the tools used in JavaScript projects: things like Webpack, Karma, Babel and so on. As I began to understand the project I realized that there were no tests to validate the behavior of the application. This worried me — I didn’t have confidence in my ability to modify code without breaking other parts of the app. So I decided to learn how browser extensions are tested and, realizing it was not well documented, I wrote a blog post titled Unit Testing Browser Extension that you can read here.

Saka represents a pretty important milestone in my journey as a dev. For the first time I was maintaining a project I had full control of that other people actually used! It was very exciting but also a bit nerve-wracking — I didn’t want to break it and lose users as a result. Looking back, it was a great idea to take over Saka since I have learned so much about JavaScript while maintaining it and it has inspired a lot of content on this blog.

Reflection

Stats from September 2016
Stats from September 2018

As my life has changed so has my blog. In a way it is a reflection of where I am with my life, what my focuses are and what goals I have for the future. It has changed quite a bit from what it started out as but I think it has only gotten better. It took me a long time to become confident enough to write my own content but it has been important in helping me learn new things and helping others do the same.

If you liked this post, be sure to follow this blog, follow me on Twitter and check out my blog on dev.to.

P.S. Looking to contribute to an open source project? Come contribute to Saka, we could use the help! You can find the project here: https://github.com/lusakasa/saka




horde trustr – A new horde CA app step by step

Trustr is my current project to create a simple certificate management app.
I decided that it is just about the right scope to demonstrate a few things about application development in Horde 5.

I have not done any research into whether the name is already taken by some other software. Should any problems arise, please contact me and we will find a good solution. I just wanted to start without losing much time on unrelated issues.

My goals as of now:

– Keep everything neat, testable, fairly decoupled and reusable. The core logic should be exportable to a separate library without much change. There won’t be any class of static shortcut methods pulling stuff out of nowhere. Config and registry are only accessed at select points, never in the deeper layers.
– Provide a CLI using Horde_Cli and Horde_Cli_Application (modeled after the backup tool in horde base git)
– Store to relational database using Horde_Db and Horde_Rdo for abstraction
– Use php openssl extension for certificate actions, but design with future options in mind
– Rely on magic openssl defaults as little as possible
– Use conf.xml / conf.php for any global defaults
– Show how to use the inter-app API (reusable for xml-rpc and json-rpc)
– Showcase an approach to REST in Horde (experimental)

The app is intended as a resource provider. The UI is NOT a top priority. However, I am currently toying around with a Flux-like design in some unrelated larger project and I may or may not try some ideas later on.

Initial Steps: Creating the working environment

I set up a new horde development container using the horde tumbleweed image from Open Build Service and a docker compose file from my colleague Florian Frank. Please mind both are WIP and improve-as-needed projects.


git clone https://github.com/FrankFlorian/hordeOnTumbelweed.git
cd hordeOnTumbelweed
docker-compose -f docker-compose.yml up


This yields a running horde instance on localhost and a database container.
I needed to perform a little manual setup in the web admin UI to get the DB running and to create all the default horde schemas.

Next I entered the developer container with a shell
docker exec -it hordeOnTumbelWeed_php_1 bash

There are other ways to work with a container but that’s what I did.


Creating a skeleton app

The container comes with a fairly complete horde git checkout in /srv/git/horde and a clone of the horde git tools in /srv/git/git-tools

A new skeleton app can be created using

horde-git-tools dev new --author "Ralf Lang <lastname@b1-systems.de>" --app-name trustr

The new app needs to be linked to the web directory using

horde-git-tools dev install

Also, a registry entry needs to be created by putting a little file into /srv/git/horde/base/config/registry.d

cat trustr-registry.d.php

<?php
// Copy this example snippet to horde/registry.d
$this->applications['trustr'] = array(
    'name' => _('Certificates'),
    'provides' => array('certificates')
);


This makes the new app show up in the admin menu. To actually use it and make it appear in the topbar, you also need to go to /admin/config and create the config file for this app. Even though the settings don't actually mean anything yet, the file must be present.

I hope to follow up soon with articles on the architecture and sub systems of the little app.
