Gdk-pixbuf modules - call for help
I've been doing a little refactoring of gdk-pixbuf's crufty code, to see if the gripes from my braindump can be solved. For things where it is not obvious how to proceed, I've started taking more detailed notes in a gdk-pixbuf survey.
Today I was looking at which gdk-pixbuf modules are implemented by third parties, that is, which external projects provide their own image codecs pluggable into gdk-pixbuf.
And there are not that many!
The only four that I found are libheif, libopenraw, libwmf, librsvg (this last one, of course).
Update 2019/Sep/12 - Added apng, exif-raw, psd, pvr, vtf, webp, xcf.
All of those use the gdk-pixbuf module API in a remarkably similar fashion. Did they cut&paste each other's code? Did they do the simplest thing that didn't crash in gdk-pixbuf's checks for buggy loaders, which happens to be exactly what they do? Who knows! Either way, this makes future API changes in the modules a lot easier, since they all do the same thing right now.
I'm trying to decide between these:
- Keep modules as they are; find a way to sandbox them from gdk-pixbuf itself. This is hard because the API is "chatty"; modules and calling code go back and forth peeking at each other's structures.
- Decide that third-party modules are only useful for thumbnailers; modify them to be thumbnailers instead of generic gdk-pixbuf modules. This would mean that those formats would stop working automatically in gdk-pixbuf based viewers like EOG.
- Have "blessed" codecs inside gdk-pixbuf which are not modules, so they no longer have API/ABI stability constraints. Keep third-party modules separate. Sandbox the internal ones with a non-chatty API.
- If all third-party modules indeed work as I found, the module API can be simplified quite a lot, since no third-party modules implement animations or saving. If so, simplify the module API and the gdk-pixbuf internals rather drastically.
Do you know any other image formats which provide gdk-pixbuf modules? Mail me, please!
Icecream 1.3 and Icemon 3.3 released
The changelogs are here and here. In a less changelog-y way, the changes are:
- Compiler locations are no longer hardcoded anywhere. Previously, the compiler automatically packaged and sent to remote nodes was always /usr/bin/gcc (g++, clang, clang++). That might not match the actual compiler used, and the workaround was to manually package the proper one using icecc-create-env. Now it's possible to build even with e.g. CXX=/my/own/build/of/clang and it'll simply work. This also means that explicitly setting $ICECC_VERSION should now be needed only for cross-compiling.
- Slightly better job scheduling, both for remote and local builds. For example, the local machine should no longer get overloaded by running way too many local preprocessor steps.
- Better compression, both for sending data and for packaged compilers. Compilation data is compressed using zstd if the other node supports it; compiler environments can be compressed using zstd or xz. This improves performance by reducing both network and CPU usage. Note that while compilation compression falls back to the older method if not supported by the other side, for compiler environments this is trickier, so it has to be set up manually. You can set e.g. ICECC_ENV_COMPRESSION=xz, but the daemon will not fall back to using any other mechanism. That means it will use only nodes that are at least version 1.3, the scheduler should also be from 1.3 (run another one if needed, the newest one wins), and the remote node needs to support the compression (1.3 newly uses libarchive, which supports zstd only in its relatively recent releases). So this is mainly useful if you have full control over the Icecream cluster; by default the compression stays the old gzip, for backwards compatibility.
- Speaking of which, the maximum cache size for compiler environments now defaults to 256MiB. Use the --cache-size option of iceccd for different sizes.
- Objective C/C++ support has been fixed.
- Some special workarounds for GCC's -fdirectives-only option that is used when sending sources to remote nodes, as it breaks in some corner cases.
- The --interface option of the daemons (and the scheduler) now allows binding only to a specific network interface, if needed. Note that Icecream still assumes it runs in a trusted network; if that's not the case, it's up to you to ensure it by using tools such as a firewall.
- Icemon now displays in the detailed host view which protocol a node supports (1.3 has protocol version 42; env_xz/env_zstd mean it supports compiler environments compressed using xz/zstd).
- And various other fixes.
Upgrading to openSUSE Leap 15.1, downgrading to NVIDIA GeForce GTX 1050 Ti
In my latest article on upgrading to openSUSE Leap 15.1 I experienced issues with my graphics card, an AMD Radeon RX 580 8GB. The drivers were not working, which resulted in a black screen during the upgrade process. I could work around this by using nomodeset, but that essentially meant that I wasn't using my midrange graphics card at all. I bought this HP Pavilion Power 580-146nd specifically to be able to run my Linux / Steam games at high settings and 1080p on my openSUSE system, so not having my graphics card working wasn't an option. I could have switched to another Linux operating system, and for a short period I considered switching to Pop!_OS, but I am still very fond of the openSUSE operating system and the KDE Plasma desktop.
I needed to consider a different solution. In my experience, NVIDIA cards have always performed very well with openSUSE when using the proprietary drivers. Another benefit of going with NVIDIA was the lower power consumption. And a third benefit was that I could purchase a graphics card with a DVI output. I considered buying the GeForce GTX 1650 or the GeForce GTX 1050 Ti. Both cards have a recommended 300 watt power supply unit, which is exactly the PSU residing in my HP Pavilion Power. The 1650 is the newer card, which could mean that the drivers are not yet fully developed for Linux. So I decided to go for the somewhat older, but not much slower, 1050 Ti.
| Card | CUDA cores | Boost Clock | Passmark score |
| NVIDIA GeForce GTX 1650, 4GB | 896 | 1680 MHz | 7946 |
| NVIDIA GeForce GTX 1050 Ti, 4GB | 768 | 1392 MHz | 6041 |
The second question was which specific card to buy. First I looked at the ASUS ROG Strix GeForce GTX 1050 Ti 4GB, but that card was too big for my case. Therefore I bought the ASUS Phoenix GeForce GTX 1050 Ti 4GB.
This card was pretty easy to install. The card didn’t need additional power, in contrast to the Radeon RX 580 which used a 6-pin connector.
After installing the card, it was time for action! I plugged in the openSUSE Leap 15.1 USB thumbstick and waited for an installation screen. And luckily, my plan worked! Leap 15.1 recognized my new GPU and I proceeded with the upgrade of my operating system. I won't cover all the details here, as I have detailed the upgrade process in my previous post.
After installation, there was still the issue of installing the proprietary NVIDIA drivers. For this, I first needed to add the NVIDIA repository. This is very easy. Just type 'Repositories' in the launcher menu and click on 'Software Repositories'. Then click on 'Add' and put in the details of the NVIDIA repository. Finally, you need to accept the GnuPG key of this repository to mark it as a trusted source.
https://download.nvidia.com/opensuse/leap/15.1/
After adding the repository, you can click ‘Finish’ and exit the Software Repositories tool. Now type ‘Software Management’ in the Launcher menu. This will launch the Yast Software Manager. The proprietary drivers will already be selected. Click on ‘Accept’ and then also ‘Accept’ all prompts to agree with the proprietary licenses. The installation will commence shortly afterwards.
After the installation I did encounter some strange behavior. My desktop looked like it was scaled x4. This was caused by a problem with my HDMI cable. As soon as I swapped this cable for a DVI cable, my settings were restored to normal. As shown below in KInfocenter, the proprietary drivers were installed as intended.

The last thing I wanted to try out was how my games performed. I launched a couple of games:
- Supertuxkart – 60 FPS
- OpenArena – 90 FPS
- Xonotic – 110 FPS
- Tomb Raider – 40 FPS
- Yooka Laylee – 60 FPS
- Bioshock Infinite – 75 FPS
Conclusion
I was able to upgrade to openSUSE Leap 15.1 and I am again able to play my games at high settings on 1080p with good framerates. I am very pleased with my decision to go with the NVIDIA GeForce GTX 1050 Ti. Having a DVI output was a lifesaver for me; otherwise I would still have had to struggle through all kinds of weird HDMI issues. I did sacrifice some GPU power, but the NVIDIA GPU is a better match for the system, as the HP Pavilion Power only has a 300 watt PSU. The NVIDIA GPU will most likely run cooler and will make the system quieter overall.
This experience did remind me that going with Linux / openSUSE can still cause some difficulties, because of hardware support. In my opinion, going with NVIDIA is the safer option at this moment when running openSUSE Leap.
Published on: 10 September 2019
openSUSE Conference 2019

The openSUSE Conference is the annual openSUSE community event that brings people from around the world together to meet and collaborate.
I attended the openSUSE Conference in Nuremberg, Germany, which was held at Z-Bau (House for Contemporary Culture) between 24 to 26 May.
Z-Bau is an iconic building with a long history. It was built by the Nazis to be used as barracks. After the second world war the building went through renovations and was converted into an arts centre.

There is a nice spot right outside the building called the "beer garden". That's where the geeks would have their beers or coffee during a session break, or simply sit and have a conversation.
Right at the entrance you'd find several tables with goodies from different companies.

Shelly had asked me to buy her a Rubik's cube while on the conference trip but guess what, the folks from Amazon were giving away Rubik's cubes among other awesome goodies.
The conference started with a keynote speech/presentation by the President of Engineering, Product & Innovation at SUSE, Dr. Thomas Di Giacomo.
Dr. Di Giacomo welcomed the geeks to oSC 2019. He gave some SUSE updates, such as the company being independent since March 15th, having reached 1750 employees, and making a revenue of $400M.
He mentioned a few projects where SUSE is contributing upstream like openATTIC, Ceph, Stratos and Airship.

While talking about SUSE code contributions to openSUSE, he explained the idea behind "factory first", i.e. SUSE Linux Enterprise (SLE) changes are pushed to the Factory repo first and then land in openSUSE Tumbleweed and Leap. He mentioned the release of openSUSE Leap 15.1 and encouraged users to upgrade.
Dr. Di Giacomo also talked about openSUSE Kubic, MicroOS and the developers who are working on cool stuff for the community.
In the afternoon I had marked a few sessions on openSUSE MicroOS to attend. The first presentation was by the openSUSE Chairman, Richard Brown. He started the presentation with a little bit of history of how computers were originally programmed to do one thing at a time and then with the advent of networking, the Internet and the Personal Computer, technologies evolved rapidly.
He then related how we started thinking less about servers and datacenters and the focus became more cloud-centric. He talked about a new world that is made up of the Cloud, Virtualization, the Internet of Things and Containers.

Richard talked about Podman, CRI-O, Kubernetes and the efforts made for easy container deployment using openSUSE Kubic. His mention of Kured (KUbernetes REboot Daemon) particularly caught my attention. A friend had previously asked me whether the "automatic reboots" could become a problem in Kubic. The answer was that if you do not trust unattended reboots to be safe, then you should probably disable transactional updates; but then that defeats the whole purpose of the "self-serving" system that openSUSE Kubic aims to be.
My presentation themed "openSUSE MicroOS in Production" was scheduled for 15h45 that day.

It was the first time that I did a presentation outside of Mauritius. I was nervous. The openSUSE Conference brings geeks from around the world and you can expect the top nerds of the community to be there.
I had a second talk during oSC 2019. It was a "Lightning Beer and Wein (Not Wine) Talks" session. I spoke about Netiquette within the openSUSE community. I had to hold a beer (also drink it), hold a microphone and change slides while talking about netiquette. That was fun and this time I was not nervous at all. Although I cannot say whether the confidence came from the German beer or I owe it to the many friends I had made during those three days. Either way, the conference helped me find myself within the openSUSE community, which is a great feeling for a lad from a small island in the middle of an ocean.
Principles

We recently did a post about the Nextcloud Mission and Principles we discussed at the previous Contributor Week. I guess it is mostly the easy-to-agree on stuff, so let me ruin the conversation a bit with the harder stuff. Warning: black and white don't exist beyond this point.
Open Source
In an internal conversation about some community pushback on something we did, I linked to islinuxaboutchoice.com - people often think that 'just' because a product is open source, it can't advertise to them, it has to be chock full of options, it has to be made by volunteers, it can't cost money and so on... But if you want to build a successful product and change the world, you have to be different. You have to keep an eye on usability. You have to promote what you do - nobody sees the great work that isn't talked about. You have to try and build a business so you can pay people for their work and speed up development. Or at least make sure that people can build businesses around your project to push it forward.
I personally think this is a major difference between KDE and GNOME, with the former being far less friendly to 'business', and thus most entrepreneurial folks and the resources they bring go into GNOME. And I've had beers with people discussing SUSE's business and its relationship with openSUSE - just like Fedora folks must think about how they work with Red Hat, all the time. I think the openSUSE foundation is a good idea (I pushed for it when I was community manager), but going forward I think the board should keep a keen eye on how they can enable and support commercial efforts around openSUSE. In my humble opinion the KDE board has been far too little focused on that (I ran for the board on this platform), and you also see LibreOffice's Document Foundation having trouble in this area. To help the projects be successful, the boards of these organizations need to have people on them who understand business and its needs, just like they need to have community members who understand the needs of open source contributors.
But companies bring lots of complications to open source. When they compete (as in the LibreOffice ecosystem), when they advertise, when they push for changes in release cycles... Remember Mark Shuttleworth arguing KDE should adopt a 6-month release cycle? In hindsight, I think we should have!
Principles
So, going back to the list of Nextcloud's Mission and Principles, I say they are the easy stuff, because they are. They show we want to do the right thing; they show what our core motivation was behind starting this company: building a project that helps people regain control over their privacy. But, day to day, I see myself focusing almost exclusively on the needs of business. And you know what, businesses don't need privacy... That isn't why we do this. Oh, I'm very proud we put in significant effort for home users when we can - our Simple Signup program has cost us a lot of effort and won't ever make us a dime. The Nextcloud Box was, similarly, purely associated with our goals, not a commercial project. Though you can argue both had marketing benefits - in the end, a bigger Nextcloud ecosystem helps us find customers.
I guess that's what keeps me motivated - customers help us improve Nextcloud, more Nextcloud users help us find more customers and so both benefit.
Pragmatism and the real hard questions
Personally, I'd add an item about 'pragmatism' to the list, though you can say it is implied by our rather large ambitions. We want to make a difference, a real difference. That means you have to keep focused on the goal, put in the work and be pragmatic. An example is the conversation about github. Would we prefer a more decentralized solution? Absolutely. Are we going to compromise our goals by moving away from the largest open source collaboration network to a platform which would result in fewer contributions? No... As long as github isn't making our work actively harder, does not act unethically and its network provides the biggest benefits to our community by helping us reach our goals, we will stay...
More questions and the rabbit hole
Would you buy a list of email addresses to send them information about Nextcloud? No, because it harms those users' privacy and probably isn't even legal. Would you work with a large network to reach its members, even if you don't like that network and its practices? Yes - that is why we're on Facebook and Twitter, even though we're not fans of either. Let's make it even harder. How about the choice of whom you sell to? Should we not sell to Company X even if that deal would allow us to hire 10 great developers to make Nextcloud better for the whole world and further our goals? Would you work with a company that builds rockets and bombs to earn money for Nextcloud development? We've decided 'nope' a few times already; we don't want that money. But what about their suppliers? And suppliers of suppliers? A company that makes screws might occasionally sell to Boeing, which also makes money from army fighters... Hard choices, right?
And do you work with countries that are less than entirely awesome? Some would argue that would include Russia and China, others would say the USA should be on a black list, too... What about Brazil under its current president? The UK? You can't stop anyone from using an open source product anyway, of course... It gets political quick, we've decided to stick to EU export regulations but it's a tough set of questions. Mother Teresa took money from dictators. Should she have? No?
It might seem easy to say, in a very principled way, no to all the above questions, but then your project won't be successful. And your project wants to make the world better, does it not?
Conclusion?
We discuss these things internally and try to be both principled and pragmatic. That is difficult, and I would absolutely appreciate thoughts, feedback, maybe links to how other organizations make these choices. Please post them here, or in the comments section of the original blog. I can totally imagine you'd rather not comment here, as this blog is hosted by blogger.com - yes, a Google company. For pragmatic reasons... I haven't had time to set up something else! There are lots of grey areas in this, it isn't always easy, and sometimes you do something that makes a few people upset. As the Dutch say - **Waar gehakt wordt vallen spaanders** (where wood is chopped, splinters fall).
PS: if you, despite all the hard questions, still want to work at a company that tries to make the world better, we're hiring! Personally, I need somebody in marketing to help me organize events like the Nextcloud Conference, design flyers and slide decks for sales and so on... Want to work with me? Shoot me an email!
Migrating from Jekyll to org-mode and Github Actions
Introduction
Until now, my website has been generated by Github Pages using a static site generator called Jekyll and the Lagom theme.
Github Pages
Github Pages allows you to host static sites in Github repositories for free. It is very simple: you just put the website in a branch (e.g. gh-pages) and Github will serve it as username.github.io/repo, or via a custom domain by adding a simple CNAME text file to the repository.
Static Site Generators
If you did not want to write HTML directly, you could use a static site generator: keep a set of templates and content in the git repository and put the output of the generator (the HTML files) into the gh-pages branch to be served by Github.
In addition to letting you write the content in Markdown, site generators made it much easier to maintain content like a blog, with features like different templates for posts, code syntax highlighting, drafts and support for themes, which allowed changing the look and feel of the site by just changing one configuration option and re-generating it.
If you wanted this process to be automatic, you could run the generation as part of some CI job (eg. with TravisCI), so that the site is re-generated when its sources are updated.
Jekyll support in Github Pages
Jekyll is one of these site generators and it was the most popular for a while. The nice part was that if you used Jekyll, Github Pages would generate your website automatically, without you having to set up CI. Just push your changes and a minute later your site was published.
On the other hand, you were limited by using the Jekyll version that was installed at Github, and you could not just install any add-on that you wanted. You did not control the environment to the level you did in a typical CI.
Moving away from Jekyll
Jekyll just worked. I can't complain about it. However, I felt too tied to the Github Pages environment. You had to use a Jekyll version that was close to a year behind and live with the plugins that the environment supported, and nothing more.
If I was to set up my own CI workflow to overcome this limitation, why keep using Jekyll? Hugo started to feel faster, easier to deploy locally and mostly compatible.
I have been using Emacs for more than 10 years. 3 years ago, I switched mail clients from Thunderbird to mu4e on top of Emacs. Then I discovered org-mode as a plain-text personal organization system and gradually started to spend more time inside emacs. Microsoft did such a good job with Visual Studio Code that for a moment I thought I would not resist. However, Microsoft created an ecosystem by making the interaction with programming languages a standard, via the Language Server Protocol, and emacs-lsp made my programming experience with emacs just better.
I knew that org-mode was quite good at exporting. After I saw a couple of websites generated from org, I started to toy with the idea of using org too. It could also be a good chance to learn Emacs Lisp for real. So I started learning about org and websites.
Inspiration
As I did not know where to start, I started by reading a lot of solutions by other people, documentation, posts, etc.
Most of the structure of the final solution, its ideas, conventions, configuration and some snippets were taken from the following projects:
- Toon Claes’s blog
- Example org-mode website using GitLab Pages by Rasmus
- https://github.com/bastibe/org-static-blog
- The theme is a port of the original Lagom theme I was using with Jekyll
Implementation
Principles
Once I had a clear picture of the domain, I made up my mind about how I wanted it and what my own requirements were:
- Use as many standard packages as possible, e.g. org-publish. Avoid using "frameworks" on top of emacs/org
- Links to old posts should still work (I had configured Jekyll to use /year/month/day/post-name.html)
- Initially, I thought about the ability to migrate content gradually, e.g. supporting Markdown posts for a while
- Self contained. Everything should be in a single git repo, not interfering with my emacs.d
- Ability to run it from the command line, so that CI could be used to automatically generate the site from git
Emacs concepts to be used
There are a bunch of emacs concepts that help put all the pieces together:
Emacs batch mode
While I could have most of the configuration in my ~/.emacs.d/init.el, I wanted a self-contained solution, not depending on my personal emacs configuration being available.
There are a bunch of emacs options that help achieve this:
$ emacs --help
...
--batch                     do not do interactive display; implies -q
...
--no-init-file, -q          load neither ~/.emacs nor default.el
...
--load, -l FILE             load Emacs Lisp FILE using the load function
...
--funcall, -f FUNC          call Emacs Lisp function FUNC with no arguments
...
With these options, we can put all our configuration and helper functions in a lisp file, call emacs as a script engine, skip our personal configuration, have emacs load the file with the configuration, and call a function to run everything.
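As a rough illustration of this setup, the entry point invoked via --funcall can be as small as the sketch below. The function body is an assumption for illustration; only the name duncan-publish-all appears later in this post.

;; Minimal sketch of a batch entry point, e.g. invoked as:
;;   emacs --batch -l publish.el -f duncan-publish-all
(defun duncan-publish-all ()
  "Publish every project component, forcing a full rebuild."
  (org-publish-all t))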
org-mode Export (ox)
org-mode includes an Export subsystem with several target formats (ASCII, beamer, HTML, etc). Every backend/converter is a set of functions that take already parsed org-mode structures (e.g. a list, a timestamp, a paragraph) and convert them to the target format. Worg, a section of the Org-mode web site that is written by a volunteer community of Org-mode fans, provides documentation on how to define an export backend (org-export-define-backend). From here it is important to understand the filter system and org-export-define-derived-backend, which allows defining a backend by overriding an existing one. This is what I will end up using to tweak, for example, how timestamps are exported.
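To make the derived-backend idea concrete, a backend that only changes how timestamps are rendered could look roughly like this (a hedged sketch; my/org-html-timestamp is a hypothetical name, not code from this site):

;; A backend derived from ox-html that overrides a single translator.
(require 'ox-html)

(defun my/org-html-timestamp (timestamp _contents _info)
  "Render TIMESTAMP as plain text instead of the default HTML markup."
  (org-timestamp-translate timestamp))

(org-export-define-derived-backend 'my-html 'html
  :translate-alist '((timestamp . my/org-html-timestamp)))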
org-publish
org-mode includes a publishing management system that helps export an interlinked set of org files. There is also a nice tutorial available.
Using org-publish boils down to defining a list of components (blog posts, assets, RSS) and their options (base directory, target directory, includes/excludes, publishing function). The publishing function is one of the most interesting options, as org comes with a few predefined ones, e.g. org-html-publish-to-html to publish HTML files and org-publish-attachment to publish static assets. The most important thing to learn here is that you can wrap those in your own functions to do additional stuff and customize publishing very easily. I use this, for example, to skip draft posts or to write redirect files for certain posts in addition to the post itself.
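To illustrate the wrapping idea, a publishing function that skips drafts could look something like the sketch below. It assumes drafts are marked with a #+DRAFT: t keyword; both the keyword and the function name are illustrative, not the exact code used for this site.

;; Wrap the stock HTML publishing function and skip files marked as drafts.
(defun my/org-html-publish-unless-draft (plist filename pub-dir)
  "Publish FILENAME into PUB-DIR unless it contains a #+DRAFT: t keyword."
  (unless (with-temp-buffer
            (insert-file-contents filename)
            (re-search-forward "^#\\+DRAFT:[ \t]*t" nil t))
    (org-html-publish-to-html plist filename pub-dir)))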
The solution
Directory Structure
├── CNAME
├── css
│ ├── index.css
│ └── site.css
├── index.org
├── Makefile
├── posts
│ ├── 2019-10-31-some-post
│ │ └── index.org
│ ├── 2014-06-11-other-post
│ │ ├── images
│ │ │ ├── someimage.png
│ │ │ └── another-image.png
│ │ └── index.org
│ ├── archive.org
│ └── posts.org
├── public
├── publish.el
├── README.org
├── snippets
│ ├── analytics.js
│ ├── postamble.html
│ └── preamble.html
└── tutorials
└── how-to-something
└── index.org
Inside the directory tree, you can find:
- a publish.el file with the org-publish project description and all the support code and helper functions
- a CNAME file for telling Github Pages my domain name
- a folder with a CSS file for the whole site, and another one that is included only on the index page
- a Makefile that just calls emacs with the parameters we described above and calls the function duncan-publish-all
- a subdirectory for each post, and another one for tutorials
- a public directory where the output files are generated and the static assets copied
- a snippets directory with the preamble, postamble and Google analytics snippets
publish.el
The main file containing code and configuration includes a few custom publishing functions that are used as hooks for publishing and creating sitemaps.
org-publish project
The org-publish project (org-publish-project-alist) is defined in the variable duncan--publish-project-alist, and defines the following components:
- blog
This component reads all org files in the ./posts/ directory and exports them to HTML using duncan/org-html-publish-post-to-html as the publishing function. This function injects the date as the page subtitle in the property list before delegating to the original function. This is a common pattern that you can use to override the publishing function. Note that subtitle is a recognized configuration property of the HTML export backend.
(defun duncan/org-html-publish-post-to-html (plist filename pub-dir)
  "Wraps org-html-publish-to-html.  Append post date as subtitle to PLIST.
FILENAME and PUB-DIR are passed."
  (let ((project (cons 'blog plist)))
    (plist-put plist :subtitle
               (format-time-string "%b %d, %Y"
                                   (org-publish-find-date filename project)))
    (duncan/org-html-publish-to-html plist filename pub-dir)))
The function also checks the #+REDIRECT_TO property, and generates redirect pages accordingly, by spawning another export to a different path in the same pub_dir.
This component is configured with a sitemap function which, even though it goes through all posts, is programmed to take only a few of them and write an org file with links to them. This file (posts.org) is then included in index.org and used as the list of recent posts. (See the sketch after this component list.)
- archive-rss
This component also operates on ./posts/, but instead of generating HTML, it uses the RSS backend to generate the full site archive as RSS. The sitemap function is also configured, like in the blog component, but this function generates an org file with all posts, not just the latest ones. The sitemap-format-entry function is shared between the sitemap functions, as the list of posts looks the same.
- site
The rest of the content of the site, including the index.org page and the generated org files for the archive (archive.org) and latest posts (posts.org).
- assets
All files that are just copied over.
- tutorials
It works just like posts, but I set up each one (I don't expect to have many) to use the ReadTheOrg theme. It uses the default HTML publishing function.
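As mentioned in the blog component above, its sitemap function only keeps the newest posts for posts.org. A minimal sketch of that idea could look like this (the function name and the cut-off of five posts are assumptions; org-publish calls the sitemap function with the sitemap title and a list representation of the entries):

;; Keep only the first few sitemap entries and emit them as an org list.
(require 'seq)

(defun my/latest-posts-sitemap (title list)
  "Return an org fragment with TITLE and the newest entries from LIST."
  (concat "#+TITLE: " title "\n\n"
          (org-list-to-org (cons (car list) (seq-take (cdr list) 5)))))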
Export workflow
Look & Feel
I managed to port most of the Lagom look and feel by starting a CSS from scratch (I learned CSS Grid in the process) and manually fixing each difference. I was quite satisfied with the final result. I had to use some extra page-specific CSS to hide the title on the front page and to display FontAwesome icons.
Before
After
Publishing
While locally you can test by running emacs via the Makefile, I wanted a new way to run the generation on every git push:
- I started with Travis, but I could not find container jobs anymore. When I realized that Ubuntu had an older Emacs version, I just lost interest.
- Gitlab was easy to get working. Not only because it is very simple and elegant, but also because some of the examples I took inspiration from were already using it, along with the Emacs Alpine container image. However, I did not want to have everything in Github except for my site
- Then I realized Github Actions was in beta, but I was not yet in. Until:
You’re in 🥰
— Nat Friedman (@natfriedman) August 26, 2019
Setting up a workflow to build the site
While, on Linux, Github actions run on Ubuntu, they allow you to execute an action inside a container.
name: Build and publish to pages
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
        with:
          fetch-depth: 1
      - name: build
        env:
          ENV: production
        uses: docker://iquiw/alpine-emacs
        if: github.event.deleted == false
        with:
          args: ./build.sh
      - name: deploy
        uses: peaceiris/actions-gh-pages@v3
        if: success()
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
Combining actions/checkout, our own build script running in the docker://iquiw/alpine-emacs container inside the VM, and peaceiris/actions-gh-pages, we get the desired result.
As a caveat, you need to set up a personal access token for the action, as the default one will not work and what gets pushed to the gh-pages branch will not show up on your website.
The experience with Github Actions has been very positive. I will definitely replace most of my TravisCI usage in my repositories. Kudos to the Github team.
Conclusions
Not only do I have a website powered by the tool I use daily, but it is also packed with awesome features. For example, the diagrams in this post are inlined as PlantUML code in the org file and exported via Org Babel.
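For reference, enabling PlantUML evaluation in Org Babel only takes a couple of lines (a sketch; the jar path is an assumption and varies per distribution):

;; Let Org Babel evaluate plantuml source blocks so diagrams can be exported.
(setq org-plantuml-jar-path "/usr/share/java/plantuml.jar") ; assumed path
(org-babel-do-load-languages
 'org-babel-load-languages
 '((plantuml . t)))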
It also gave me a project to learn Emacs Lisp with. I do plan to add some minor features, like blog tags or categories and perhaps commenting. What I learn will also benefit the personalization of my editor and mail client.
The source of this site is available on Github.
Regolith Linux | Review from an openSUSE User