Deactivating copilot for password managers like pass
If you’re using Copilot and happen to also use pass to manage your passwords, you will find that the default configuration, or rather the configuration where you want Copilot enabled everywhere, creates a real risk for your precious passwords… since Copilot will then be enabled by default on plain text files, which is exactly what pass opens in your editor.

So here’s the snippet I use:
-- initialize copilot
local copilot = {
  "zbirenbaum/copilot.lua",
  "ofseed/copilot-status.nvim",
  cmd = "Copilot",
  build = ":Copilot auth",
  event = "InsertEnter",
  opts = {
    filetypes = {
      sh = function()
        if string.match(vim.fs.basename(vim.api.nvim_buf_get_name(0)), "^%.env.*") then
          -- disable for .env files
          return false
        end
        return true
      end,
      text = function()
        if vim.env.GIT_CEILING_DIRECTORIES or vim.env.PASS_VERSION then
          -- disable for text buffers that seem to come from pass (detected via environment variables)
          return false
        end
        return true
      end,
    },
  },
}
I should eventually add this to my dotfiles too… once I have the time to do so.

Looking Back at 2023
Congratulations on finishing yet another ride around Sol with me. Here are some noteworthy events from it, from the perspective of a dust speck.
Music
While of course I’m stuck in the taste of my youth like everybody else, I do try to pick up some new music. This year I discovered Venjent through a silly skit (he’s fully aware that this is how people discover his music and really leans into it). In a surreal turn of events I went to see him in a club, looking like a misplaced pensioner at 1 am, when he finally spun his set. But I seriously enjoy how he can transform absolute silliness into a total banger I play on repeat.
I’ve published some music of my own as well. After Desync, assembled from material from Weekly Beats 22, came Solar Coffee and Take Frequent Breaks right at the end of the year. I doubt I’ll release anything this year, because Weekly Beats 24 will probably suck up all my energy for music production. Hopefully I’ll enjoy it as much as the first half of 22 :)
FPV
From about 3 videos a week in 2018 I’m down to about 3 a year. One might say I’ve thrown the towel in, but I still enjoy a decent tree surf from time to time.
I’ve completely dropped the ball when it comes to racing though. Didn’t even qualify in Klatovy this year, because absolutely everyone has been training like crazy. It’s been lovely to hang around with the weirdos in the three competitions I attended.
Attending Ubuntu Summit 2023 as an openSUSE User
Music of the week: New Year edition
This is my last blog for 2023, Budapest time. However, it might already be the first blog of the year from me, if you live in Japan or New Zealand :-) This time it’s a single song: “Happy new year” from ABBA (and from me :-) ).
TIDAL: https://listen.tidal.com/album/575781/track/575787
setting up my system for my new job
equipment
I am starting my new job at SUSE on Tuesday. As if to build the excitement over the holidays, they sent me a box of equipment to get started. This gave me a good opportunity to clear off my desk and set up for a clean start.

They sent me a ThinkPad, a docking station, a keyboard and mouse, and headphones. So, I set up all the equipment and made sure everything was comfortable to work with.

software set up
installing opensuse
It was recommended to me to install Leap, which I was happy with because I had been using it over my break on some other computers that I have. I had already installed and set it up twice, and I had a USB key for it lying around.
I turned off Secure Boot on my laptop. I had forgotten all about the Secure Boot mess, since my BeeLink computers didn’t require me to make any configuration changes to install openSUSE. Anyway, it was pretty easy to get the installer kicked off.

While the installer was running, I went about my day, and came back later delighted to see it on the Leap login page.

I logged in and was up and running.
I opened settings to see what wasn’t working, but everything so far was configured perfectly. Even the 4k display was set up properly, and I could tweak the settings if I wanted.

opensuse choice of install options
Three applications that I use every day are Chrome, Visual Studio Code, and Slack. I also wanted to install Zoom. I don’t think that SUSE the company uses Zoom, but other folks do, so I wanted to have it ready.
This is an area where I think SUSE really shines. SUSE is not trying to force me down a path of installing and using software only from their repositories and in their specific packaging format. My order of preference for running apps is to:
- Set up a repository and install from there.
- Download an archive and install it locally.
- Download a built binary and run it locally.
I am not interested in Flatpak or Snap packages. For people who do like those formats, they seem to work fine on openSUSE, but it turns out that I could follow my personal preferences while getting set up.
installing chrome
For better or worse, I am pretty embedded into the Google ecosystem. As such, the first thing I do with any computer is install Chrome and log into my Google account. Of course, I have to use Firefox once to find and install Chrome.
The way I did it was to download the rpm, and click on it.

This opened YaST2 for me, and it was a simple matter of clicking buttons to get Chrome installed.
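If you prefer to stay in a terminal, zypper can install a downloaded rpm directly. A minimal sketch, assuming the usual file name of the Chrome package (adjust to whatever you actually downloaded; the same approach works for the Slack and Zoom packages further down):

sudo zypper install ./google-chrome-stable_current_x86_64.rpm   # install a local rpm and pull in its dependencies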

Then I logged in and made sure I could get to my email and omg.lol account, and all was good.

installing vscode
Code is nice because they let me follow my #1 preference. They had clear instructions for SUSE on their downloads page. Just add the repo and use zypper to install the package.
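From memory, those instructions boil down to roughly the following; treat it as a sketch and check the VS Code downloads page for the current key and repository details:

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc   # trust Microsoft's signing key
sudo zypper addrepo https://packages.microsoft.com/yumrepos/vscode vscode
sudo zypper refresh
sudo zypper install code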

Worked like a dream:

install slack
For Slack, I downloaded the rpm from the Slack download page, double-clicked on it, and YaST2 installed it with no issues:

Also “just worked”:

install and run zoom
For some reason, I assumed that installing Zoom was going to be a mess, but … on the contrary, I visited their Linux download page (https://zoom.us/download?os=linux) and could easily select an archive for my system:

I double-clicked on the archive, went through YaST2 again, and it worked as smooth as silk:

webcam and headset
SUSE included a Plantronics headset with my equipment. It took me a lot of fiddling to get it working on the BeeLink, but it worked out of the box on the ThinkPad. My webcam also “just worked.”
I watched some videos to make sure all was well, and I had a great 4k experience with good sound.

Setup ownCloud Infinite Scale Real Quick
The ownCloud product Infinite Scale is going to be released in version five soon. The latest stable version is 4.0.5, and I am sure everybody has checked it out already and is blown away by its performance, elegance and ease of use.
No, not yet?
Ok, well, in that case, here comes the rescue: with the little script described here, it becomes really easy and quick to start Infinite Scale on your computer and check it out, without any Linux super admin powers whatsoever.
To use it, you just need to open a terminal on your machine and cd into a directory somewhere in your home where you can afford to host some bytes.
Without further preparation, you type the following command line (NOT as user root please):
curl -L https://owncloud.com/runocis.sh | /bin/bash
The script automatically pulls the latest stable version of Infinite Scale from ownCloud’s official download server onto your computer. It then creates a configuration and a start script, and starts the server. It detects the platform you’re running on to download the right binary version, and it also looks up the hostname and configures the installation for that name.
Once the server has started, Infinite Scale’s web client can be accessed by pointing a browser to the URL https://your-hostname:9200/. Since this is an installation for testing purposes, it does not have a proper certificate configured. That is why your browser will complain about the cert, and you will have to accept the warning. And indeed, that is one of the reasons why you’re not supposed to use this sneak peek in production or expose it to the internet.
For the nerds, the script does not really do magic: it just curls the single Go binary of Infinite Scale down into a sandbox directory on the machine, chmods it to be executable, and creates a working config and a data dir. All of this happens with the privileges of the logged-in user, no sudo or root involved. You’re encouraged to double check the install script, for example with the command curl -L https://owncloud.com/runocis.sh | less - of course you should never trust anybody running scripts from the internet on your machine.
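In practice that boils down to something like the following, assuming the script behaves the same when run from a saved copy; it is the same script as the one-liner above, just a more cautious invocation:

curl -L https://owncloud.com/runocis.sh -o runocis.sh   # save the script instead of piping it to bash
less runocis.sh                                         # read what it is going to do
bash runocis.sh                                         # then run it as your normal user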
If the server is stopped by pressing Ctrl-C, it can later be started again with the script runocis.sh that was kindly left behind in the sandbox as well.
The installer was tested on these three platforms: 64-bit AMD/Intel Linux machines, the 64-bit Raspberry Pi with Raspbian, and macOS. The flavour of Linux should not make a difference.
If you encounter a problem with the script or have suggestions for improvement, you can find it in my this’n that section on GitHub. I am happy to receive issue reports or pull requests.
For further information and setups suitable for production please refer to the Infinite Scale documentation.
More Shoots Development

I have not just been 3D printing and wrapping Christmas gifts this week, I've been writing software as well!
A couple of weeks ago I put up my first bit of code for shoots.
I think the best way that I can describe it is that I am developing a data warehouse for pandas developers.
When I first posted the code, you could:
- Put a dataframe on the server.
- Get a dataframe from the server.
- Query a dataframe on the server using SQL.
- List dataframes.
Since then I've added:
- Organize dataframes in buckets.
- Tell the server where to save the dataframes.
- More and better tests.
- Documentation.
- Ability to resample data on the server in 2 ways.
I think that the project is starting to get useful. There are a few key things that I think are needed before I can really recommend using it:
- Security - support TLS on the server, require TLS on the client.
- Security - generate an admin token required to do anything on the server.
- Concat dataframes on the server.
- Clean dataframes on the server (sort, remove dupes, etc...).
After getting those core things done, I am thinking:
- Ability to store the data in object storage instead of on disk.
- Ability to run Python on the server to avoid round trips to the client for things like alerting and such.
Shoots Dataframe Storage
I just posted some code for a project that I have been working on very part time in the last few weeks.
Shoots is a dataframe storage server. Currently it supports pandas, but most likely I will add support for polars in the future.
Shoots is entirely written in Python and is designed for Python users.
Shoots comes in 2 parts, a server and a client library.
Shoots is very early software, but is in a usable state. It is stored on github. Issues and contributions welcome.
shoots_server
The server tries to be a fairly faithful Apache Arrow Flight server, meaning that you should be able to use the Apache Arrow Flight client libraries directly. It is entirely built upon the upstream Apache Arrow project.
Under the hood, the server receives and serves pandas dataframes, storing them on disk in Apache Parquet format. However, shoots is designed so that, as a user, you don't need to know about the underlying storage formats and libraries.
shoots_client
The client pieces wrap the Apache FlightClient to offer an interface for pandas developers, abstracting away the Apache Arrow and Flight concepts.
usage
run the server
There are currently no runtime options, so running the server is a simple matter of running the Python module, depending on your system:
python shoots_server.py
or
python3 shoots_server.py
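Before that, the server’s Python dependencies need to be installed. The repository’s README is the authoritative list; based on the stack described here (pandas, Apache Arrow Flight, DataFusion, Parquet), my assumption is something along these lines:

pip install pandas pyarrow datafusion   # assumed package set; check the repo's requirements for the real list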
storing a dataframe
Use the client library to create an instance of the client, and "put" a dataframe. Assuming you are running locally:
from shoots_client import ShootsClient
from shoots_client import PutMode
import pandas as pd
shoots = ShootsClient("localhost", 8081)
df = pd.read_csv('sensor_data.csv')
shoots.put("sensor_data", dataframe=df, mode=PutMode.REPLACE)
retrieving data
You can simply get a dataframe back by using its name:
df0 = shoots.get("sensor_data")
print(df0)
You can also submit a SQL query to bring back a subset of the data:
sql = 'select "Sensor_1" from sensor_data where "Sensor_2" < .2'
df1 = shoots.get("sensor_data", sql=sql)
print(df1)
Shoots uses Apache DataFusion for executing SQL. The DataFusion dialect is well documented.
listing dataframes
You can retrieve a list of dataframes and their schemas, using the list() method.
results = shoots.list()
print("dataframes stored:")
for r in results:
    print(r["name"])
dataframes stored:
sensor_data
deleting dataframes
You can delete a dataframe using the delete() method:
shoots.delete("sensor_data")
buckets
You can organize your dataframes in buckets. This is essentially a directory where your dataframes are stored.
creating buckets
Buckets are implicitly created as needed if you use the "bucket" parameter in put():
shoots.put("sensor_data", dataframe=df, mode=PutMode.REPLACE, bucket="my-bucket")
df1 = shoots.get("sensor_data", bucket="my-bucket")
print(df1)
listing buckets
You can use the buckets() method to list available buckets:
print("buckets:")
print(shoots.buckets())
buckets:
['my-bucket', 'foo']
deleting buckets
You can delete buckets with the delete_bucket() method. You can force a deletion of all contents using BucketDeleteMode.DELETE_CONTENTS:
print("buckets before deletion:")
print(shoots.buckets())
shoots.delete_bucket("my-bucket", mode=BucketDeleteMode.DELETE_CONTENTS)
print("buckets after deletion:")
print(shoots.buckets())
buckets before deletion:
['my-bucket', 'foo']
buckets after deletion:
['foo']
Roadmap
I intend to work on the following in the coming weeks, in no particular order:
- [ ] add a runtime option for the root bucket directory, use it for testing
- [ ] document code and generate docs
- [ ] pip packaging
- [ ] pattern matching for list()
- [ ] downsampling via sql on the server
- [ ] combining dataframes on the server
- [ ] compressing and cleaning dataframes on the server
- [ ] authentication
Librsvg will use Rust-only image decoders starting on 2.58.0
Starting with version 2.58.0, librsvg will no longer use
gdk-pixbuf to decode raster images that are referenced
from SVG documents. For example, an <image> element like this:
<image href="foo.jpg" width="100" height="100"/>
I have just pushed a merge request to make librsvg use the image-rs
crate to decode raster images. This is part of two related changes:
- Don't load SVG sub-documents with gdk-pixbuf. For historical reasons, librsvg's original C code did not even bother to detect if <image href="foo.svg"/> referenced another SVG document; it would just ask gdk-pixbuf to render it as for any other image. This works more or less fine in SVG1.1, but for SVG2 we actually have to pay attention to the attributes in the <image> element and the child SVG document, and it's just easier to recurse into librsvg directly.
- Don't load raster images with gdk-pixbuf. Now that Loupe is using the same Rust crates to decode raster images, I think we can begin to move the rest of the platform to not using memory-unsafe codecs.
Librsvg still compiles and installs the gdk-pixbuf loader that lets
other applications render SVG documents as if they were raster images;
nothing is changed there. The C APIs that create GdkPixbuf objects
also remain unchanged.
Testers wanted
As you can imagine, this is the sort of change that gives me a bit of anxiety. Say whatever you want about memory-unsafe code in libpng and libjpeg-turbo, but they are tested and fuzzed all the way to hell, all the time. The Rust crates for decoding images have not been as heavily developed, and there is still plenty of interesting work to do there in terms of performance and support for the more exotic variants of those file formats. I think this is a good opportunity to find exactly what they might be lacking.
Please test the main branch of librsvg, especially if you render
documents which have <image> elements! The changes above are easy
to roll back or to make optional if too much trouble appears.

