SLES 11 - set DNS manually
Hell yeah, I know SLES 11 is obsolete / not supported anymore. The latest version is SLES 11 SP4, but I'm still using it on my test server. Hmm... sometimes I forget how to configure DNS on SUSE Linux Enterprise Server (SLES) 11, so I believe putting public notes on my weblog will be a great reference for everyone!
Back to the topic: the easiest way to set up DNS is to just overwrite the generated file /etc/resolv.conf directly.
example:
$ vim /etc/resolv.conf
# Generated by dhcpcd for interface eth1
search localdomain
nameserver 192.168.100.2
nameserver 1.1.1.1
nameserver 8.8.8.8
After that, execute /etc/init.d/network restart.
Tumbleweed Gets New KDE Frameworks, systemd
KDE Frameworks 5.74.0 and systemd 246.4 became available in openSUSE Tumbleweed after two respective snapshots were released this week.
Hypervisor Xen, libstorage-ng, which is a C++ library used by YaST, and text editor vim were also among the packages updated this week in Tumbleweed.
The most recent snapshot released is 20200919. KDE Frameworks 5.74.0 was released earlier this month and its packages made it into this snapshot. KConfig introduced a method to query the KConfigSkeletonItem default value. KContacts now checks the length of the full name of an email address before trimming it with an address parser. KDE’s lightweight UI framework for mobile and convergent applications, Kirigami, made OverlaySheet headers and footers use appropriate background colors, updated the app template and introduced a ToolBarLayout native object. Several other 5.74.0 Framework packages were updated, like Plasma Framework, KTextEditor and KIO. Bluetooth protocol bluez 5.55 fixed several handling issues related to the Audio/Video Remote Control Profile and the Generic Attribute Profile. A reverted Common Vulnerabilities and Exposures patch that was recommended by upstream in cpio 2.13 was once again added. GObject wrapper libgusb 0.3.5 fixed version scripts to be more portable. Documentation was fixed and translations were made for Finnish, Hindi and Russian in the 4.3.42 libstorage-ng update. YaST2 4.3.27 made a change to hide the heading of the dialog when no title is defined or the title is set to an empty string. Xen’s minor update reverted a previous libexec change for a qemu compatibility wrapper; the path used exists in domU.xml files in the emulator field. The snapshot is trending stable at a 99 rating, according to the Tumbleweed snapshot reviewer.
Snapshot 20200917 just recorded a stable rating of 99 while introducing one of the more difficult packages to do that with the update of systemd 246.4; the suite of basic building blocks for a Linux system reworked how to prevent journald from both enabling auditd and recording audit messages. The new version is easier to maintain after patches reached an all time low for the package in the distro. Text editor vim 8.2.1551 fixed a lengthy list of problems including a memory access error and debugger code that was insufficiently tested. Disk archiver package dar 2.6.12 fixed a bug related to the merging of an archive when re-compressing the data with another algorithm. The only major version to update this week in Tumbleweed was virt-manager from 2.2.1 to 3.0.0; the new release came out on Sept. 15 and provides a new UI that has a ‘Manual Install’ option, which creates a VM without any required install media.
Release manager Dominique Leuenberger highlighted in his Tumbleweed review of week 2020/38 some packages in the staging projects that should make it into a snapshot soon. Those packages include openssl 3.0, glibc 2.32, SELinux 3.1, GNOME 3.36.6 and binutils 2.35.
Triggered
Somebody pointed me to a research article about how many app developers fail to comply with the GDPR and data requests in general.
The sender suggested that I could use it in marketing for Nextcloud.
I appreciate such help, obviously, and often such articles are interesting. This one - I read it for a while, but honestly, while I think it is good that this is researched and attention is paid to it, I find the results neither very surprising NOR that horrible.
What, a privacy advocate NOT deeply upset at bad privacy practices?
Sir, yes, sir. You see, while the letter of the law is important, I think that intentions also are extremely important. Let me explain.
Not all GDPR violations are made equal
If you or your small business develops an app or runs a website to sell a product and you simply and honestly try to do a decent job while being a decent person, the GDPR is a big burden. Yes, the GDPR is good, giving people important rights. But if you run a mailing list on your local pottery sales website, with no intention other than to inform your prospective customers and followers of what you're up to, it can be a burden to have people send you GDPR takedown and 'delete me' requests instead of just having them, you know - unsubscribe via the link under your newsletter!
The goal of the GDPR, and of my personal privacy concerns, isn't at all related to such a business. If anything, their additional hardship (and we at Nextcloud have this issue too) is at best a byproduct of the goal. That byproduct isn't all bad - we all make mistakes, and being more aware of privacy is good, even for small businesses. The GDPR has forced many small businesses to re-think how they deal with private data, and that isn't a bad thing at all. But it isn't the main benefit or goal of the GDPR in my eyes. There are big businesses who totally COULD do better but never bothered, and now the GDPR forces them to get their act together. While that's a real good thing, even THAT is not, in my opinion, what the GDPR is about.
Privacy violation as a business
You see, there are businesses who don't violate privacy of people by accident. Or even because it is convenient. There are businesses who do it as their core business model. You know who I'm talking about - Facebook, Google. To a lesser but still serious degree - Microsoft and yes, even Apple, though you can argue they are perhaps in the "side hustle" rather than "it's their primary revenue stream" category.
For these organizations, gathering your private data is their life blood. They exploit it in many ways - some sell it, which is in my opinion definitely among the most egregious 'options'. Others, like Google and Facebook, hoard but also aggressively protect your data - they don't want to leak it too much, they want to monetize it themselves! Of course, in the process of that, they often leak it anyway - think Cambridge Analytica - that was in no way an incident, hundreds of apps get your private data via Google, Facebook, Microsoft and others. But by and large, they want to keep that data to themselves so they can use it to offer services - targeted ads. Which in turn, of course, get abused sometimes too.
My issue with this business model, even without the outright sale of data, is two-fold.
Ads work better than you think
First, in principle - while people might feel ads don't affect them, there is a reason companies pay for them. They DO affect your behavior. Maybe not as much or in the way marketing departments think or hope, but the effect exists.
How bad is that? Depends, I guess. To some degree, it is of course entirely legitimate that companies have a way to present their product to people. But modern targeting does more, including allowing companies to charge specific people different prices, and of course a wide array of sometimes nasty psychological tricks is used. The example Facebook once gave to potential advertisers, of targeting an insecure youth "at their most vulnerable" with an ad, is... rather disgusting.
This gets worse when we're not just talking about product ads but political ads, either from political parties or, of course, from foreign non-democratic adversaries trying to influence our freedoms in a rather direct and dangerous way. And again - this is more effective than most people realize or are willing to admit and has swayed elections already, making us all less free.
Centralization is bad
Second, there is simply a HUGE issue with putting all our eggs in one basket. Especially when that basket is in a foreign country and not protected by privacy and security laws compatible with those in your own country. Having a single point of failure, however well protected, is just not smart. Things WILL fail, always. Better to have slightly more breaches that each hit just a single provider, than one breach of all the private data of everyone in a country...
And that's not even talking about the fact that this data helps these companies get incredibly huge and then allows them to suppress or kill competition (or just buy it) - think Amazon, Microsoft. These tech molochs are just plain bad because of many reasons. They are anti-competitive, which raises prices, decreases choice, and the much lower innovation-per-dollar they produce is of course a worse deal for society too. They are too easy to control by law enforcement and censorship, impacting our freedoms - even when they're not 'foreign' to you. Yes, it is harder to censor 50000 private servers than one Google server farm!
Triggered
So, as you notice, this question triggered me. Not all privacy violations are equal. Intentions matter. As does market power. And the GDPR is not a silver bullet. It has downsides - compliance is often easier for big companies than small ones, a serious issue.
Luckily, our judicial system tends to look at the intentions behind the law, and I would expect a judge to fine an organization more heavily for truly bad business models than for honest mistakes. I hope I'm not too optimistic here.
From my side, I don't want to bang on people's heads for mistakes. I want to attack and challenge bad business models and bad intentions. A local, small app maker who fails to respond quickly enough to GDPR requests - not my target. Facebook - yes.
And by the way. Maybe it doesn't need to be said to most of you, dear readers, but of course - our open source world is, I still believe, a huge part of solving this problem. KDE, openSUSE and other Linuxes and desktops - and of course Nextcloud, Mastodon, Matrix and other decentralized and distributed and self-hosted platforms. We have ways to go, but we're making progress!
As I concluded to the person who triggered me - I know, this is far too long a reply to what they said.
But it triggered me ;-)
Best to reply over Twitter (twitter.com/jospoortvliet) or so; this awful Google platform makes commenting nearly impossible. And I know, the irony - replying on Twitter while I still have not moved away from blogger.com... Some day, some day. When I find time.
Feature Requests, Submit Requests for openSUSE Jump Take Shape
The openSUSE Project is progressing with the state of openSUSE Jump, which is the interim name given to the experimental distribution in the Open Build Service.
openSUSE Leap Release Manager Lubos Kocman sent an email to the project titled “Update on Jump and Leap 15.3 and proposed roadmap for the next steps” that explains the progress that has been made with Jump 15.2.1.
“We have some exciting news to share about the openSUSE Jump effort!” Kocman wrote. “We will have a Jira partner setup (coming) for openSUSE this week!”
Access to Jira will allow openSUSE Leap contributors to see updates on community feature requests and to comment on them or request further information. The process will be tested initially by one of the community members to see if it works properly.
Kocman also informed the project of a new OBS feature that will allow openSUSE Leap contributors to submit code changes “directly” against SUSE Linux Enterprise without having Submit Requests rejected unless they failed review.
“All openSUSE Leap contributors should have a look at Jump submit requests documentation,” Kocman wrote.
The OBS team is still working on the sync of comment updates for Submit Requests from SUSE’s Internal Build Service back to OBS. More info about this IBS and OBS topic can be found on https://en.opensuse.org/Portal:Jump:OBS:SRMirroring.
Jump is a prototype rebuild of openSUSE Leap 15.2 that is synchronizing SUSE Linux Enterprise source code and binaries.
Bringing the source code and binaries of both Leap and SLE together is a challenge that has been progressing successfully since late spring. Process adjustments, new tools and changes in workflows have been made to help with the efforts for developers, contributors and stakeholders.
Kocman also provided a link to the roadmap listing GO/NOGO decisions for Jump and Leap toward the end of October and beginning of November.
A little rant about talent
It’s become less common to hear people referred to as “resources” in recent times. There are more trendy “official vocab guidelines”, but what’s really changed? There are still phrases in common use that sound good but betray the same mindset.
I often hear people striving to “hire and retain the best talent” as if that is a strategy for success, or as if talent is a limited resource we must fight over.
Another common one is to describe employees as your “greatest asset”.
I’d like to believe both phrases come from the good intentions of valuing people. Valuing individuals as per the agile manifesto. I think these phrases betray a lack of consideration of the “…and interactions”.
The implication is organisations are in a battle to win and then protect as big a chunk as they can of a finite resource called “talent”. It’s positioned as a zero-sum game. There’s an implication that the impact of an organisation is a pure function of the “talent” it has accumulated.
People are not Talent. An organisation can amplify or stifle the brilliance of people. It can grow skills or curtail talent.
Talent is not skill. Talent gets you so far but skills can be grown. Does the team take the output that the people in it have the skill to produce? Or does the team provide an environment in which everyone can increase their skills and get more done than they could alone?

We might hire the people with the most pre-existing talent, and achieve nothing if we stifle them with a bureaucracy that prevents them from getting anything done. Organizational scar tissue that gets in the way; policies that demotivate.
Even without the weight of bureaucracy many teams are really just collections of individuals with a common manager. The outcomes of such groups are limited by the talent and preexisting skill of the people in them.
Contrast this with a team into which you can hire brilliant people who’ve yet to have the opportunity of being part of a team that grows them into highly skilled individuals. A team that gives everyone space to learn, provides challenges to stretch everyone, provides an environment where it’s safe to fail. Teams that have practices and habits that enable them to achieve great things despite the fallibility and limitations of the talent of each of the people in the team.
“when you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.”—AA Milne
While I’m a bear of very little brain, I’ve had the opportunity to be part of great teams that have taught me habits that help me achieve more than I can alone.
Habits like giving and receiving feedback. Like working together to balance each other’s weaknesses and learn from each other faster. Like making predictions and observing the results. Like investing in keeping things simple so they can fit into my brain. Like working in small steps. Like scheduled reflection points to consider how to improve how we’re working. Like occasionally throwing away the rules and seeing what happens.
Habits like thinking less and sensing more.
The post A little rant about talent appeared first on Benji's Blog.
Fun with Java Records
A while back I promised to follow up from this tweet to elaborate on the fun I was having with Java’s new Records (currently preview) feature.
Records, like lambdas and default methods on interfaces, are tremendously useful language features because they enable many different patterns and uses beyond the obvious.
Java 8 brought lambdas, with lots of compelling uses for streams. What I found exciting at the time was that, for the first time, lots of things that we’d previously have had to wait for as new language features could become library features. While waiting for lambdas we had a Java 7 release with try-with-resources. If we’d had lambdas, we could have implemented something similar in a library without needing a language change.
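For instance, a library-level stand-in for try-with-resources could have looked something like this sketch (hypothetical code, not from the post; checked exceptions are glossed over):
import java.util.function.Consumer;

final class Resources {
    // Run some code against a resource and always close it afterwards;
    // with lambdas this is an ordinary library method rather than new syntax.
    static <T extends AutoCloseable> void using(T resource, Consumer<T> body) throws Exception {
        try (T r = resource) {
            body.accept(r);
        }
    }
}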
There’s often lots one can do with a bit of creativity. Even if Brian Goetz does sometimes spoil one’s fun ¬_¬
Records are another such exciting addition to Java. They provide a missing feature that’s hard to correct for in libraries due to sensible limitations on other features (e.g. default methods on interfaces not being able to override equals/hashcode)
Here’s a few things that records help us do that would otherwise wait indefinitely to appear in the core language.
Implicitly Implement (Forwarding) Interfaces
Java 8 gave us default methods on interfaces. These allowed us to mix together behaviour defined in multiple interfaces. One use of this is to avoid having to re-implement all of a large interface if you just want to add a new method to an existing type. For example, adding a .map(f) method to List. I called this the Forwarding Interface pattern.
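A minimal sketch of such a mixin, assuming the delegate is exposed via an impl() accessor as in the MappableList below (illustrative only; the post's real Mappable may differ):
import java.util.List;
import java.util.function.Function;
import static java.util.stream.Collectors.toList;

interface Mappable<T> {
    // The wrapped list, supplied by whichever class mixes this interface in.
    List<T> impl();

    // The new behaviour being added to List via a default method.
    default <R> List<R> map(Function<T, R> f) {
        return impl().stream().map(f).collect(toList());
    }
}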
Using a forwarding interface still left us with a fair amount of boilerplate, just to delegate to a concrete implementation. Here’s a MappableList definition using a ForwardingList.
class MappableList<T> implements List<T>, ForwardingList<T>, Mappable<T> {
    private List<T> impl;

    public MappableList(List<T> impl) {
        this.impl = impl;
    }

    @Override
    public List<T> impl() {
        return impl;
    }
}
The map(f) implementation is defined in Mappable<T> and the List<T> implementation is defined in ForwardingList<T>. All the body of MappableList<T> is boilerplate to delegate to a given List<T> implementation.
We can improve on this a bit using anonymous types, thanks to JDK 10’s var. We don’t have to define MappableList<T> at all. We can define it inline with intersection casts and structural equivalence with a lambda that returns the delegate type.
var y = (IsA<List<String>> & Mappable<String> & FlatMappable<String> & Joinable<String>)
    () -> List.of("Anonymous", "Types");
This is probably a bit obscure for most people. Intersection casts aren’t commonly used. You’d also have to define your desired “mix” of behaviours at each usage site.
Records give us a better option. The implementation of a record definition can implicitly implement the boilerplate in the above MappableList definition
public record EnhancedList<T>(List<T> inner) implements
    ForwardingList<T>,
    Mappable<T>,
    Filterable<T, EnhancedList<T>>,
    Groupable<T> {}

interface ForwardingList<T> extends List<T>, Forwarding<List<T>> {
    List<T> inner();
    //…
}
Here we have defined a record with a single field named “inner“. This automatically defines a getter called inner() which implicitly implements the inner() method on ForwardingList. None of the boilerplate on the above MappableList is needed. Here’s the full code. Here’s an example using it to map over a list.
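Something along these lines (hypothetical usage, mirroring the test in the Example Usage section below):
// Wrapping a plain List gives us the mixed-in map(f) with no delegation boilerplate.
var words = new EnhancedList<>(List.of("one", "two"));
var doubled = words.map(s -> s + s);   // ["oneone", "twotwo"]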
Decomposing Records
Let’s define a Colour record
public record Colour(int red, int green, int blue) {}
This is nice and concise. However, what if we want to get the constituent parts back out again?
Colour colour = new Colour(1,2,3);
var r = colour.red();
var g = colour.green();
var b = colour.blue();
assertEquals(1, r);
assertEquals(2, g);
assertEquals(3, b);
Can we do better? How close can we get to object destructuring?
How about this.
Colour colour = new Colour(1,2,3);
colour.decompose((r,g,b) -> {
assertEquals(1, r.intValue());
assertEquals(2, g.intValue());
assertEquals(3, b.intValue());
});
How can we implement this in a way that requires minimal boilerplate? Default methods on interfaces come to the rescue again. What if we could get all of this additional sugary goodness on any record, simply by implementing an interface?
public record Colour(int red, int green, int blue)
    implements TriTuple<Colour, Integer, Integer, Integer> {}
Here we’re making our Colour record implement an interface so it can inherit behaviour from that interface.
Let’s make it work…
We’re passing the decompose method a lambda function that accepts three values. We want the implementation to invoke the lambda and pass our constituent values in the record (red, green, blue) as arguments when invoked.
Firstly let’s declare a default method in our TriTuple interface that accepts a lambda with the right signature.
interface TriTuple<TRecord extends TriTuple<TRecord, T, U, V>, T, U, V> {
    default void decompose(TriConsumer<T, U, V> withComponents) {
        //
    }
}
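TriConsumer (and the TriFunction used later by decomposeTo and to(..)) isn't part of the JDK; presumably they are just three-argument variants of Consumer and Function, along these lines:
// Three-argument counterparts of java.util.function.Consumer / Function.
@FunctionalInterface
interface TriConsumer<T, U, V> {
    void accept(T t, U u, V v);
}

@FunctionalInterface
interface TriFunction<T, U, V, R> {
    R apply(T t, U u, V v);
}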
Next we need a way of extracting the component parts of the record. Fortunately Java allows for this. There’s a new method Class::getRecordComponents that gives us an array of the constituent parts.
This lets us extract each of the three parts of the record and pass to the lambda.
var components = this.getClass().getRecordComponents();
return withComponents.apply(
    (T) components[0].getAccessor().invoke(this),
    (U) components[1].getAccessor().invoke(this),
    (V) components[2].getAccessor().invoke(this)
);
There’s some tidying we can do, but the above works. A very similar implementation would allow us to return a result built with the component parts of the record as well.
Colour colour = new Colour(1,2,3);
var sum = colour.decomposeTo((r,g,b) -> r+g+b);
assertEquals(6, sum.intValue());
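For reference, such a decomposeTo default method inside TriTuple might look roughly like this (a sketch with simplified error handling; the post's actual code may differ):
// Same reflection trick as decompose, but the lambda builds and returns a value
// from the record's three components.
@SuppressWarnings("unchecked")
default <R> R decomposeTo(TriFunction<T, U, V, R> withComponents) {
    try {
        var components = getClass().getRecordComponents();
        return withComponents.apply(
            (T) components[0].getAccessor().invoke(this),
            (U) components[1].getAccessor().invoke(this),
            (V) components[2].getAccessor().invoke(this));
    } catch (ReflectiveOperationException e) {
        throw new IllegalStateException(e);
    }
}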
Structural Conversion
Sometimes the types get in the way of people doing what they want to do with the data. However wrong it may be ¬_¬
Let’s see if we can allow people to convert between Colours and Towns.
public record Person(String name, int age, double height)
    implements TriTuple<Person, String, Integer, Double> {}
public record Town(int population, int altitude, int established)
    implements TriTuple<Town, Integer, Integer, Integer> {}
Colour colour = new Colour(1, 2, 3);
Town town = colour.to(Town::new);
assertEquals(1, town.population());
assertEquals(2, town.altitude());
assertEquals(3, town.established());
How to implement the “to(..)” method? We’ve already done it! It’s accepting a method reference to Town’s constructor. This is the same signature and implementation as our decomposeTo method above. So we can just alias it.
default <R extends TriTuple<R, T, U, V>> R to(TriFunction<T, U, V, R> ctor) {
    return decomposeTo(ctor);
}
Replace Property
We’ve now got a nice TriTuple utility interface allowing us to extend the capabilities that tri-records have.
Another nice feature would be to create a new record with just one property changed. Imagine we’re mixing paint and we want a variant on an existing shade. We could just add more of one colour, not start from scratch.
Colour colour = new Colour(1,2,3);
Colour changed = colour.with(Colour::red, 5);
assertEquals(new Colour(5,2,3), changed);
We’re passing the .with(..) method a method reference to the property we want to change, as well as the new value. How can we implement .with(..) ? How can it know that the passed method reference refers to the first component value?
We can in fact match by name.
The RecordComponent type from the standard library that we used above can give us the name of each component of the record.
We can get the name of the passed method reference by using a functional interface that extends from Serializable. This lets us access the name of the method the lambda is invoking. In this case giving us back the name “red”
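The MethodAwareFunction/MethodFinder code isn't reproduced in this excerpt, but the underlying trick looks roughly like this (a sketch using names borrowed from the post, with simplified error handling):
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.util.function.Function;

// A Function that is also Serializable: method references targeting it are compiled
// with a synthetic writeReplace() that returns a SerializedLambda, which knows the
// name of the method being referenced.
interface MethodAwareFunction<T, R> extends Function<T, R>, Serializable {
    default String methodName() {
        try {
            var writeReplace = getClass().getDeclaredMethod("writeReplace");
            writeReplace.setAccessible(true);
            var serialized = (SerializedLambda) writeReplace.invoke(this);
            return serialized.getImplMethodName();   // e.g. "red" for Colour::red
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}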
default <R> TRecord with(MethodAwareFunction<TRecord, R> prop, R newValue) {
    //
}
MethodAwareFunction extends another utility interface MethodFinder which provides us access to the Method invoked and from there, the name.
The last challenge is reflectively accessing the constructor of the type we’re trying to create. Fortunately we’re passing the type information to our utility interface at declaration time
public record Colour(int red, int green, int blue)
    implements TriTuple<Colour, Integer, Integer, Integer> {}
We want the Colour constructor. We can get it from Colour.class. We can get this by reflectively accessing the first type parameter of the TriTuple interface. Using Class::getGenericInterfaces() then ParameterizedType::getActualTypeArguments() and taking the first to get a Class<Colour>
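In code, that lookup might be something like this sketch (it assumes TriTuple is the first interface the record declares):
// Recover e.g. Colour.class from "implements TriTuple<Colour, Integer, Integer, Integer>".
default Class<?> recordType() {
    var triTuple = (java.lang.reflect.ParameterizedType) getClass().getGenericInterfaces()[0];
    return (Class<?>) triTuple.getActualTypeArguments()[0];
}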
Here’s a full implementation.
Automatic Builders
We can extend the above to have some similarities with the builder pattern, without having to create a builder manually each time.
We’ve already got our .with(namedProperty, value) method to build a record step by step. All we need is a way of creating a record with default values that we can replace with our desired values one at a time.
Person sam = builder(Person::new)
.with(Person::name, "Sam")
.with(Person::age, 34)
.with(Person::height, 83.2);
assertEquals(new Person("Sam", 34, 83.2), sam);
static <T, U, V, TBuild extends TriTuple<TBuild, T, U, V>> TBuild builder(Class<TBuild> cls) {
    //
}
This static builder method invokes the passed constructor reference passing it appropriate default values. We’ll use the same SerializedLambda technique from above to access the appropriate argument types.
static <T, U, V, TBuild extends TriTuple<TBuild, T, U, V>> TBuild builder(MethodAwareTriFunction<T, U, V, TBuild> ctor) {
    var reflectedConstructor = ctor.getContainingClass().getConstructors()[0];
    var defaultConstructorValues = Stream.of(reflectedConstructor.getParameterTypes())
        .map(defaultValues::get)
        .collect(toList());
    return ctor.apply(
        (T) defaultConstructorValues.get(0),
        (U) defaultConstructorValues.get(1),
        (V) defaultConstructorValues.get(2)
    );
}
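The defaultValues lookup used above isn't shown in this excerpt; presumably it is just a map from constructor parameter type to a zero-ish placeholder, something like:
// Hypothetical defaults per parameter type, so the record can be constructed
// before the real values are filled in one at a time via .with(..).
static final java.util.Map<Class<?>, Object> defaultValues = java.util.Map.of(
        int.class, 0,
        long.class, 0L,
        double.class, 0.0,
        boolean.class, false,
        String.class, "");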
Once we’ve invoked the constructor with default values we can re-use the .with(prop,value) method we created above to build a record up one value at a time.
Example Usage
public record Colour(int red, int green, int blue)
    implements TriTuple<Colour, Integer, Integer, Integer> {}

public record Person(String name, int age, double height)
    implements TriTuple<Person, String, Integer, Double> {}

public record Town(int population, int altitude, int established)
    implements TriTuple<Town, Integer, Integer, Integer> {}

public record EnhancedList<T>(List<T> inner) implements
    ForwardingList<T>,
    Mappable<T> {}
@Test
public void map() {
var mappable = new EnhancedList<>(List.of("one", "two"));
assertEquals(
List.of("oneone", "twotwo"),
mappable.map(s -> s + s)
);
}
@Test
public void decomposable_record() {
Colour colour = new Colour(1,2,3);
colour.decompose((r,g,b) -> {
assertEquals(1, r.intValue());
assertEquals(2, g.intValue());
assertEquals(3, b.intValue());
});
var sum = colour.decomposeTo((r,g,b) -> r+g+b);
assertEquals(6, sum.intValue());
}
@Test
public void structural_convert() {
Colour colour = new Colour(1, 2, 3);
Town town = colour.to(Town::new);
assertEquals(1, town.population());
assertEquals(2, town.altitude());
assertEquals(3, town.established());
}
@Test
public void replace_property() {
Colour colour = new Colour(1,2,3);
Colour changed = colour.with(Colour::red, 5);
assertEquals(new Colour(5,2,3), changed);
Person p1 = new Person("Leslie", 12, 48.3);
Person p2 = p1.with(Person::name, "Beverly");
assertEquals(new Person("Beverly", 12, 48.3), p2);
}
@Test
public void auto_builders() {
Person sam = builder(Person::new)
.with(Person::name, "Sam")
.with(Person::age, 34)
.with(Person::height, 83.2);
assertEquals(new Person("Sam", 34, 83.2), sam);
}
Code is all in this test and this other test. Supporting records with arities other than 3 is left as an exercise to the reader ¬_¬
The post Fun with Java Records appeared first on Benji's Blog.
Rethinking Security on Linux: evaluating Antivirus & Password Manager solutions
Hacked!
Recently I had an experience that led me to re-evaluate my approach to security on Linux. I had updated my desktop computer to the latest openSUSE Leap (15.2) version. I also installed the proprietary Nvidia drivers. At random points during the day I experienced a freeze of my KDE desktop: I could not move my mouse or type on my keyboard. It probably involved Firefox, because I always had Firefox open during these moments. So for a couple of days, I tried to see in my logs what was going on. In /var/log/messages (there is a very nice YaST module for that) you can see the latest messages.
Suddenly I saw messages that I could not explain. Below, I have copied some sample log lines that give you an impression of what was happening. I have excluded the lines with personal information. But to give you an impression: I could read, line by line, the names, surnames, addresses and e-mail addresses of all my family members in the /var/log/messages file. I thought I was being hacked!
2020-09-04T11:01:48.001457+02:00 linux-910h kdeinit5[8884]: calligra.lib.store: Opening for reading "Thumbnails/thumbnail.png"
2020-09-04T11:01:48.002502+02:00 linux-910h kdeinit5[8884]: calligra.lib.store: Closing
....
2020-09-04T11:01:50.884670+02:00 linux-910h kdeinit5[8884]: calligra.lib.main: virtual bool KoDocument::openUrl(const QUrl&) url= "file:///home/username/Documents/Address list of my family.xls"
....
2020-09-04T11:01:52.571006+02:00 linux-910h kdeinit5[8884]: calligra.filter.msooxml: File "/xl/workbook.xml" loaded and parsed.
2020-09-04T11:01:52.571056+02:00 linux-910h kdeinit5[8884]: calligra.lib.store: opening for writing "content.xml"
2020-09-04T11:01:52.571080+02:00 linux-910h kdeinit5[8884]: calligra.lib.store: Closing
2020-09-04T11:01:52.571105+02:00 linux-910h kdeinit5[8884]: calligra.lib.store: Wrote file "content.xml" into ZIP archive. size 72466
I took immediate action, to prevent my PC from sharing more information over the internet:
- I unplugged the internet.
- I changed my root password and my user password.
- I blocked all external access via YaST Firewall
- I de-installed the Calligra suite.
- I informed my family (via my laptop) that I might have been hacked.
- I informed my work (via my laptop) that I might have been hacked.
I needed to find out what was happening. I needed to know if a trojan / malware was trying to steal my personal information. So I tried searching for the ZIP archive which was referenced; it might still be stored somewhere on my PC. I used KFind to look up all files which were created in the last 8 hours. And then I found a lot of thumbnail files which were created by… Gwenview. Stored in a temp folder.
I started to realize that it might not be a hack, but something that was rendering previews, just like in Gwenview. I checked Dolphin and detected that I had the preview function enabled. I checked the log files again. Indeed, whenever I had opened a folder with Dolphin, all Word and Excel files in that folder were ‘processed’. I browsed several folders after deleting Calligra and there were no more log lines added. I re-installed the Calligra suite and noticed the calligra-extras-dolphin package. I browsed the same folders and indeed, the log lines started appearing all over again. I had found the culprit. It wasn’t a hack.
The Dolphin plugin ‘calligra-extras-dolphin’ generates previews for Microsoft Office files. And only for Microsoft Office files. I wasn’t aware that this was happening. In Dolphin I had the preview function enabled. However, I didn’t see the previews in Dolphin, because I used the ‘Details view mode’. So while I browsed my Downloads and Documents folders, this plugin started rendering previews for my Word and Excel files. The only Word and Excel files that I have on my PC were sent by my family and my friends, because I save everything in ODF. Two of these files contained the addresses of my family members. Possibly these operations also happen for all ODF preview files. But they are not logged in /var/log/messages. The Calligra plugin does log everything in there.
Rethinking Security on Linux
This incident forced me to rethink my security. I didn’t have a virus scanner installed. I knew that not many viruses were aimed at Linux. And the ones that are aimed at Linux systems mostly target WordPress (or other popular server software). I don’t download files from unknown sources.
This was true for my personal use. But because of Covid-19, I am forced to work from home. And as an IT architect, I evaluate a lot of software. To learn more about commercial software offerings, I request a lot of whitepapers. So in the last 6 months I actually did download a lot of PDF files from unknown sources. Also on my Linux system.
I was already on the verge of using a password manager. However, the thought of needing to change over 100 passwords caused me to wait for ‘the right time’. And wait I did, because I had been contemplating the use of a password manager for over 2 years. Now was the time to change. I decided to look for solutions that were native to Linux.
Searching for a Linux antivirus solution
I have used ClamAV in the past. I remembered that I didn’t like the user interface. However that was years ago. Maybe now it looks better. I opened YaST and installed ClamAV and the ClamTK GUI. I opened the application and… this still wasn’t for me. The software might be amazing, don’t get me wrong. But I didn’t like the look and feel.

Which led me to look for commercial alternatives. When searching on Google, it seems that there are quite a few virus scanners that work natively on Linux.
- ESET NOD32 Antivirus 4 Linux
- Comodo Antivirus for Linux
- F-Prot (End-of-Life 31-July-2021)
- AVG for Linux (discontinued)
- Sophos Antivirus for Linux
- Bitdefender GravityZone Ultra Security for Linux and Mac
- F-Secure Linux Security
- Kaspersky Endpoint Security for Linux
When you start to look into it in more detail, most of that information is very dated. Comodo is still selling Linux Antivirus, but hasn’t updated their solution in years. They only support older distributions and it doesn’t look like they will support newer Linux distributions in the future. F-Prot has stopped selling Antivirus for Linux and will end their support in 2021. AVG has stopped their Linux support already.
Sophos, Bitdefender, F-Secure and Kaspersky are still offering solutions for Linux, but not aimed at home users. They are targeted at businesses, running Linux on the server.
So the only option for home users is ESET NOD32 Antivirus 4 Linux. It serves as both antivirus and anti-malware software. It has good documentation. The GUI is very user friendly. I like that there are various predefined scans that you can execute directly. I purchased a license for my desktop and laptop for 3 years for 60 euros, which I feel is a very reasonable price.

Choosing a password manager
I already used Mozilla Firefox to save/prefill my passwords. I have secured my browser with a Master password. This solution changed into Mozilla Lockwise, which also added the Android app. Because I also use Firefox on my Android phone, the Mozilla Lockwise solution was sufficient to sync all my passwords across my devices. However, it falls short compared with the features of a true password manager. I first needed to know what I would be looking for. My personal criteria:
- Cloud hosted (centralized storage of passwords)
- Offers a Web interface
- Offers a Linux client
- Offers a Firefox plugin
- Offers an Android app
- User friendly interface
- Secure / trusted by the community
- Has Multi Factor Authentication
- Is preferably open source
- Easy to import Firefox/Chrome saved passwords
I have looked at the following alternatives:
- LastPass
- 1Password
- Dashlane
- Passbolt
- KeePassXC
- Bitwarden
LastPass is a very good password manager that has been around for a while. It has a command-line client for Linux, which is very cool. Of course they offer a web interface. Furthermore they offer GUI clients for Windows, MacOS, Android and iOS. Unfortunately there is no GUI for Linux. They offer plugins for Firefox, Chrome, Opera, Safari and Edge. They have a free tier that is good for personal use across your devices. And if you go premium, you pay $2.90 / month. The only negative that I can think of is that it is not open source software.
1Password is the biggest competitor to LastPass. They also have a command-line client for Linux, which is a big plus. They also offer a web interface. They have GUI clients for Windows, MacOS, Android and iOS. And (like LastPass) unfortunately no Linux GUI client. They have plugins for Firefox, Chrome, Brave, Safari and Edge. And they are (also) not open source. The pricing is less appealing than LastPass, since there is no free tier. You start paying $3 / month after 30 days of free trial. Therefore I would choose LastPass over 1Password.
Dashlane is a competitor to LastPass and 1Password. They don’t have a Linux client. Of course they offer a web interface. And they have clients for Windows, MacOS, Android and iOS. They offer plugins for Firefox, Chrome, Edge and Safari. They are not open source. And I don’t like their pricing structure. The free tier is limited to 1 device, which is a big NO for me. You can only save 50 free passwords. That means that if you go with Dashlane, you might avoid storing certain passwords in your password manager in order not to pay for their premium service. Which is exactly the opposite of what a password manager is trying to accomplish. I would avoid this one.
Passbolt is interesting. It’s an open source password manager that offers a self-hosted option, which is very cool, making it centrally / cloud hosted. However, they do not offer local clients at all. Everything is browser based. They do have Firefox and Chrome plugins. This password manager is very interesting for development teams or webmaster teams. You can put all your passwords in a central place in a trusted application. You can host it yourself on AWS or Azure or on a VPS provider of choice. They are located in Luxembourg, which is good because it’s in the EU. Passbolt is very much aimed at businesses. I don’t think it is great as a personal password manager.
KeePassXC is a local password manager. The application is available on Linux, Windows and MacOS. Or you can use the Android client Keepass2Android. There are plugins for Firefox and Chrome. You need to do some work if you want to use your passwords across multiple devices. You can save your database to Google Drive or Dropbox. Or you can use an application like Syncthing to sync your database. The architecture of this password manager is decentralized by default. If you want maximum privacy, go for this option. This one is not for me. I like to have a centrally managed cloud solution.

If someone would combine the strengths of Passbolt and KeePassXC, I feel this combination would gain a lot of traction in the Linux and open source world. But that is a pipe dream for now. The solution that I have chosen might be just as appealing to the open source community.
Bitwarden is the password manager that I have decided to use. It meets all my criteria. It provides me with the ease of use of a centrally managed solution. But it’s still an open source solution. Best of all, they offer a Linux GUI client. Furthermore they have GUI clients for Windows, MacOS, Android and iOS. They offer plenty of browser plugins: for Firefox, Chrome, Opera, Vivaldi, Brave, Tor Browser, Edge and Safari. They have command line applications for Linux, Windows, MacOS and even offer an NPM package. I love their pricing structure. Their free tier is probably all that you need. You can sync across all your devices and there is no limit on the number of passwords that you can store. However, they also offer a premium tier for just $10 / year. I decided to go for the paid option, to keep them in business.

Conclusion
Linux is a very secure operating system. But security is always a moving target. You can make your Linux system more secure by limiting user permissions, by configuring the firewall, by configuring AppArmor, by sandboxing applications and by making lower level changes (YaST Security Center).
The solutions mentioned in this article do something different: they improve your online security, by preventing viruses and malware from running on your system, and by making sure that your passwords are always random, strong and unique for each site. The mentioned solutions are simple ways for everyone to up their security game. I hope my evaluation of these solutions helps other Linux users to become safer online.
Published on: 18 September 2020
openSUSE Tumbleweed – Review of the week 2020/38
Dear Tumbleweed users and hackers,
An average week, with an average number of 5 snapshots (0910, 0914, 0915, 0916, and 0917 – with 0917 just being synced out). The content of these snapshots included:
- KDE Applications 20.08.1
- Qt 5.15.1
- PackageKit 1.2.1
- Systemd 246.4
- Virt-Manager 3.0.0
The staging projects around glibc are barely moving, and as such, the stagings still include:
- KDE Frameworks 5.74.0
- GNOME 3.36.6
- Linux kernel 5.8.9
- glibc 2.32
- binutils 2.35
- gettext 0.21
- bison 3.7.1
- SELinux 3.1
- openssl 3.0
Fast Kernel Builds - 2020
A number of months ago I did an “Ask Me Anything” interview on r/linux on reddit. As part of that, a discussion of the hardware I use came up, and someone said, “I know someone that can get you a new machine”, “get that person a new machine!”, or something like that.
Fast forward a few months, and a “beefy” AMD Threadripper 3970X shows up on my doorstep thanks to the amazing work of Wendell Wilson at Level One Techs.
Ever since I started doing Linux kernel development, the hardware I use has been a mix of things donated to me for development (workstations from Intel and IBM, laptops from Dell), machines my employer has bought for me (various laptops over the years), and machines I’ve bought on my own because I “needed” it (workstations built from scratch, Apple Mac Minis, laptops from Apple and Dell and ASUS and Panasonic). I know I am extremely lucky in this position, and anything that has been donated to me has been done so only to ensure that the hardware works well on Linux. “Will code for hardware” was an early mantra of many kernel developers, myself included, and hardware companies are usually willing to donate machines and peripherals to ensure kernel support.
This new AMD machine is just another in a long line of good workstations that help me read email really well. Oops, I mean, “do kernel builds really fast”…
For full details on the system, see this forum description, and this video that Wendell did in building the machine, and then this video of us talking about it before it was sent out. We need to do a follow-on one now that I’ve had it for a few months and have gotten used to it.
Benchmark tools
Below I post the results of some benchmarks that I have done to try to show the speed of different systems. I’ve used the tool Fio version fio-3.23-28-g7064, kcbench version v0.9.0 (from git), and perf version 5.7.g3d77e6a8804a. All of these are great for doing real-world tests of I/O systems (fio), kernel build tests (kcbench), and “what is my system doing at the moment” queries (perf). I recommend trying all of these out yourself if you haven’t done so already.
Fast Builds
I’ve been using a laptop for my primary development system for a number of years now, due to travel and moving around a bit, and because it was just “good enough” at the time. I do some local builds and testing, but have a “build machine” in a data center somewhere, that I do all of my normal stable kernel builds on, as it is much much faster than any laptop. It is set up to do kernel builds directly off of a RAM disk, ensuring that I/O isn’t an issue. Given that it has 128GB of RAM, carving out a 40GB ramdisk for kernel builds to run on (room for 4-5 at once) has worked really well, with kernel builds of a full kernel tree in a few minutes.
Here’s the output of kcbench on my data center build box which is running Fedora 32:
Processor: Intel Core Processor (Broadwell) [40 CPUs]
Cpufreq; Memory: Unknown; 120757 MiB
Linux running: 5.8.7-200.fc32.x86_64 [x86_64]
Compiler: gcc (GCC) 10.2.1 20200723 (Red Hat 10.2.1-1)
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 40): 81.92 seconds / 43.95 kernels/hour [P:3033%]
Run 2 (-j 40): 83.38 seconds / 43.18 kernels/hour [P:2980%]
Run 3 (-j 46): 82.11 seconds / 43.84 kernels/hour [P:3064%]
Run 4 (-j 46): 81.43 seconds / 44.21 kernels/hour [P:3098%]
Contrast that with my current laptop:
Processor: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz [8 CPUs]
Cpufreq; Memory: powersave [intel_pstate]; 15678 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 8): 392.69 seconds / 9.17 kernels/hour [P:768%]
Run 2 (-j 8): 393.37 seconds / 9.15 kernels/hour [P:768%]
Run 3 (-j 10): 394.14 seconds / 9.13 kernels/hour [P:767%]
Run 4 (-j 10): 392.94 seconds / 9.16 kernels/hour [P:769%]
Run 5 (-j 4): 441.86 seconds / 8.15 kernels/hour [P:392%]
Run 6 (-j 4): 440.31 seconds / 8.18 kernels/hour [P:392%]
Run 7 (-j 6): 413.48 seconds / 8.71 kernels/hour [P:586%]
Run 8 (-j 6): 412.95 seconds / 8.72 kernels/hour [P:587%]
Then the new workstation:
Processor: AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory: schedutil [acpi-cpufreq]; 257693 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 64): 37.15 seconds / 96.90 kernels/hour [P:4223%]
Run 2 (-j 64): 37.14 seconds / 96.93 kernels/hour [P:4223%]
Run 3 (-j 71): 37.16 seconds / 96.88 kernels/hour [P:4240%]
Run 4 (-j 71): 37.12 seconds / 96.98 kernels/hour [P:4251%]
Run 5 (-j 32): 43.12 seconds / 83.49 kernels/hour [P:2470%]
Run 6 (-j 32): 43.81 seconds / 82.17 kernels/hour [P:2435%]
Run 7 (-j 38): 41.57 seconds / 86.60 kernels/hour [P:2850%]
Run 8 (-j 38): 42.53 seconds / 84.65 kernels/hour [P:2787%]
Having a local machine that builds kernels faster than my external build box has been a liberating experience. I can do many more local tests before sending things off to the build systems for “final test builds” there.
Here’s a picture of my local box doing kernel builds, and the remote machine doing builds at the same time, both running bpytop to monitor what is happening (htop doesn’t work well for huge numbers of cpus). It’s not really all that useful, but is fun eye-candy:

SSD vs. NVME
As shipped to me, the machine booted from a RAID array of NVME disks. Outside of laptops, I’ve not used NVME disks, only SSDs. Given that I didn’t really “trust” the Linux install on the disk, I deleted the data on the disks, and installed a trusty SATA SSD disk and got Linux up and running well on it.
After that was all up and running well (btw, I use Arch Linux), I looked into the NVME disk, to see if it really would help my normal workflow out or not.
Firing up fio, here are the summary numbers of the different disk systems using the default “examples/ssd-test.fio” test settings:
SSD:
Run status group 0 (all jobs):
READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=10.0GiB (10.7GB), run=46672-46672msec
Run status group 1 (all jobs):
READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=6855MiB (7188MB), run=60001-60001msec
Run status group 2 (all jobs):
WRITE: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57865-57865msec
Run status group 3 (all jobs):
WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=10.0GiB (10.7GB), run=58539-58539msec
Disk stats (read/write):
sda: ios=4375716/5243124, merge=548/5271, ticks=404842/436889, in_queue=843866, util=99.73%
NVME:
Run status group 0 (all jobs):
READ: bw=810MiB/s (850MB/s), 810MiB/s-810MiB/s (850MB/s-850MB/s), io=10.0GiB (10.7GB), run=12636-12636msec
Run status group 1 (all jobs):
READ: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57875-57875msec
Run status group 2 (all jobs):
WRITE: bw=558MiB/s (585MB/s), 558MiB/s-558MiB/s (585MB/s-585MB/s), io=10.0GiB (10.7GB), run=18355-18355msec
Run status group 3 (all jobs):
WRITE: bw=553MiB/s (580MB/s), 553MiB/s-553MiB/s (580MB/s-580MB/s), io=10.0GiB (10.7GB), run=18516-18516msec
Disk stats (read/write):
md0: ios=5242880/5237386, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1310720/1310738, aggrmerge=0/23, aggrticks=63986/25048, aggrin_queue=89116, aggrutil=97.67%
nvme3n1: ios=1310720/1310729, merge=0/0, ticks=63622/25626, in_queue=89332, util=97.63%
nvme0n1: ios=1310720/1310762, merge=0/92, ticks=63245/25529, in_queue=88858, util=97.67%
nvme1n1: ios=1310720/1310735, merge=0/3, ticks=64009/24018, in_queue=88114, util=97.58%
nvme2n1: ios=1310720/1310729, merge=0/0, ticks=65070/25022, in_queue=90162, util=97.49%
Full logs of both tests can be found here for the SSD, and here for the NVME array.
Basically the NVME array is up to 3 times faster than the SSD, depending on the specific read/write test, and is faster for everything overall.
But does such fast storage actually matter for my normal workload of kernel builds? Normally a kernel build is very I/O intensive, but only up to a point. If the storage system can keep the CPU “full” of new data to build, and writes do not stall, a kernel build should be limited by CPU power, if the storage system can go fast enough.
So, is a SSD “fast” enough on a huge AMD Threadripper system?
In short, yes, here’s the output of kcbench running on the NVME disk:
Processor: AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory: schedutil [acpi-cpufreq]; 257693 MiB
Linux running: 5.8.8-arch1-1 [x86_64]
Compiler: gcc (GCC) 10.2.0
Linux compiled: 5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command: make vmlinux
Filling caches: This might take a while... Done
Run 1 (-j 64): 36.97 seconds / 97.38 kernels/hour [P:4238%]
Run 2 (-j 64): 37.18 seconds / 96.83 kernels/hour [P:4220%]
Run 3 (-j 71): 37.14 seconds / 96.93 kernels/hour [P:4248%]
Run 4 (-j 71): 37.22 seconds / 96.72 kernels/hour [P:4241%]
Run 5 (-j 32): 44.77 seconds / 80.41 kernels/hour [P:2381%]
Run 6 (-j 32): 42.93 seconds / 83.86 kernels/hour [P:2485%]
Run 7 (-j 38): 42.41 seconds / 84.89 kernels/hour [P:2797%]
Run 8 (-j 38): 42.68 seconds / 84.35 kernels/hour [P:2787%]
Almost the exact same number of kernels built per hour.
So for a kernel developer, right now, a SSD is “good enough”, right?
It’s not just all builds
While kernel builds are the most time-consuming thing that I do on my systems, the other “heavy” thing that I do is lots of git commands on the Linux kernel tree. git is really fast, but it is limited by the speed of the storage medium for lots of different operations (clones, switching branches, and the like).
After I switched to running my kernel trees off of the NVME storage, it “felt” like git was going faster now, so I came up with some totally-artificial benchmarks to try to see if this was really true or not.
One common thing is cloning a whole kernel tree from a local version in a new directory to do different things with it. Git is great in that you can keep the “metadata” in one place, and only check out the source files in the new location, but dealing with 70 thousand files is not “free”.
$ cat clone_test.sh
#!/bin/bash
git clone -s ../work/torvalds/ test
sync
And, to make sure the data isn’t just coming out of the kernel cache, be sure to flush all caches first.
SSD output:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.
Performance counter stats for './clone_test.sh':
4,971.83 msec task-clock:u # 0.536 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
92,713 page-faults:u # 0.019 M/sec
14,623,046,712 cycles:u # 2.941 GHz (83.18%)
720,522,572 stalled-cycles-frontend:u # 4.93% frontend cycles idle (83.40%)
3,179,466,779 stalled-cycles-backend:u # 21.74% backend cycles idle (83.06%)
21,254,471,305 instructions:u # 1.45 insn per cycle
# 0.15 stalled cycles per insn (83.47%)
2,842,560,124 branches:u # 571.734 M/sec (83.21%)
257,505,571 branch-misses:u # 9.06% of all branches (83.68%)
9.270460632 seconds time elapsed
3.505774000 seconds user
1.435931000 seconds sys
NVME disk:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.
Performance counter stats for './clone_test.sh':
5,183.64 msec task-clock:u # 0.833 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
87,409 page-faults:u # 0.017 M/sec
14,660,739,004 cycles:u # 2.828 GHz (83.46%)
712,429,063 stalled-cycles-frontend:u # 4.86% frontend cycles idle (83.40%)
3,262,636,019 stalled-cycles-backend:u # 22.25% backend cycles idle (83.09%)
21,241,797,894 instructions:u # 1.45 insn per cycle
# 0.15 stalled cycles per insn (83.50%)
2,839,260,818 branches:u # 547.735 M/sec (83.30%)
258,942,077 branch-misses:u # 9.12% of all branches (83.25%)
6.219492326 seconds time elapsed
3.336154000 seconds user
1.593855000 seconds sys
So a “clone” is faster by 3 seconds, nothing earth shattering, but noticeable.
But clones are rare; what’s more common is switching between branches, which checks out a subset of the different files depending on what is contained in the branches. It’s a lot of logic to figure out exactly what files need to change.
Here’s the test script:
$ cat branch_switch_test.sh
#!/bin/bash
cd test
git co -b old_kernel v4.4
sync
git co -b new_kernel v5.8
sync
And the results on the different disks:
SSD:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'
Performance counter stats for './branch_switch_test.sh':
10,500.82 msec task-clock:u # 0.613 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
195,900 page-faults:u # 0.019 M/sec
27,773,264,048 cycles:u # 2.645 GHz (83.35%)
1,386,882,131 stalled-cycles-frontend:u # 4.99% frontend cycles idle (83.54%)
6,448,903,713 stalled-cycles-backend:u # 23.22% backend cycles idle (83.22%)
39,512,908,361 instructions:u # 1.42 insn per cycle
# 0.16 stalled cycles per insn (83.15%)
5,316,543,747 branches:u # 506.298 M/sec (83.55%)
472,900,788 branch-misses:u # 8.89% of all branches (83.18%)
17.143453331 seconds time elapsed
6.589942000 seconds user
3.849337000 seconds sys
NVME:
$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'
Performance counter stats for './branch_switch_test.sh':
10,945.41 msec task-clock:u # 0.921 CPUs utilized
0 context-switches:u # 0.000 K/sec
0 cpu-migrations:u # 0.000 K/sec
197,776 page-faults:u # 0.018 M/sec
28,194,940,134 cycles:u # 2.576 GHz (83.37%)
1,380,829,465 stalled-cycles-frontend:u # 4.90% frontend cycles idle (83.14%)
6,657,826,665 stalled-cycles-backend:u # 23.61% backend cycles idle (83.37%)
41,291,161,076 instructions:u # 1.46 insn per cycle
# 0.16 stalled cycles per insn (83.00%)
5,353,402,476 branches:u # 489.100 M/sec (83.25%)
469,257,145 branch-misses:u # 8.77% of all branches (83.87%)
11.885845725 seconds time elapsed
6.741741000 seconds user
4.141722000 seconds sys
Just over 5 seconds faster on an nvme disk array.
Now 5 seconds doesn’t sound like much, but I’ll take it…
Conclusion
If you haven’t looked into new hardware in a while, or are stuck doing kernel development on a laptop, please seriously consider doing so; the power in a small desktop tower these days (and who is traveling anymore that needs a laptop?) is well worth it if possible.
Again, many thanks to Level1Techs for the hardware, it’s been put to very good use.