Lenovo BIOS Update method for Linux and USB thumb drive
If you're like me and you never like the methods laid out by the manufacturer, burning an ISO to CD or using a Windows update utility, then follow these Linux instructions.

1. Get the BIOS update ISO from the Lenovo support site. I have a ThinkPad W530 (2447-23U), so you will want to grab your machine type (mine was 2447) and your model (mine was 23U). You can find this information with dmidecode or hwinfo. Here is a link to the ISO file I downloaded for my machine type and model: http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS029170#os (g5uj17us.iso). Once you have the right ISO file, move on to step 2.

2. Get 'geteltorito' and extract the boot image from the ISO by executing the commands below.

$ wget 'http://userpages.uni-koblenz.de/~krienke/ftp/noarch/geteltorito/geteltorito.pl'
$ perl geteltorito.pl g5uj17us.iso > biosupdate.img
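The machine type/model lookup in step 1 can be scripted. The sketch below parses the type and model out of a dmidecode-style product string; the sample value is hardcoded for illustration, since the real command (`sudo dmidecode -s system-product-name`) needs root and its exact output format varies by machine.

```shell
# Illustrative only: on the real machine you would use
#   sudo dmidecode -s system-product-name
# ThinkPads encode machine type and model together, e.g. "2447-23U".
product="2447-23U"              # sample value, not read from hardware
machine_type=${product%%-*}     # everything before the first '-'
model=${product#*-}             # everything after the first '-'
echo "Machine Type: $machine_type, Model: $model"
```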
Note: if you're wondering where to get the geteltorito Perl script and the link above is not working for you, you can visit the developer's website at http://freecode.com/projects/geteltorito
3. Copy the image to the USB thumb drive once your thumb drive is connected.

$ sudo dd if=biosupdate.img of=/dev/usbthumbdrive bs=512K

Reboot, press F12, and boot from USB. Execute the flash utility. Have a lot of fun!
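A word of caution on step 3: of=/dev/usbthumbdrive is a placeholder, so double-check the real device node (for example with lsblk) before running dd, as writing to the wrong device destroys its contents. The invocation pattern itself can be rehearsed safely against scratch files:

```shell
# Rehearse the dd pattern against scratch files instead of a real device.
# /tmp/biosupdate.img stands in for the extracted boot image.
dd if=/dev/zero of=/tmp/biosupdate.img bs=1K count=64 2>/dev/null
dd if=/tmp/biosupdate.img of=/tmp/fake-usb.img bs=512K 2>/dev/null
cmp -s /tmp/biosupdate.img /tmp/fake-usb.img && echo "image copied byte-for-byte"
```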
Latest Images
I have not been very active with my blog lately. Things are always changing, and finding time to gather some visuals is harder at times.
Over the past few weeks I have been involved in making some artwork for the openSUSE Summit coming up in November in Florida. It has been a great opportunity to put out some simple designs for our newest conference format. This conference will follow SUSECon, which is set to take place right before the Summit. If you are in the Florida area, head on over to summit.opensuse.org and find out the latest about this conference.
In the meantime I took a trip around Utah (my backyard) to see some good landscape and take some pictures. I believe Utah has a lot to offer to those who love the cooler seasons. Its landscape is made up of beautiful mountains and wonderful shrubbery. When fall (autumn) comes around, people from all over come to Utah to tour the national parks, looking for the perfect changes in leaf color and a mix of patches of snow and amazing sunsets.
Taking this to heart, I dedicated an afternoon to taking some shots and testing my luck with light. Here are the results. Maybe some of these could even serve as an awesome fall wallpaper.
Enjoy!
Graphics DevRoom at FOSDEM2014.
It's not called the X.org DevRoom this time round, but a hopefully more general Graphics DevRoom. As with the X.org DevRooms before, anything related to graphics drivers and windowing systems goes. While the new name should make it clearer that this DevRoom is about more than just X, it doesn't fully cover the load either, as the room explicitly includes input drivers as well.
Some people have already started wondering why I haven't been whining at them before. Well, my trusted system of blackmailing people into holding talks early on failed this year. The FOSDEM deadline was too early and XDC was too late, so I decided to take a chance, and request a devroom again, in the hope that enough people will make it over to the fantastic madness that is FOSDEM.
After endless begging and grovelling the FOSDEM organizers got so fed up that they gave us two full days again. This means that we will be able to better group things, and avoid a scheduling clash like with the ARM talks last year (where ARM system guys were talking in one room exactly when ARM graphics guys were talking in another). All of this doesn't mean that First Come, First Serve doesn't apply, and if you do not want to hold a talk with a hangover in an empty DevRoom, you better move quickly :)
The FOSDEM organizers have a system called pentabarf. This is where everything is tracked and the schedules are created, and, almost magically, at the other end, all sorts of interesting things fall out, like the unbelievably busy but clear website that you see every year. This year, though, speakers are expected to manage their own details themselves, with the DevRoom organizers overseeing this, so we will no longer use the trusted wiki pages we used before. While I am not 100% certain yet, I think it is best that people who have spoken at the DevRoom in the past few years (most of whom I will be poking personally anyway) talk to me first before working with pentabarf; otherwise there will be duplicate accounts, which will mean more overhead for everyone. More on that in the actual call for speakers email, which will hit the relevant mailing lists soon.
FOSDEM futures for ARM
Connor Abbott and I have both had Chromebooks for a long, long time. Connor bought his when it first came out, which was even before the last FOSDEM. I bought mine at a time when I thought Samsung was never going to sell it in Germany, and the .uk version arrived on my doorstep 3 days before the announcement for Europe went out. These things have been burning great big holes in our souls ever since, as I had stated that we would first get the older Mali models supported properly with our Lima driver, and deliver a solid graphics driver, before we lose ourselves again in the next big thing. So while both of us had this hardware for quite a while, we really couldn't touch these nice toys with an interesting GPU at all.

Now, naturally, this sort of thing is a bit tough to impose on teenagers, as they are hormonally programmed to break rules. So when Connor got bored during the summer (as teenagers do), he of course went and broke the rules. He did the unspeakable, and grabbed ARM's standalone shader compiler and started REing the Mali Midgard ISA. When his father is at FOSDEM this year, the two of us will have a bit of 'A Talk' about Connor's wild behaviour, and Connor will be punished. Probably by forcing him to finish the beers he ordered :)
Luckily, adults are much better at obeying the rules. Much, much better.
Adults, for instance, would never go off and write a command stream tracer for this out of bounds future RE project. They would never ever dare to replay captured command streams on the chromebook. And they definitely would not spend days sifting through a binary to expose the shader compiler of the Mali Midgard. Such a thing would show weakness in character and would just undermine authority, and I would never stoop so low.
If I had done such an awful thing, then I would definitely not be talking about how much harder capture and replay were, err, would be, on this Mali, and that the lessons learned on the Mali Utgard will be really useful... in the future? I would also not be mentioning how nice it would be to work on a proper Linux from the get-go. And I would never be boasting about how much faster Connor and I will be at turning our RE work on T6xx into a useful driver.
It looks like Connor and I will have some very interesting things to own up to at FOSDEM :)
Kraft Release 0.53
Only a short time after Kraft’s release 0.51, I am announcing version 0.53 today. Kraft is the KDE software that helps you handle your daily quotes and invoices in your small business.
The new release fixes a problem with the tarball of 0.51, which contained a wrong source revision. That did not cause any harm, but it also did not bring the announced fixes. This was brought up by community friends; thanks for that.
Additionally, the last known bug in Kraft’s catalog management was fixed: it was not possible to drag sub-chapters onto the top level of the catalog. That is working now.
Please update to the new version and help us with your feedback.
A Cosmic Dance in a Little Box
It’s Hack Week again. This time around I decided to look at running TripleO on openSUSE. If you’re not familiar with TripleO, it’s short for OpenStack on OpenStack, i.e. it’s a project to deploy OpenStack clouds on bare metal, using the components of OpenStack itself to do the work. I take some delight in bootstrapping of this nature – I think there’s a nice symmetry to it. Or, possibly, I’m just perverse.
Anyway, onwards. I had a chat to Robert Collins about TripleO while at PyCon AU 2013. He introduced me to diskimage-builder and suggested that making it capable of building openSUSE images would be a good first step. It turned out that making diskimage-builder actually run on openSUSE was probably a better first step, but I managed to get most of that out of the way in a random fit of hackery a couple of months ago. Further testing this week uncovered a few more minor kinks, two of which I’ve fixed here and here. It’s always the cross-distro work that seems to bring out the edge cases.
Then I figured there’s not much point making diskimage-builder create openSUSE images without knowing I can set up some sort of environment to validate them. So I’ve spent large parts of the last couple of days working my way through the TripleO Dev/Test instructions, deploying the default Ubuntu images with my openSUSE 12.3 desktop as VM host. For those following along at home the install-dependencies script doesn’t work on openSUSE (some manual intervention required, which I’ll try to either fix, document, or both, later). Anyway, at some point last night, I had what appeared to be a working seed VM, and a broken undercloud VM which was choking during cloud-init:
Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed: Request timed out
Figuring that out, well… There I was with a seed VM deployed from an image built with some scripts from several git repositories, automatically configured to run even more pieces of OpenStack than I’ve spoken about before, which in turn had attempted to deploy a second VM, which wanted to connect back to the first over a virtual bridge and via the magic of some iptables rules and I was running tcpdump and tailing logs and all the moving parts were just suddenly this GIANT COSMIC DANCE in a tiny little box on my desk on a hill on an island at the bottom of the world.
It was at this point I realised I had probably been sitting at my computer for too long.
It turns out the problem above was due to my_ip being set to an empty string in /etc/nova/nova.conf on the seed VM. Somehow I didn’t have the fix in my local source repo. An additional problem is that libvirt on openSUSE, like Fedora, doesn’t set uri_default="qemu:///system". This causes nova baremetal calls from the seed VM to the host to fail as mentioned in bug #1226310. This bug is apparently fixed, but apparently the fix doesn’t work for me (another thing to investigate), so I went with the workaround of putting uri_default="qemu:///system" in ~/.config/libvirt/libvirt.conf.
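For reference, the workaround described above boils down to a one-line user-session libvirt client config (the path is the one from my setup; adjust if your libvirt keeps its user config elsewhere):

```shell
# Point the default libvirt connection at the system instance for this user.
# Appends rather than overwrites, in case a libvirt.conf already exists.
mkdir -p ~/.config/libvirt
echo 'uri_default = "qemu:///system"' >> ~/.config/libvirt/libvirt.conf
grep uri_default ~/.config/libvirt/libvirt.conf
```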
So now (after a rather spectacular amount of disk and CPU thrashing) there are three OpenStack clouds running on my desktop PC. No smoke has come out.
- The seed VM has successfully spun up the “baremetal_0” undercloud VM and deployed OpenStack to it.
- The undercloud VM has successfully spun up the “baremetal_1” and “baremetal_2” VMs and deployed them as the overcloud control and compute nodes.
- I have apparently booted a demo VM in the overcloud, i.e. I’ve got a VM running inside a VM, although I haven’t quite managed to ssh into the latter yet (I suspect I’m missing a route or a firewall rule somewhere).
I think I had it right last night. There is a giant cosmic dance being performed in a tiny little box on my desk on a hill on an island at the bottom of the world.
Or, I’ve been sitting at my computer for too long again.
Automatically Log In to wifi.free.fr Wifi
The not so expensive French mobile phone operator free.fr offers some nice extras for its clients. Customers can use the free.fr hotspots, which you can find in many places in French cities. Unfortunately, you have to log in manually via the gateway page https://wifi.free.fr/ after every wifi reconnect. Ahem, do you really have to? :wink:
I’m one of these lucky guys who can actually receive this hotspot in my own flat. So why should I pay extra for cable internet?
To still have some selling points for ADSL customers, free.fr doesn’t make it very convenient to use this free wifi. The network comes without any security mode like WEP/WPA/WPA2, but requires you to authenticate yourself on a gateway page every time you reconnect to the network. The only other option involves SIM card authentication, which is not an option for most computers due to the lack of a SIM card.
So why not use curl to automatically send a POST request with your login data as soon as you are connected to the so-called “FreeWifi” network?
On openSUSE 12.3 or any other Linux distribution that uses NetworkManager, you just place the following file, freewifi-up, in /etc/NetworkManager/dispatcher.d.
# file: 'freewifi-up'
#!/bin/sh
#
# auto login freewifi from free.fr
#

. /etc/rc.status

case "$2" in
    up)
        if iwgetid | grep -qs ':"FreeWifi"'; then
            curl -s --retry 10 --retry-max-time 0 -X POST \
                -d 'login=000000000&password=mypassword&submit=Valider' \
                https://wifi.free.fr/Auth > /dev/null
        fi
        ;;
    *)
        exit 0
        ;;
esac
The actual name of the file is not very important, but pay attention to set the appropriate file permissions: chmod 755 freewifi-up. Of course, you have to replace the login number and the password with your own.
The script will be run by NetworkManager automatically whenever the connection status changes. Only when a connection has been brought up is the wifi SSID checked against FreeWifi, and only in that case is an authentication request sent to the gateway page.
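The SSID guard in the script can be exercised on its own. The sample line below mimics typical iwgetid output (the exact format depends on your wireless driver, so on a live system you would pipe `iwgetid` itself into the grep):

```shell
# Sample iwgetid-style output; substitute the real `iwgetid` on a live system.
sample='wlan0     ESSID:"FreeWifi"'
if echo "$sample" | grep -qs ':"FreeWifi"'; then
    echo "FreeWifi detected, would POST credentials to https://wifi.free.fr/Auth"
else
    echo "other network, doing nothing"
fi
```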
Hackweek: Hot Chili Sauce for Hot Lizards
This hot sauce can be used to spice up your food, give you creative
energy, and is a nice gift for your friends and family members — not
only for Hackweek.
Name: hot-chili-sauce
Summary: Tom’s Hot Chili Sauce
License: BSD
Version: 1.0
Group: Cooking/Sauce/Chili
# Ingredients
#
Requires: chili >= 5pcs
Requires: sugar = 3pcs
Requires: ginger = 30gr
Requires: vinegar = 3tbsp
Requires: water = 250ml
# Equipment
#
Requires: libfunnel
Requires: libglassjars-multiple >= 50ml
Requires: libcookingpots
Requires: libtap
#
Recommends: salt
Recommends: wheat-flour
%description
This hot sauce can be used to spice up your food, give you creative
energy, and is a nice gift for your friends and family members -- not
only for Hackweek.
%prep
%setup
head /dev/tap/water > /dev/pot1/WATER
mv /dev/jars /dev/pot1
heat --target-temp 100C --gentle /dev/pot1
%build
%define very_hot 1
%define thicken 0
split -b 5 CHILI1 chili/CHILI1-CHOPPED
split -b 5 CHILI2 chili/CHILI2-CHOPPED
split -b 5 CHILI3 chili/CHILI3-CHOPPED
%ifdef %{very_hot}
split -b 5 CHILI4 chili/CHILI4-CHOPPED
split -b 5 CHILI5 chili/CHILI5-CHOPPED
%endif
peel GINGER > PEELED_GINGER && split -b 5 PEELED_GINGER ginger/GINGER-CHOPPED
peel GARLIC > PEELED_GARLIC && split -b 5 PEELED_GARLIC garlic/GARLIC-CHOPPED
mv SUGAR /dev/pot2
heat --target caramel /dev/pot2
mv chili/* garlic/* /dev/pot2
heat --time 5min --target-temp 80C /dev/pot2
mv ginger/* SALT WATER VINEGAR /dev/pot2
stir --dont-shake /dev/pot2
mv TOMATO_PASTE /dev/pot2
heat --time 15min --target-temp 100C /dev/pot2
%ifdef %{thicken}
mv WHEAT-FLOUR /dev/pot2
%endif
%install
# FIXME: add adequate safety measures. This is hot.
mv /dev/pot1/jars /dev
mv /dev/pot2/* /dev/jars
seal /dev/jars && turn /dev/jars
sleep 60m
%files /dev/jars
openSUSE 13.1: install 'bumblebee' and disable discrete graphics adapter on NVIDIA Optimus laptop
My scenario: I am running openSUSE 13.1 on an Optimus laptop and would like to be able to turn the discrete graphics adapter on and off. Either (a) I have a clean install of openSUSE 13.1 or (b) I have already done things like install bumblebee, NVIDIA drivers, etc. in the past.
Optimization Tips & Tricks used by MimeKit: Part 2
In my previous blog post, I talked about optimizing the most critical loop in MimeKit's MimeParser by:
- Extending our read buffer by an extra byte (which later became 4 extra bytes) that I could set to '\n', allowing me to do the bounds check after the loop as opposed to in the loop, saving us roughly half the instructions.
- Unrolling the loop in order to check for 4 bytes at a time for that '\n' by using some bit twiddling hacks (for 64-bit systems, we might gain a little more performance by checking 8 bytes at a time).
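The 4-bytes-at-a-time check in the second point is the classic has-zero-byte hack: XOR the word with '\n' repeated in every byte, then test whether any byte of the result is zero. As an illustration only (MimeKit's actual code does this in C# over its read buffer), the same arithmetic can be run in shell on one hand-built 32-bit word:

```shell
# Word holding the bytes 'h' 'i' '\n' 'm' (0x68 0x69 0x0A 0x6D).
word=$(( 0x68690A6D ))
# XOR with '\n' (0x0A) in every byte: any byte that was '\n' becomes 0x00.
x=$(( word ^ 0x0A0A0A0A ))
# Has-zero-byte test: nonzero if and only if some byte of x is zero.
found=$(( (x - 0x01010101) & ~x & 0x80808080 ))
if [ "$found" -ne 0 ]; then
    echo "word contains a newline byte"
fi
```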
After implementing both of those optimizations, the time taken for MimeKit's parser to parse nearly 15,000 messages in a ~1.2 gigabyte mbox file dropped from around 10s to about 6s on my iMac with Mono 3.2.3 (32-bit). That is a massive increase in performance.
Even after both of those optimizations, that loop is still the most critical loop in the parser and the MimeParser.ScanContent() method, which contains it, is still the most critical method of the parser.
While the loop itself was a huge chunk of the time spent in that method, the next largest offender was writing the content of the MIME part into a System.IO.MemoryStream.
MemoryStream, for those who aren't familiar with C#, is just what it sounds like: a stream backed by a memory buffer (in C#, this happens to be a byte array). By default, a new MemoryStream starts with a buffer of about 256 bytes. As you write more to the MemoryStream, it resizes its internal memory buffer to either the minimum size needed to hold its existing content plus whatever number of bytes your latest Write() was called with, or double the current internal buffer size, whichever is larger.
The performance problem here is that for MIME parts with large amounts of content, that buffer will be resized numerous times. Each time that buffer is resized, due to the way C# works, it will allocate a new buffer, zero the memory, and then copy the old content over to the new buffer. That's a lot of copying, and it makes writes increasingly expensive as the internal buffer gets larger. Since MemoryStream exposes a GetBuffer() method, its internal buffer really has to be a single contiguous block of memory. This means that there's little we could do to reduce the overhead of zeroing the new buffer every time it resizes, beyond trying to come up with a different formula for calculating the next optimal buffer size.
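To get a feel for the copying cost, here is a back-of-the-envelope model of the doubling strategy, assuming the default 256-byte initial buffer and many small writes adding up to 1 MiB; every resize re-copies the old contents into the new buffer:

```shell
# Model: each resize doubles the buffer and copies the old contents over.
size=256
copied=0
target=$(( 1024 * 1024 ))
while [ "$size" -lt "$target" ]; do
    copied=$(( copied + size ))   # the old buffer is copied into the new one
    size=$(( size * 2 ))
done
echo "writing $target bytes triggers resizes that copy $copied bytes in total"
```

That is almost as many bytes copied as were written in the first place, before even counting the zeroing of each freshly allocated buffer.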
At first I decided to try the simple approach of using the MemoryStream constructor that allows specifying an initial capacity. By bumping up the initial capacity to 2048 bytes, things did improve, but only by a very disappointing amount. Larger initial capacities such as 4096 and 8192 bytes also made very little difference.
After brainstorming with my coworker and Mono runtime hacker, Rodrigo Kumpera, we decided that one way to solve this performance problem would be to write a custom memory-backed stream that didn't use a single contiguous block of memory, but instead used a list of non-contiguous memory blocks. When this stream needed to grow its internal memory storage, all it would need to do is allocate a new block of memory and append it to its internal list of blocks. This would allow for minimal overhead because only the new block would need to be zeroed and no data would need to be re-copied, ever. As it turns out, this approach would also allow me to limit the amount of unused memory used by the stream.
I dubbed this new memory-backed stream MimeKit.IO.MemoryBlockStream. As you can see, the implementation is pretty trivial (doesn't even require scary looking bit twiddling hacks like my previous optimization), but it made quite a difference in performance. By using this new memory stream, I was able to shave a full second off of the time needed to parse that mbox file I mentioned earlier, getting the total time spent down to 5s. That's starting to get pretty respectable, performance-wise.
As a comparison, let's compare the performance of MimeKit with what seem to be the 2 most popular C# MIME parsers out there (OpenPOP.NET and SharpMimeTools) and see how we do. I've been hyping up the performance of MimeKit a lot, so it had better live up to expectations, right? Let's see if it does.
Now, since none of the other C# MIME parsers I could find support parsing the Unix mbox file format, we'll write some test programs that parse the same message stream over and over (say, 20 thousand times) to compare MimeKit to the others.
Here's the test program I wrote for OpenPOP.NET:
using System;
using System.IO;
using System.Diagnostics;
using OpenPop.Mime;
namespace OpenPopParser {
	class Program
	{
		public static void Main (string[] args)
		{
			var stream = File.OpenRead (args[0]);
			var stopwatch = new Stopwatch ();

			stopwatch.Start ();
			for (int i = 0; i < 20000; i++) {
				var message = Message.Load (stream);
				stream.Position = 0;
			}
			stopwatch.Stop ();

			Console.WriteLine ("Parsed 20,000 messages in {0}", stopwatch.Elapsed);
		}
	}
}
Here's the SharpMimeTools parser I wrote for testing:
using System;
using System.IO;
using System.Diagnostics;
using anmar.SharpMimeTools;
namespace SharpMimeParser {
	class Program
	{
		public static void Main (string[] args)
		{
			var stream = File.OpenRead (args[0]);
			var stopwatch = new Stopwatch ();

			stopwatch.Start ();
			for (int i = 0; i < 20000; i++) {
				var message = new SharpMessage (stream);
				stream.Position = 0;
			}
			stopwatch.Stop ();

			Console.WriteLine ("Parsed 20,000 messages in {0}", stopwatch.Elapsed);
		}
	}
}
And here is the test program I used for MimeKit:
using System;
using System.IO;
using System.Diagnostics;
using MimeKit;
namespace MimeKitParser {
	class Program
	{
		public static void Main (string[] args)
		{
			var stream = File.OpenRead (args[0]);
			var stopwatch = new Stopwatch ();

			stopwatch.Start ();
			for (int i = 0; i < 20000; i++) {
				var parser = new MimeParser (stream, MimeFormat.Default);
				var message = parser.ParseMessage ();
				stream.Position = 0;
			}
			stopwatch.Stop ();

			Console.WriteLine ("Parsed 20,000 messages in {0}", stopwatch.Elapsed);
		}
	}
}
Note: Unfortunately, OpenPOP.NET's message parser completely failed to parse the Star Trek message I pulled out of my test suite at random (first message in the jwz.mbox.txt file included in MimeKit's UnitTests project) due to the Base64 decoder not liking some byte or another in the stream, so I had to patch OpenPOP.NET to no-op its base64 decoder (which, if anything, should make it faster).
And here are the results running on my 2011 MacBook Air:
[fejj@localhost OpenPopParser]$ mono ./OpenPopParser.exe ~/Projects/MimeKit/startrek.msg
Parsed 20,000 messages in 00:06:26.6825190
[fejj@localhost SharpMimeParser]$ mono ./SharpMimeParser.exe ~/Projects/MimeKit/startrek.msg
Parsed 20,000 messages in 00:19:30.0402064
[fejj@localhost MimeKit]$ mono ./MimeKitParser.exe ~/Projects/MimeKit/startrek.msg
Parsed 20,000 messages in 00:00:15.6159326
Whooooosh!
Not. Even. Close.
MimeKit is nearly 25x faster than OpenPOP.NET even after making its base64 decoder a no-op and 75x faster than SharpMimeTools.
Since I've been ranting against C# MIME parsers that make heavy use of regex, let me show you just how horrible regex is for parsing messages (performance-wise). There's a C# MIME parser called MIMER that is nearly pure regex, so what better library to illustrate my point? I wrote a very similar loop to the ones I listed above, so I'm not going to bother repeating it here. Instead, I'll just skip to the results:
[fejj@localhost MimerParser]$ mono ./MimerParser.exe ~/Projects/MimeKit/startrek.msg
Parsed 20,000 messages in 00:16:51.4839129
Ouch. MimeKit is roughly 65x faster than a fully regex-based MIME parser. It's actually rather pathetic that this regex parser beats SharpMimeTools.
This is why, as a developer, it's important to understand the limitations of the tools you decide to use. Regex is great for some things but it is a terrible choice for others. As Jamie Zawinski might say,
Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.