
Andrew Wafaa

EuroBSDCon DevSummit Day 2

Day 2 of EuroBSDCon (overview of Day 1) kicked off for me with a morning dedicated to virtualization. On Linux we're all used to the usual suspects - KVM/Xen/LXC/VMWare - so I was interested to hear what's available on FreeBSD, especially bhyve, which is probably as close to KVM as one will get. Xen support is available in FreeBSD, but only as DomU (guest VM). Dom0 (host server) support is actively being worked on to rectify this shortfall (this does not apply to NetBSD).

Andrew Wafaa

EuroBSDCon DevSummit Day 1

As part of my role at work I get to interact with various Open Source projects, and not all of them are Linux related. This week I'm on the beautiful, historic, sunny and warm island of Malta attending EuroBSDCon. As you can work out from the name, it is the European gathering of BSD developers and users, covering all the BSD variants like FreeBSD/NetBSD/OpenBSD/PCBSD/Dragonfly. I'm giving a talk on ARMv8 and AArch64 on Sunday, so hopefully that will go well. I'm somewhat nervous, as I finished my slides before I left - which is somewhat unusual.
Klaas Freitag

DAV Torture

Currently we talk a lot about the performance of the ownCloud WebDAV server. Speaking with a programmer about performance is like speaking with a doctor about pain: it needs to be qualified - the pain, and also the performance concerns.

As a step in that direction, here is a little script collection for you to play with if you like: the DAV torture collection. We started it quite some time ago but never really introduced it. It is still very rough.

What it does

The first idea is that we need a reproducible set of files to test the server with. We don't want to send around huge tarballs of files, so Danimo invented two Perl scripts called torture_gen_layout.pl and torture_create_files.pl. With torture_gen_layout.pl one can create a file that contains the layout of the test file tree, a so-called layout (or .lay) file. The .lay-file describes the test file tree completely, with names, structure and sizes.

torture_create_files.pl takes the .lay-file and actually creates the file tree on a machine. The cool thing about it is that we can agree on a .lay-file as our standard test tree and just pass around a file of a couple of kilobytes that describes the tree.
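The idea behind the .lay-file can be sketched in a few lines of Python (an illustrative sketch only, not the actual Perl scripts; the layout structure shown here is invented for the example):

```python
# Sketch of the .lay-file idea: a tiny layout description (paths and sizes)
# is enough to recreate the same test tree on any machine.
import os
import tempfile

# Invented layout: relative path -> file size in bytes (directories implicit).
layout = {
    "docs/readme.txt": 64,
    "docs/sub/notes.txt": 128,
    "img/logo.dat": 256,
}

def create_tree(layout, root):
    """Recreate the file tree described by `layout` under `root`."""
    for relpath, size in layout.items():
        path = os.path.join(root, relpath)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as fh:
            fh.write(b"x" * size)  # deterministic filler content

root = tempfile.mkdtemp()
create_tree(layout, root)
print(sorted(os.listdir(root)))  # ['docs', 'img']
```

Passing around the small layout description instead of the tree itself is what keeps the .lay-file at a couple of kilobytes.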

Now that there is a standard file tree to test with, I wrote a little script called dav_torture.pl. It uploads the whole tree described by a .lay-file (and created on the local file system) to an ownCloud WebDAV server using PUT requests. Along the way, it produces performance-relevant output.

Try it

Download the tarball and unpack it, or clone it from github.

After installing a couple of Perl dependencies (probably only the modules Data::Random::WordList, HTTP::DAV and HTTP::Request::Common are not in Perl's core) you should be able to run the scripts from within the directory.

First, you need to create a config file. For that, copy t1.cfg.in to t1.cfg (don't ask about the name) and edit it. For this example, we only need user, passwd and url to access ownCloud. Be careful with the syntax - it gets sourced into a Perl script.

Now, create the local reference tree from a .lay-file which I put into the tarball: ./torture_create_files.pl small.lay tree. This command will build the file tree described by small.lay in a directory called tree.

Now you can already torture your server: call ./dav_torture.pl small.lay tree. This will perform PUT requests against the WebDAV server and output some useful information. It also appends to two files, results.dat and puts.tsv. results.dat just logs the results of subsequent calls. The tsv file is the data file for the html file index.html in the same directory. Opened in a browser, it shows a curve of the average transmission rate over all subsequent runs of dav_torture.pl (you have to run dav_torture.pl a couple of times to make that visible). The dav_torture.pl script can now be hooked into our Jenkins CI and run after every server check-in. The resulting curve must never get worse :-)

To create your own .lay-file, open torture_gen_layout.pl and play with the variables at the top of the script. Simply call the script and redirect its output into a file to create a .lay-file.

All this is pretty experimental, but I thought it would help us get to a more objective discussion about performance. I wanted to open this up at a pretty early stage because I am hoping it might be interesting for some of you: torture your own server, create interesting .lay files, or improve the script set (testing plain PUTs is rather boring) or the result html presentation.

What do you think?


SpriteKit on Xamarin.iOS: fun without compromise

TL;DR: get the source of the game on github.

With the release of iOS7, and same day support in Xamarin.iOS, I'm spending all my free time playing with the new APIs. I particularly love iBeacon and Background Fetch. Next on my list was SpriteKit.

SpriteKit is a 2D game engine, not unlike cocos2d. We'll see next that going from one to the other is very simple.

I'm not a game developer, but I have some experience with cocos2d, as I co-wrote (with Miguel) the cocos2d bindings for Xamarin.iOS. At that time, to test the binding, I ported a sample platform jumper game from obj-c cocos2d to C#. I have now ported the same game again, this time to SpriteKit. Here are my findings about the experience:

The Pros


  • you can do an almost line-by-line port between cocos2d and SpriteKit, once you figure out the basics
  • less boilerplate. Or even no boilerplate at all with SpriteKit. An SKView is a UIView, and is ready to serve as soon as you PresentScene() it. Compare that with the directors you have to put in place before writing a line of game logic in cocos2d. It's a win.
  • less leaky abstractions. I haven't compared performance, but SpriteKit doesn't expose things like BatchNodes, so that's one thing you don't have to bother about.
  • SKAction is a nice addition for objects you don't want to animate yourself in your Update() loop (actions are executed just after Update(), and before the physics simulation)

The Cons


  • I found myself looking for BitmapFonts. They seem to be a standard in game development, but are missing from SpriteKit (or I haven't found them). I'm quite confident that's something we'll see in an upcoming version.

The Rest


  • Like the original game, this port doesn't leverage any physics engine, so I have nothing to say about the SKPhysics* classes. From the API, they look very similar to what Chipmunk offers.

So it was a very pleasant port, in the end taking even fewer lines of code than the original port, and only two hours to write. If you want to start developing games, going for SpriteKit on Xamarin.iOS is really a no-brainer. No boilerplate, no gotchas, just fun.

The Code

The code is on github; it's MIT/X11, so do whatever you want with it. The graphics are borrowed from the original game; read the LICENSE about their usage.
Klaas Freitag

ownCloud Client Release 1.4.1

I am happy to announce that today we were able to release version 1.4.1 of the ownCloud Desktop Client on all three platforms: Linux, MacOS and Windows.

You'll find suitable download links as usual at: http://owncloud.org/sync-clients/

Version 1.4.1 is a bugfix release for the 1.4.0 version released a few weeks ago, which brought a lot of new features. This one solves a couple of problems that came up during the last few weeks. For example, the problem that the client lost its configuration (at least) on the Win32 platform when the machine was shut down is fixed. Also, a lot of redundant uploads won't happen any more. And there are even more fixes, as the detailed changelog spells out.

We thank you for your ongoing support and the good work on the bugs that came up. As usual, we are looking forward to your feedback. Please work with us in the Github bugtracker if you experience issues.


Olé to you nonetheless

A very inspiring TED talk:
http://www.ted.com/playlists/11/the_creative_spark.html

"If your job is to dance, do your dance. If the divine, cockeyed genius assigned to your case decides to let some sort of wonderment be glimpsed, for just one moment, through your efforts, then "Olé!". And if not, do your dance anyhow. And "Olé!" to you nonetheless."

The talk is about how artists can break under the anxiety of thinking they may not be able to produce a great creation, but I think that can actually be applied to any aspect of your life.

In ancient times, people used to believe there was a God ruling our lives, and that what happened was part of his plans. This was a way of managing the uncertainty in their lives, since there was a plan and rules designed by some higher entity, even if people were not able to understand them.

Then people started to think that we are the center of the universe... and then we killed God when we tried to demonstrate every single thing through science and rules that are understandable by humans (well, not by all of them, but by a subset of humans). And we started believing we are in control, and so responsible for our own future (don't get me wrong, I think science has brought very, very good things to humanity).

However, believing we are in control brings great responsibility into our lives, new rules (to prevent the damage we could avoid, because WE are in control), and in the end anxiety and stress. So... should we go back to believing in supernatural things?

Well, why not go back to believing that we are not in control: that we do our best and then, because of "Gods", coincidences or other supernatural "things" that we don't know, don't understand and can't control, things will go one way or another, leaving us to live with some uncertainty? Is this the next step?
Sankar P

Introducing Find Many Strings v2 - A chrome extension


Some of you might remember my chrome extension to search for and highlight multiple strings simultaneously. I recently updated it, adding the ability to input multiple strings in one click. Here is an introductory video of the extension in action, best watched in full screen.



The video, along with full subtitle support (to help a11y), was made by Tharkuri. A big thanks for this strenuous job. If you are looking for someone to do [online] marketing / professional writing work in India, I highly recommend Tharkuri.

You can get the extension from the chrome store and the sources from the github repository. Please report any bugs / feature requests on the github page, and any other feedback on the extension page or by mail.


Xamarin.iOS 7 : iBeacons - Part 1: Advertising

One of my favorite new features of iOS 7 is the addition of iBeacons. You'll find plenty of articles explaining how it'll change the way you do shopping (if you still shop offline), how it allows indoor localization, and even more. Glue a beacon on your kids, and you'll know when they go too far away (a digital leash). Attach one to your wallet, and your phone will tell you you're leaving the house with no cash. Applications are endless.

iBeacons is a protocol on top of Bluetooth LE (Low Energy, also advertised as Bluetooth Smart, and part of Bluetooth 4.0). It doesn't require any new hardware and is nothing really new by itself. What's new and exciting is its position in iOS: a first-class citizen, integrated with CoreLocation, working in the background, ...

The recipe: advertising your position

As of now, the only objects you can use as iBeacons are iOS devices. The protocol should be made available soon, allowing 3rd-party components to appear, and at that point they should sell for a few dollars apiece (based on a general-purpose BT chip like the BLE112, a beacon should cost around $40 to build; prices will surely drop with cheaper, specialized chips).

Turning your iOS device into a beacon is really only a few steps:

1. Initialize a CLBeaconRegion

That'll be your beacon. uuid and identifier are mandatory; major and minor are optional. There's a trick here: uuid doesn't have to be unique. All beacons of a common group will probably have the same uuid and different major/minor values.

var beaconId = new NSUuid ("5a2bf809-992f-42c2-8590-6793ecbe2437");  //uuidgen on MacOS, nguid in VS
var beaconRegion = new CLBeaconRegion (beaconId, "yourOrg.yourBeacon");

2. Initialize a CBPeripheralManager

var peripheralManager = new CBPeripheralManager (new PeripheralManagerDelegate (beaconRegion), DispatchQueue.DefaultGlobalQueue, new NSDictionary ());

class PeripheralManagerDelegate : CBPeripheralManagerDelegate
{
    CLBeaconRegion beaconRegion;

    public PeripheralManagerDelegate (CLBeaconRegion beaconRegion)
    {
        this.beaconRegion = beaconRegion;
    }

    public override void StateUpdated (CBPeripheralManager peripheralManager)
    {
        // State will be Unsupported on devices like the iPhone 4 and prior, PoweredOff if BT is down
        if (peripheralManager.State != CBPeripheralManagerState.PoweredOn)
            return;
        var options = beaconRegion.GetPeripheralData (null);
        peripheralManager.StartAdvertising (options);
    }
}

And that's it.

Notes and further reading


  • As a device's BT id changes from time to time, your device will sometimes be advertised twice. That won't happen with real iBeacons.
  • If you want to advertise in the background, enable "Acts as Bluetooth LE accessory" in your Info.plist
  • Read Region Monitoring if you need more details.

Klaas Freitag

Kraft Release 0.51

I am happy to release Kraft 0.51 today. Kraft is the KDE solution to handle daily business documents like quotes and invoices in your small business.

This is a bugfix release which brings a handful of useful fixes for bugs reported by Kraft users since the last release.

In the catalog view, drag and drop now works for sorting templates. Removing sub-chapters also works now. A bug in the unit handling was fixed that picked wrong units in some cases. And the path to document templates is now utf8-safe.

As a new feature, your own company's address can now be picked from Kraft's settings dialog even after the first setup routine.

A source tarball can be downloaded from the Sourceforge project; binary packages are on the way. Please also report bugs on SF.

Thanks for your interest and contribution to Kraft. If you want to support Kraft, please give feedback, spread the word or buy cool stuff.

Jeffrey Stedfast

Time for a rant on mime parsers...

Warning: Viewer discretion is advised.

Where should I begin?

I guess I should start by saying that I am obsessed with MIME and, in particular, MIME parsers. No, really. I am obsessed. Don't believe me? I've written and/or worked on several MIME parsers at this point. It started off in my college days working on Spruce, which had a horrendously bad MIME parser, so as you read further along in my rant about shitty MIME parsers, keep in mind: I've been there, I've written a shitty MIME parser.

As a handful of people are aware, I've recently started implementing a C# MIME parser called MimeKit. As I work on this, I've been searching around on GitHub and Google to see what other MIME parsers exist out there, to find out what sort of APIs they provide. I thought perhaps I'd find one that offers a well-designed API that would inspire me. Perhaps, by some miracle, I'd find one that was actually pretty good, that I could just contribute to instead of writing my own from scratch (yea, wishful thinking). Instead, all I have found are poorly designed and implemented MIME parsers, many of which probably belong on the front page of The Daily WTF.

I guess I'll start with some softballs.

First, there's the fact that every single one of them was written as a System.String parser. Don't be fooled by the ones claiming to be "stream parsers", because all any of those did was slap a TextReader on top of the byte stream and start using reader.ReadLine(). What's so bad about that, you ask? For those not familiar with MIME, I'd like you to take a look at the raw email sources in your inboxes, particularly if you have correspondence with anyone outside of the US. Hopefully most of your friends and colleagues are using more-or-less MIME-compliant email clients, but I guarantee you'll find at least a few emails with raw 8bit text.

Now, if the language they were using was C or C++, they might be able to get away with doing this because they'd technically be operating on byte arrays, but in Java and C#, a 'string' is a unicode string. Tell me: how does one get a unicode string from a raw byte array?

Bingo. You need to know the charset before you can convert those bytes into unicode characters.
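To make the point concrete, here is a quick illustration in Python (rather than C#) of why the bytes alone are not enough:

```python
# The same raw bytes decode to different strings under different charsets,
# so a parser must know the charset before building a unicode string.
raw = b"caf\xe9"  # 'café' encoded as latin-1

latin1 = raw.decode("latin-1")                       # correct: 'café'
utf8_broken = raw.decode("utf-8", errors="replace")  # wrong charset: 'caf\ufffd'

print(latin1)
print(utf8_broken)
```

A TextReader bolted onto the byte stream makes this charset decision implicitly, before the parser has even seen the headers that declare it.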

To be fair, there's really no good way of handling raw 8bit text in message headers, but by using a TextReader approach, you are really limiting the possibilities.

Next up is the ReadLine() approach. One of the 2 early parsers in GMime (pan-mime-parser.c, back in the version 0.7 days) used a ReadLine() approach, so I understand the thinking behind this. And really, there's nothing wrong with this approach as far as correctness goes; it's more of a "this can never be fast" complaint. Of the two early parsers in GMime, the pan-mime-parser.c backend was horribly slow compared to the in-memory parser. Of course, that's not very surprising. More surprising to me at the time was that when I wrote GMime's current generation of parser (sometime between v0.7 and v1.0), it was just as fast as the in-memory parser ever was, and only ever had up to 4k in a read buffer at any given time. My point is, there are far better approaches than ReadLine() if you want your parser to be reasonably performant... and why wouldn't you want that? Your users definitely want that.

Okay, now come the more serious problems that I encountered in nearly all of the mime parser libraries I found.

I think that every single MIME parser I've found so far uses the String.Split() approach for parsing address headers and/or for parsing parameter lists on headers such as Content-Type and Content-Disposition.

Here's an example from one C# MIME parser:

string[] emails = addressHeader.Split(',');
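To see why that can't work, here's a short Python illustration (the header string is made up; the stdlib address parser serves as the reference point):

```python
# Commas may appear inside quoted display names, so splitting an address
# header on ',' tears addresses apart. A real tokenizer respects the quoting.
from email.utils import getaddresses

header = '"Doe, John" <john@example.com>, jane@example.com'

naive = header.split(",")        # 3 pieces: the quoted name is torn in two
parsed = getaddresses([header])  # [('Doe, John', 'john@example.com'), ('', 'jane@example.com')]

print(naive)
print(parsed)
```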

Here's how this same parser decodes encoded-word tokens:

private static void DecodeHeaders(NameValueCollection headers)
{
    ArrayList tmpKeys = new ArrayList(headers.Keys);

    foreach (string key in headers.AllKeys)
    {
        //strip qp encoding information from the header if present
        headers[key] = Regex.Replace(headers[key].ToString(), @"=\?.*?\?Q\?(.*?)\?=",
            new MatchEvaluator(MyMatchEvaluator), RegexOptions.IgnoreCase | RegexOptions.Multiline);
        headers[key] = Regex.Replace(headers[key].ToString(), @"=\?.*?\?B\?(.*?)\?=",
            new MatchEvaluator(MyMatchEvaluatorBase64), RegexOptions.IgnoreCase | RegexOptions.Multiline);
    }
}

private static string MyMatchEvaluator(Match m)
{
    return DecodeQP(m.Groups[1].Value);
}

private static string MyMatchEvaluatorBase64(Match m)
{
    System.Text.Encoding enc = System.Text.Encoding.UTF7;
    return enc.GetString(Convert.FromBase64String(m.Groups[1].Value));
}

Excuse my language, but what the fuck? It completely throws away the charset in each of those encoded-word tokens. In the case of quoted-printable tokens, it assumes they are all ASCII (actually, latin1 may work as well?) and in the case of base64 encoded-word tokens, it assumes they are all in UTF-7!?!? Where in the world did he get that idea? I can't begin to imagine his code working on any base64 encoded-word tokens in the real world. If anything is deserving of a double facepalm, this is it.
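For contrast, here is a Python sketch of what correct encoded-word decoding looks like: the charset named inside each token drives the byte-to-text conversion (illustrative only, using the stdlib's RFC 2047 support, not any of the parsers discussed here):

```python
# An encoded-word token carries its own charset; decoding must honor it
# instead of assuming ASCII or UTF-7.
from email.header import decode_header

token = "=?iso-8859-1?q?caf=E9?="
parts = decode_header(token)  # [(b'caf\xe9', 'iso-8859-1')]

decoded = "".join(
    chunk.decode(charset or "ascii") if isinstance(chunk, bytes) else chunk
    for chunk, charset in parts
)
print(decoded)  # café
```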

I'd just like to point out that this is what this project's description states:

A small, efficient, and working mime parser library written in c#.
...
I've used several open source mime parsers before, but they all either
fail on one kind of encoding or the other, or miss some crucial
information. That's why I decided to finally have a go at the problem
myself.

I'll grant you that his MIME parser is small, but I'd have to take issue with the "efficient" and "working" adjectives. With the heavy use of string allocations and regex matching, it could hardly be considered "efficient". And as the code pointed out above illustrates, "working" is a bit of an overstatement.

Folks... this is what you get when you opt for a "lightweight" MIME parser because you think that parsers like GMime are "bloated".

On to parser #2... I like to call this the "Humpty Dumpty" approach:

public static StringDictionary parseHeaderFieldBody ( String field, String fieldbody ) {
    if ( fieldbody==null )
        return null;
    // FIXME: rewrite parseHeaderFieldBody to being regexp based.
    fieldbody = SharpMimeTools.uncommentString (fieldbody);
    StringDictionary fieldbodycol = new StringDictionary ();
    String[] words = fieldbody.Split(new Char[]{';'});
    if ( words.Length>0 ) {
        fieldbodycol.Add (field.ToLower(), words[0].ToLower().Trim());
        for (int i=1; i<words.Length; i++ ) {
            String[] param = words[i].Trim(new Char[]{' ', '\t'}).Split(new Char[]{'='}, 2);
            if ( param.Length==2 ) {
                param[0] = param[0].Trim(new Char[]{' ', '\t'});
                param[1] = param[1].Trim(new Char[]{' ', '\t'});
                if ( param[1].StartsWith("\"") && !param[1].EndsWith("\"")) {
                    do {
                        param[1] += ";" + words[++i];
                    } while ( !words[i].EndsWith("\"") && i<words.Length);
                }
                fieldbodycol.Add ( param[0], SharpMimeTools.parserfc2047Header (param[1].TrimEnd(';').Trim('\"', ' ')) );
            }
        }
    }
    return fieldbodycol;
}

I'll give this guy some credit, at least he saw that his String.Split() approach was flawed and so tried to compensate by piecing Humpty Dumpty back together again. Of course, with his String.Trim()ing, he just won't be able to put him back together again with any level of certainty. The white space in those quoted tokens may have significant meaning.
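For comparison, a short Python sketch of quoting-aware parameter parsing (the header value is invented; the stdlib parser is the reference point):

```python
# A ';' inside a quoted parameter value is not a separator, and the
# whitespace inside the quotes is significant -- both must survive parsing.
from email.message import Message

msg = Message()
msg["Content-Type"] = 'text/plain; name="report; final  draft.txt"'

print(msg.get_param("name"))  # report; final  draft.txt
```

A split-on-';' parser followed by Trim() would both break the value at the quoted semicolon and collapse the double space.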

Many of the C# MIME parsers out there like to use Regex all over the place. Here's a snippet from one parser that is entirely written in Regex (yea, have fun maintaining that...):

if (m_EncodedWordPattern.RegularExpression.IsMatch(field.Body))
{
    string charset = m_CharsetPattern.RegularExpression.Match(field.Body).Value;
    string text = m_EncodedTextPattern.RegularExpression.Match(field.Body).Value;
    string encoding = m_EncodingPattern.RegularExpression.Match(field.Body).Value;

    Encoding enc = Encoding.GetEncoding(charset);

    byte[] bar;

    if (encoding.ToLower().Equals("q"))
    {
        bar = m_QPDecoder.Decode(ref text);
    }
    else
    {
        bar = m_B64decoder.Decode(ref text);
    }                    
    text = enc.GetString(bar);

    field.Body = Regex.Replace(field.Body,
        m_EncodedWordPattern.TextPattern, text);
    field.Body = field.Body.Replace('_', ' ');
}

Let's pretend that the regex pattern strings are correct in their definitions (because they are god-awful to read and I can't be bothered to double-check them). The replacing of '_' with a space is wrong (it should only be done in the "q" case), and the Regex.Replace() is just evil. Not to mention that there could be multiple encoded-words per field.Body, which this code utterly fails to handle.

Guys, I know you love regular expressions and that they are very, very useful, but they are no substitute for writing a real tokenizer. This is especially true if you want to be lenient in what you accept (and in the case of MIME, you really need to be).