

Digital game distribution

UnReal World RPG has come a long way in how it has been distributed digitally since it started in 1992. At first it was available on multiple BBSes as a shareware application, and if you ordered the full version it was delivered by mail on four disks, which you copied to your hard disk. It was a pure DOS application at that time. The game's author, Sami Maaranen, has always been modern about this kind of thing: by the late '90s you could send an order by email or regular mail. After the millennium the real digital revolution started.

Early Internet days

When that big company released Windows 98, and after it Windows XP, URW started to look awkward, as it was still a DOS application (it included graphics, but still). The distribution model changed from BBSes to a web page on the Internet where you could download the installation package (still shareware) and pay for the full version by mail.

At this point distribution problems started to surface. People wanted to buy the game with a credit card, which wasn't a popular payment method in Finland, and that caused a bit of hassle. The answer was to start using the ‘Albert's Ambry’ online distribution system. It worked as expected and had all the needed facilities; the only problem was that Albert's Ambry went out of business rather soon after URW joined it.

Time goes by and the Internet grows

After Albert's Ambry went out of business, years passed and UnReal World RPG was distributed through its own web page. Everything worked just fine until the download rate grew so huge that the Internet service provider started to complain about the monthly traffic URW caused. It was time for mirrors and bigger pipes. PayPal arrived and made credit card payments simpler.

The distribution model changed in 2013, and UnReal World turned away from the old-school shareware system. Shareware is a very good model, but it's rather heavy to use: you provide two versions of your application (one for testing and another with all the bells and whistles, or one that can be unlocked into the full version with a password). Shareware was replaced with a donation-based system (and if you want to donate to URW, please do so; all money goes to development of the game [except for the food that two cats can eat]). The donation model is easier for games like UnReal World that are updated regularly: you get the full game right away and donate if you like it.

Modern web days

In the same year, 2013, that URW changed from shareware to donations, it was also ported to Mac OS X and Linux (openSUSE, Fedora and Ubuntu/Debian are currently supported in every build). The port was done by me, and it was a rumble to get a DOS game onto another platform, but it had already been ported to SDL1 a few years earlier, so it wasn't that bad.

I assume all you gamers know there is one mammoth in modern PC game distribution: Steam. There were also smaller players, like Desura. Their idea is the same: you have a standardized distribution system (on which all games should work) and you can buy new games for your own catalogue with a credit card. URW entered Desura last year, and now Desura is out of business. Desura was a great channel for delivering applications (it supported Linux, Mac OS X and Windows), but I understand it's very difficult to make money when you are wrestling against Steam.

In the near future UnReal World RPG will be on Steam, as it got through the Greenlight program. Hopefully it doesn't give the kiss of death to that service too.

Lessons learned

Take the time to make your installers as good as you can. There is no excuse for an awful installer or tar/RPM/DEB/DMG package. If people can't play your game straight away, they will complain to you or they will just move on without a second thought. That is the power of Steam.

Don't trust just one distribution model. If your distributor goes out of business, have a plan B at hand (as close by as possible). If you are making games for Apple iOS you are out of luck, but if you can use multiple ways to distribute your software, do it. You need to do a little more work, but more places mean more eyes on your application. Remember that these days many youngsters just play what they can download from Steam (and I'm not blaming them for that) or some other shop tied to their device. They don't know how to install a game from the Internet, or they just don't care.

If you support Linux (as you should): Ubuntu is the biggest, but the Arch crowd is also a fast-growing player community (Fedora is non-existent and openSUSE is mediocre). Support at least Ubuntu and provide a DEB package (in the future, Snappy) and, if possible, a tar package (and, if you are willing, RPM and Arch packages). Have menu integration and do your homework with Linux. Linux gaming will be on the rise if Steam Machines take any market share, and you can already buy many Linux AAA games (over 1,000) from Steam today!
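Menu integration on Linux usually boils down to shipping a desktop entry with the package. As a minimal sketch (the file name, executable path, icon path and categories here are made-up examples, not URW's actual packaging), a file like /usr/share/applications/urw.desktop could look like:

```ini
[Desktop Entry]
Type=Application
Name=UnReal World
Comment=UnReal World RPG
Exec=/opt/urw/urw
Icon=/opt/urw/urw.png
Terminal=false
Categories=Game;RolePlaying;
```

With something like that in place, the game shows up in the desktop menus on Ubuntu and friends instead of forcing players to hunt for the binary.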

Sorry to write this, but the truth is that in many cases users are plain wrong (they just can't understand how to play your game), but they pay the bills, so do yourself a favour: fix bugs fast and give support at least on some forum. A community around a game is very hard to build and even easier to lose. Final wisdom: support multiple platforms. Mac OS X and Linux downloads are a minority, but do you want to cut those customers out?

OK, summer is crazy... endless and crazy... Peace out!


Loops

Running in circles Another run without earphones was completed last night – I’ve reverted to using a Polar chest strap heart rate monitor whilst the Jabras are in their non-working state. Relatively fast and slow interval running last night – at over 30°C and high humidity it’s not much fun. Jabra have offered to check […]


Web Open Font Format (WOFF) for Web Documents

The Web Open Font Format (WOFF for short; here using the Aladin font) is several years old. Still, it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf). TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality font display in the system and in local office and design documents. WOFF aims at closing the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking fonts on paper and in web presentations in almost the same way. In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, at least since version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
For use inside Inkscape one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed fonts. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do that, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<svg ...>
<style type="text/css">
@font-face {
font-family: "Aladin";
src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>
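For comparison, the same @font-face rule works in a plain HTML document too; a minimal test page (assuming the same fonts/ directory layout as above) could look like:

```html
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
h1 { font-family: "Aladin", serif; }
</style>
</head>
<body>
<h1>Aladin says hello</h1>
</body>
</html>
```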

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape's File menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf Python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.


More running

Running Unfortunately, it’s more running without heart rate monitoring earphones, as the fifth pair have now failed. I make that just about 5 pairs failed in 5 months. I am not going to warranty-replace these ones – there is quite clearly a flaw and I don’t really see the point in continuing to trek […]

the avatar of Cameron Seader

SUSE® OpenStack Cloud 5 Admin Appliance – The Easier Way to Start Your Cloud

If you used the SUSE OpenStack Cloud 4 Admin Appliance, you know it was a downloadable, OpenStack Icehouse-based appliance, which even a non-technical user could get off the ground to deploy an OpenStack cloud. Today, I am excited to tell you about the new Juno-based SUSE OpenStack Cloud 5 Admin Appliance.

With the SUSE OpenStack Cloud 4 release we moved to a single integrated version. After lots of feedback from users it was clear that no one really minded downloading something over 10GB, as long as it had everything they needed to start an OpenStack private cloud. In version 5 the download is over 15GB, but it actually has all of the software you might need, from SLES 11 or SLES 12 compute infrastructure to SUSE Enterprise Storage integration. I was able to integrate the latest SMT mirror repositories at a reduced size and include everything you might need to speed up your deployment.

The new appliance incorporates all of the needed software and repositories to set up, stage and deploy OpenStack Juno in your sandbox, lab, or production environments. Coupled with it are the added benefits of automated deployment of highly available cloud services; support for mixed-hypervisor clouds containing KVM, Xen, Microsoft Hyper-V, and VMware vSphere; integration of our award-winning SUSE Enterprise Storage; support from our award-winning, worldwide service organization; and integration with SUSE engineered maintenance processes. In addition, there is integration with tools such as SUSE Studio™ and SUSE Manager to help you build and manage your cloud applications.

With the availability of SUSE OpenStack Cloud 5, and based on feedback from partners, vendors and customers deploying OpenStack, it was time to release a new and improved Admin Appliance. This new image incorporates the most common use cases and is flexible enough to add in other components such as SMT (Subscription Management Tool) and SUSE Customer Center registration, so you can keep your cloud infrastructure updated.

The creation of the SUSE OpenStack Cloud 5 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful to quickly test their applications in SUSE OpenStack Cloud and validate their use case. For customers it has become a great tool for deploying production private clouds based on OpenStack.

With version 5.0.x you can proceed with the following steps to get moving with OpenStack.

It's important that you start by reading and understanding the Deployment Guide before proceeding. This will give you some insight into the requirements and an overall understanding of what is involved in deploying your own private cloud.

As a companion to the Deployment Guide, we have provided a questionnaire that will help you answer and organize the critical steps discussed in the Deployment Guide.

To help you get moving quickly the SUSE Cloud OpenStack Admin Appliance Guide provides instructions on using the appliance and details a step-by-step installation.

The most up-to-date guide will always be here

A new fun feature to try out in SUSE OpenStack Cloud 5 is the batch deployment capability. The appliance includes three templates in the /root home directory (NFS.yaml, DRBD.yaml, simple-cloud.yaml):

NFS.yaml will deploy a 2 node controller cluster with NFS shared storage and 2 compute nodes with all of the common OpenStack services running in the cluster.

DRBD.yaml will deploy a 2 node controller cluster with DRBD replication for the database and messaging queue and 2 compute nodes with all of the common OpenStack services running in the cluster.

simple-cloud.yaml will deploy 1 controller and 1 compute node with all of the common OpenStack services running in a simple setup. 

Now is the time. Go out to http://www.suse.com/suse-cloud-appliances and start downloading version 5, walk through the Appliance Guide, and see how quick and easy it can be to set up OpenStack. Don't stop there. Make it highly available and set up more than one hypervisor, and don't forget to have a lot of fun.


DocBook Authoring and Publishing Suite (DAPS) 2.0 Released

After more than two years of development, 15 pre-releases and more than 2000 commits, we proudly present release 2.0 of the DocBook Authoring and Publishing Suite, DAPS 2.0 for short.

DAPS lets you publish your DocBook 4 or DocBook 5 XML sources in various output formats such as HTML, PDF, EPUB, man pages or ASCII with a single command. It is perfectly suited for large documentation projects, providing profiling support and packaging tools. DAPS supports authors with a link checker, validator, spellchecker, and editor macros. DAPS exclusively runs on Linux.

Download & Installation

For download and installation instructions refer to https://github.com/openSUSE/daps/blob/master/INSTALL.adoc
Highlights of the DAPS 2.0 release include:

  • fully supports DocBook 5 (production ready)
  • daps_autobuild for automatically building and releasing books from different sources
  • support for EPUB 3 and Amazon .mobi format
  • default HTML output is XHTML, also supports HTML5
  • now supports XSLT processor saxon6 (in addition to xsltproc)
  • improved “scriptability”
  • properly handles CSS, JavaScript and images for HTML and EPUB builds (via a “static/” directory in the respective stylesheet folder)
  • added support for JPG images
  • supports all DocBook profiling attributes
  • improved performance by only loading makefiles that are needed for the given subcommand
  • added a comprehensive test suite to ensure better code quality when releasing
  • tested on Debian Wheezy, Fedora 20/21, openSUSE 13.x, SLE 12, and Ubuntu 14.10.

Please note that this DAPS release does not support webhelp. It is planned to re-add webhelp support with DAPS 2.1.

For a complete Changelog refer to https://github.com/openSUSE/daps/blob/master/ChangeLog

Support

If you have questions regarding DAPS, please use the discussion forum at https://sourceforge.net/p/daps/discussion/General/. We will do our best to help.

Bug Reports

To report bugs or file enhancement requests, use the issue tracker at https://github.com/openSUSE/daps/issues.

The DAPS Project

DAPS is developed by the SUSE Linux documentation team and used to generate the product documentation for all SUSE Linux products. However, it is not exclusively tailored for SUSE documentation, but supports any documentation written in DocBook.

The DAPS project moved from SourceForge to GitHub and is now available at https://opensuse.github.io/daps/.

the avatar of Klaas Freitag

ownCloud Chunking NG

Recently Thomas and I met in person and thought about an alternative approach to take our big file chunking to the next level. "Big file chunking" is ownCloud's algorithm for uploading huge files to ownCloud with the clients.

This is the first of three little blog posts in which we want to present the idea and get your feedback. This is for open discussion, nothing is set in stone so far.

What is the downside of the current approach? Well, the current algorithm needs a lot of distributed knowledge between server and client to work: the naming scheme of the part files, semi-secret headers, implicit knowledge. In addition to that, due to the character of the algorithm, the server code is spread too much over the whole code base, which makes maintenance difficult.

This situation could be improved with the following approach.

To handle chunked uploads, there will be a new WebDAV route, called remote.php/uploads. All uploads of files larger than the chunk size will go through this route.

In a nutshell, an upload of a big file will happen as parts in a directory under that new route. The client creates the directory through the new route. This initiates a new upload. If the directory could be created successfully, the client starts to upload chunks of the original file into that directory. The sequence of the chunks is set by the names of the chunk files created in the directory. Once all chunks are uploaded, the client submits a MOVE request that renames the chunk upload directory to the target file.

Here is a pseudo code description of the sequence:

1. Client creates an upload directory with a self-chosen name (ideally a numeric upload id):

MKCOL remote.php/uploads/upload-id

2. Client sends a chunk:

PUT remote.php/uploads/upload-id/chunk-id

3. Client repeats step 2 until all chunks have been uploaded successfully.

4. Client finalizes the upload:

MOVE remote.php/uploads/upload-id /path/to/target-file

5. The MOVE sends the ETag that is supposed to be overwritten in the request header to the server. The server returns the new ETag and FileID as reply headers of the MOVE.
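The numbered steps above can be sketched as a small piece of client-side Python. This snippet only builds the ordered request plan; the upload id, the zero-padded chunk names and the 10 MiB chunk size are assumptions for illustration, not part of the proposal, and the actual HTTP layer is left out:

```python
import math

def plan_upload(upload_id, file_size, target_path,
                chunk_size=10 * 1024 * 1024):  # assumed 10 MiB chunks
    """Return the ordered WebDAV requests for one chunked upload."""
    base = "remote.php/uploads/%s" % upload_id
    plan = [("MKCOL", base)]                      # step 1: create upload dir
    n_chunks = max(1, math.ceil(file_size / chunk_size))
    for chunk_id in range(n_chunks):              # steps 2-3: PUT each chunk
        # zero-padded names keep the chunk sequence sortable
        plan.append(("PUT", "%s/%05d" % (base, chunk_id)))
    plan.append(("MOVE", base, target_path))      # steps 4-5: finalize
    return plan

# A 25 MiB file with 10 MiB chunks needs MKCOL, three PUTs and the MOVE.
print(plan_upload("4711", 25 * 1024 * 1024, "/path/to/target-file"))
```

A real client would additionally send the expected ETag with the MOVE and, as described below, use PROPFIND on the upload directory to find out which chunks can be skipped when resuming.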

During the upload, the client can retrieve the current state of the upload with a PROPFIND request on the upload directory. The result will be a listing of all chunks that are already available on the server, with metadata such as mtime, checksum and size.

If the server decides to remove an upload, e.g. because it hasn't been active for some time, it is free to remove the entire upload directory and return status 404 if a client tries to upload to it. Also, a client is allowed to remove the entire upload directory to cancel an upload.

An upload is finalized by the MOVE request. Note that it's a MOVE of a directory onto a single file. This operation is not supported in normal file systems, but we think that in this case it has a nice, descriptive meaning. A MOVE is known as an atomic and fast operation, and that is how it should be implemented by the server.

Also note that only with the final MOVE is the upload operation associated with the final destination file. We think that this approach is already a great improvement, because there is always a clear state of the upload, with no secret knowledge hidden in the process.

In the next blog I will discuss an extension to this that adds more features to the process.

What do you think so far? Your feedback is appreciated, best on the ownCloud devel mailing list!


Adding more to the thermostat

Adding more to the Raspberry Pi Having already basically created a thermostat using a Raspberry Pi and a temperature monitor in conjunction with the WiFi power socket, I was wondering what functionality I could add. Having looked around for a bit, I found that there is a camera module available which can pick up infra-red […]

the avatar of Chun-Hung sakana Huang

GNOME.Asia Summit 2015


It's always a joy and a pleasure to join and organize the GNOME.Asia Summit. :-)

You can see many photos in the GNOME.Asia Summit 2015 Flickr group; all of the pictures come from our great participants and organizers.

If you ask me "Why do you work so hard for GNOME, free software and open source?"
---- The answer is "Friendship and smiles"

- Smiles are the greatest power to promote GNOME and FOSS, to me -

I have met many new friends and contributed to GNOME and FOSS at many different events.


It is also a good time for old friends to get together. ^^


It's very good to work with the GNOME and GNOME.Asia teams; I have learned a lot from them.

This year, we had a very strong local committee.

- Thanks to Utian for giving us the closing speech, and to BinLi -

- Thanks to Haris for making everything smooth, and with a smile -

- Thanks to Estu for always delivering everything right away; you should go to GUADEC ^^ -

- Thanks to Siska and our hosts [we had 2 great hosts this year ^^] for keeping everyone energized -

- Thanks to our local committee and organizers, who always smile and keep contributing -

I wanted to list pictures of all our local committee members and organizers here; I really suggest you see the pictures in the Flickr group.

There are great pictures to see from GNOME.Asia Summit 2015.


I also want to let you know:

- A great photo of contributors (in different ways) getting together -

I really love this year's topic, "GNOME Desktop for Everyone".


Everyone can use GNOME as their desktop ^^ -- I love it

I really enjoyed the GNOME.Asia Summit this year.

- I loved my new job this year: "human timer for the lightning talks" -

It was a great moment to see the GNOME.Asia Summit at Universitas Indonesia.



We had a great time at the Day 0 workshop.

- Thanks to our speakers for giving us the workshop ^^ -

I also loved seeing the openSUSE community folks in Indonesia, and it was my pleasure to join the openSUSE Indonesia Facebook group.


It was great to have friends from Taiwan joining the GNOME.Asia Summit together.


- Thanks to Shing Yuan for introducing how their team promotes open source at Aletheia University -

Thanks to everyone who joined the GNOME.Asia Summit.




We had lots of fun at GNOME.Asia.


Thanks to the speakers who came to the GNOME.Asia Summit and blogged about it.





I want to thank all our sponsors and the GNOME Foundation.
Without their support, we could not have this amazing GNOME.Asia Summit.








My Browsing Environment

I did a huge cleanup of my Firefox profile today, which resulted in me removing tons of stuff (from dead bookmarks to useless addons to many other subtle or not-so-subtle things).
The whole process helped return Firefox to a rather working/healthy state, but it also served as a good reminder of something I had had in mind for quite a while now.
I wanted to document my Firefox setup once I decided that it was stable or had reached a state that pleases me usability-wise (hard if you consider all the changes that have happened to it throughout the years).

This post also serves as a good linkable source for all those who ask me how I configure my Firefox, and as a list of addons I can practically recommend (through my own usage of them). So without further ado:
For simplicity, and to give their usage at a glance, I have separated them into 4 distinct categories. The list is of the form "Addon - reasoning"; I didn't include links since all of them are readily available on addons.mozilla.org.

Safe Browsing

NoScript - Blocks a lot of stuff (JS, Flash, Java, etc.); allows whitelisting of sites that you deem safe enough to let their code run.
RequestPolicy - Similar in spirit to NoScript, but for any cross-site requests.
Certificate Patrol - Monitors and shows changes in certificates (suspicious or not).
HTTPS Everywhere - Switches to HTTPS for those sites that provide it but don't offer it as the default redirect.
Privacy Badger - Pretty much covered by the others, but there to catch anything the others might miss.
RefControl - To see and control what the Referer header sends to the sites I visit.
Priv3 - Blocks tracking by social buttons (Like, +1, etc.).
Self-Destructing Cookies - Self-explanatory addon: once I leave the site, the cookies go BOOM!

Quality of Life

GreaseMonkey - Allows me to overwrite or extend site behaviour through JS.
Stylish - Overwrites a site's CSS (really a must for more sites than I can count).
Session Manager - I always had issues with tab management with the default manager; this solves it.
uBlock Origin - Ads! Many of them are so distracting and intrusive, I just don't like them.
Reddit Enhancement Suite - I browse Reddit A LOT; this makes it a tad more bearable.
Cleanest Addon Manager - With that many addons…
Google Search by Image - To make finding the source of an image quicker.

User Interface/User Experience

Pentadactyl - I am a vim user and very keyboard/keybinding-heavy; this adds better keyboard-based control.
Socialite - Shows the current vote rating on Reddit for the link I'm browsing, and allows me to quickly submit it there.
The Addon Bar (Restored) - Firefox removed the addon bar recently; this brings it back.
Config Descriptions - I fiddle a lot with about:config sometimes; this adds explanations.
Tree Style Tab - A must; a much better visualization of the tabs I open and their relationships.

Misc

Rikaichan - Allows me to get an explanation of an unknown Japanese word by just hovering over it with my mouse.

Just to see how things evolve (or hopefully progress), I'll also keep track of this and update the listing yearly, to see how my browsing environment will have changed by next June!