Maemo Music Player Client (MMPC) for Diablo
Finally I took the time to respond to multiple requests about providing mmpc 0.2.1 packages for Diablo. That also gave me the opportunity to try the maemo autobuilder for uploading my packages to the extras repository. And well, it worked out. It's especially useful for people not running Debian-based systems, because with the autobuilder you don't have to care about signing and uploading the packages with Debian tools like debsign and dput.
So, this is not a new release; these are just new packages built for the Diablo 4.1 distribution. I still have not managed to find the time to enhance mmpc further. There are just too many other things like work and university I have to care about. Either download the packages from Maemo Garage, or just get them through the extras repository as usual.
Driving the D-Bus with Ruby
Having looked at the D-Bus from the client perspective before, it's now time to get behind the wheel.
In order to drive the D-Bus, you need to decide on a couple of things:
- which bus to drive
  You can choose between the system(-wide) bus or the (per-)session bus.
- the service name
  This is the name under which other clients can find your service. The name has dotted notation, e.g. my.awesome.Service.
- the objects to publish
  Each service can provide any number of objects, denoted by a slash-separated path name, e.g. /my/awesome/Service/thing.
- the interface name
  Methods offered by objects are grouped by interface name, which is usually used to flag object capabilities. Interface names have dotted notation, usually prefixed by the service name, e.g. my.awesome.Service.Greeting.
- the methods
  These are the exported functions of the objects. Methods, also called members in D-Bus speak, have input parameters and a return value.
Obtaining a driver's license
Permission for driving the D-Bus is controlled by config files below /etc/dbus-1. /etc/dbus-1/system.conf controls the system bus, /etc/dbus-1/session.conf controls the session bus. Both load additional configuration files from sub-directories (/etc/dbus-1/system.d/ and /etc/dbus-1/session.d/, respectively).
These config files are XML-based busconfig snippets, defining policies on who is allowed to own a service and who can use which interfaces of the service.
A typical service configuration looks like this:
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
<policy user="root">
<allow own="my.awesome.Service" />
<allow send_destination="my.awesome.Service" />
</policy>
<policy context="default">
<allow send_destination="my.awesome.Service"
send_interface="my.awesome.Service.Greeting" />
<!-- introspection is allowed -->
<allow send_destination="my.awesome.Service"
send_interface="org.freedesktop.DBus.Introspectable" />
</policy>
</busconfig>
It gives root the right to own the service my.awesome.Service and to use it as a client. Any other user can only use the my.awesome.Service.Greeting interface of my.awesome.Service and can introspect it.
Convention tells you to name this busconfig file after your service with a .conf extension, e.g. my.awesome.Service.conf. Place it below /etc/dbus-1/session.d/ or /etc/dbus-1/system.d/, depending on which bus you want to drive.
Coding a D-Bus service
Examples for creating a D-Bus service using C are scarce and scary. Especially when programming in C while avoiding GLib, things become really ugly.
As we will see, using the ruby-dbus extension to Ruby makes creating D-Bus services a breeze.
We start by creating a DBus::Object, defining (dbus_)methods under a (dbus_)interface.
#!/usr/bin/env ruby
require 'dbus'
class Awesome < DBus::Object
  # Create an interface.
  dbus_interface "my.awesome.Service.Greeting" do
    dbus_method :hola, "in name:s" do |name|
      puts "Hola #{name}!"
    end
    dbus_method :hello, "in name:s, out res:s" do |name|
      "hello #{name}!"
    end
    dbus_method :ahoj, "in name:s, in title:s" do |name,title|
      puts "Ahoj #{title} #{name}!"
    end
  end
end
The dbus_method call gets the method name passed as a symbol, followed by its signature. The signature describes the input ("in") and output ("out") parameters by name (before the colon) and type (after the colon). The parameter names are just syntactic sugar, but the type is essential and enforced by D-Bus. Look at the D-Bus type signatures to find out how to specify types.
ruby-dbus uses the method name and signature to automatically generate an XML representation of the interface needed for D-Bus introspection. This allows clients to find out about available interfaces and their methods at runtime.
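For example, for the hello method above, the generated introspection data would look roughly like this (a sketch based on the standard D-Bus introspection format; ruby-dbus may emit slightly different surrounding nodes):

```xml
<interface name="my.awesome.Service.Greeting">
  <method name="hello">
    <arg direction="in" name="name" type="s"/>
    <arg direction="out" name="res" type="s"/>
  </method>
</interface>
```

A client can retrieve this XML at runtime by calling the Introspect method of the org.freedesktop.DBus.Introspectable interface on the object.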
The following do ... end block contains the method implementation. Every "in" parameter is passed as a value to the block in typical ruby fashion.
Next is connecting to the bus and obtaining the service. You can easily export multiple services if the bus configuration allows that.
# Choose the bus (could also be DBus::session_bus)
bus = DBus::system_bus
# Define the service name
service = bus.request_service("my.awesome.Service")
Then we create the object and export it through the service.
# Set the object path
obj = Awesome.new("/my/awesome/Service/thing")
# Export it!
service.export(obj)
Note that there is no requirement to create and export objects in advance. You can start with a simple object and method, creating additional objects as requested by the client. Hal is a typical example of such a service.
Finally we start the listener and wait for incoming requests.
# Now listen to incoming requests
main = DBus::Main.new
main << bus
main.run
main.run never returns, as you would expect from a service daemon. Either kill(1) it, code a timeout signal or implement a method calling exit.
Running the service
Running the D-Bus service is as easy as running any other Ruby program:
kkaempf> ruby my_service.rb
This, however, will get you an exception:
Exception `DBus::Error' at (eval):26 - org.freedesktop.DBus.Error.AccessDenied: Connection ":1.967" is not allowed to own the service "my.awesome.Service" due to security policies in the configuration file
Looking at our busconfig defined above immediately reveals the error. Only user root is allowed to own the service. So we better do a
kkaempf> sudo ruby my_service.rb
to get this running.
Now congrats on passing your driver's exam! ;-)
Not a Good Start Into a Problematic Year
KDE Project:
Like some other [open]SUSE developers I was laid off and am now forced to look for a new day job. It could have happened in better economic times, for sure. :-(
Pointers to new interesting job positions are gladly accepted. Bonus points the more they have to do with Open Source, Linux, Qt and KDE.
openSUSE-like update repositories for third party projects
Starting with 11.0, the openSUSE-Education project hosts its own, separate update repository. This is our solution for the strategic decision not to use the openSUSE Build Service as a repository for endusers but for development only.
So for production purposes, we always recommend using our frozen repositories on http://download.opensuse-education.org/. But as “frozen” implies, the repositories there are frozen at the time the openSUSE-Education team declares them “Goldmaster” (which is the case for all except the 11.1 repo at the moment) – and no package updates or changes happen for these repositories.
The openSUSE-Education team has relatively long development and testing cycles – but as everywhere, shi* happens, and so it might be that some of the packages in the frozen repository are broken or need a security fix. For this, we have created update repositories (for at least 11.0 and the upcoming 11.1 Edu-Release) which are disabled by default, but added to the system during installation of the openSUSE-Education-release package. (Reason behind this decision: if an administrator installs openSUSE-Education in a school, he wants to “mirror” the update repositories and not point every client to the official ones.) All a user has to do is to enable this update repository via YaST or via “zypper mr -e ‘openSUSE-Education Updates'”.
We’re using the “updateinfo.xml” file format described in the openSUSE-Wiki. Currently, we have 5 package updates/fixes for 11.0 in the update repository – and this might grow over time. The updates are shown in the current online-update applets as “normal” updates like the openSUSE ones. Interestingly, the user can’t see whether an update is from the official openSUSE or the openSUSE-Education update repository – even though we use a different “from” tag. Perhaps we have to “play” with the “release” or other tags: testing is needed, as it looks like nobody has tried this before…
update
Bought some new digital cameras. Two cheap ones from eBay for gphoto2 testing, and one Nikon D90 to replace the D70s for taking good pictures. Not many pictures taken yet with the latter.
Spent two train rides on more improvements to the libgphoto2 API, working towards a clearer API without fixed-size buffers...
My API adjustments are not done yet though. One new CameraFile type (besides memory and fd style) is still missing, and some more cleanups are necessary.
And I am ready for spring, but the weather still throws snow at us here in Nuernberg.
So much for Friday afternoon.
NTP With Autokey Authentication
Recently I had to verify the functionality of an NTP server with autokey authentication. Though I thought it would be simple, it took me several days to find out how to configure the authentication. I found dozens of different howtos, but none of them worked for me. So when I finally came across one that worked, I created several shell scripts that help to configure different kinds of (unicast) NTP authentication.
The shell scripts are paired: setup_server_XYZ.sh and setup_client_XYZ.sh, where XYZ is the identity scheme used. The MV identity scheme seems not to work on SLES 9. The usage of the scripts is:
- Edit the PASS/SPASS/CPASS variables in both the client and server script! These are the passwords for the generated keys.
- Edit the SERVER variable in setup_server_XYZ.sh - that is the server which your server will use for its own synchronization.
- Copy setup_server_XYZ.sh to the ntp server.
- Run setup_server_XYZ.sh on the server. If XYZ is mv, the script takes one argument N, which is the number of client keys to generate + 1 (N-1 clients will be able to use the server).
- Copy setup_client_XYZ.sh to the client.
- Run setup_client_XYZ.sh. If XYZ is mv, the script takes an additional argument X, the client id - a number between 1 and N-1 - which specifies which key the client will use. The script will ask for the server root's password twice - first to identify the key to download from the server, and second to actually download the key.
- Make sure that both server and client are not configured to run in a chroot (or copy/move /etc/ntp/ and /etc/ntp.conf to the appropriate chroot directory).
- Start ntpd (manually or with initscript) on server and clients. It takes some time to establish the link between client and server (approximately 15 minutes).
When ntp is set up and running, the status can be checked with several commands:
- ntpq -p - prints the list of servers the host has configured as its NTP servers. If there's a star '*' before a server name, the host uses that server for synchronization.
- ntpq -c as - prints the list of associations - including the authentication status. The following output shows a successfully authenticated association:
ind assID status  conf reach auth condition  last_event cnt
===========================================================
  1 26132  f694   yes  yes   ok   sys.peer   reachable  9
The scripts have the following limitations:
- The server is configured to use only one upstream server - which is not a good idea - it should use more than one (the same holds for clients). To fix this, just add more server lines to /etc/ntp.conf.
- Using /dev/urandom is not ideal either; /dev/random (or a file generated by the command openssl rand) should probably be used for maximal security.
- The configuration is saved to /etc, but ntpd usually runs in a chroot.
- The script setup_server_mv.sh always loops endlessly while generating client keys - it seems to continuously reject duplicate keys - this is probably not a problem in the script itself. So, try to use it but don't be surprised ;-)
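To illustrate the /dev/urandom point, a seed file could be prepared with openssl rand and then passed to ntp-keygen via RANDFILE (a sketch; the file name and size are arbitrary choices, not from the original scripts):

```shell
# Write 512 random bytes to a seed file and restrict its permissions.
openssl rand -out ./ntp-seed 512
chmod 600 ./ntp-seed
# The key generation step would then use it instead of /dev/urandom, e.g.:
# RANDFILE=./ntp-seed ntp-keygen -T -G -p "$PASS"
```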
setup_server_gq.sh:
#!/bin/bash
PASS=lala
SERVER=ntp2.suse.de
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $SERVER
keysdir /etc/ntp
crypto pw $PASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -T -G -p $PASS
echo server is set up. Start it with \'ntpd -u ntp [-d]\'
setup_client_gq.sh:
#!/bin/bash
PASS=lala
ping -c1 "$1" > /dev/null
if [ $? -ne 0 -o $# -ne 1 ] ; then
    echo "Needs one argument - ntp server address"
    exit 1
fi
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $1 autokey
crypto pw $PASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
keysdir /etc/ntp
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -H -p $PASS
KEY="`ssh root@$1 ls /etc/ntp/ntpkey_GQpar_\* | sed 's,.*/,,'`"
if [ "x$KEY" = "x" ] ; then
    echo Error while fetching key name
    exit 1
fi
scp root@$1:/etc/ntp/$KEY .
LINK="`echo $KEY | sed 's/^ntpkey_GQpar_\(.*\)\.[0-9]\+$/ntpkey_gp_\1/'`"
ln -s $KEY $LINK
echo client is set up. Start it with \'ntpd -u ntp [-d]\'
setup_server_iff.sh:
#!/bin/bash
SPASS=lala
CPASS=alal
SERVER=ntp2.suse.de
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $SERVER
keysdir /etc/ntp
crypto pw $SPASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -T -I -p $SPASS
RANDFILE=/dev/urandom ntp-keygen -e -q $SPASS -p $CPASS > tmp_key
echo keyname "'`head -n1 tmp_key | sed 's/^# *//'`'"
mv tmp_key `head -n1 tmp_key | sed 's/^# *//'`
echo server is set up. Start it with \'ntpd -u ntp [-d]\'
setup_client_iff.sh:
#!/bin/bash
CPASS=alal
ping -c1 "$1" > /dev/null
if [ $? -ne 0 -o $# -ne 1 ] ; then
    echo "Needs one argument - ntp server address"
    exit 1
fi
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $1 autokey
crypto pw $CPASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
keysdir /etc/ntp
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -H -p $CPASS
KEY="`ssh root@$1 ls /etc/ntp/ntpkey_IFFkey_\* | sed 's,.*/,,'`"
if [ "x$KEY" = "x" ] ; then
    echo Error while fetching key name
    exit 1
fi
scp root@$1:/etc/ntp/$KEY .
LINK="`echo $KEY | sed 's/^ntpkey_IFFkey_\(.*\)\.[0-9]\+$/ntpkey_iff_\1/'`"
ln -s $KEY $LINK
echo client is set up. Start it with \'ntpd -u ntp [-d]\'
setup_server_mv.sh (loops!!!):
#!/bin/bash
PASS=lala
SERVER=ntp2.suse.de
if [ "x`echo $1 | grep '^[1-9][0-9]*$'`" = "x" ] ; then
    echo Needs one argument n - number of client certificates to create + 1.
    exit 1
fi
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $SERVER
keysdir /etc/ntp
crypto pw $PASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -V $1 -p $PASS
echo server is set up. Start it with \'ntpd -u ntp [-d]\'
setup_client_mv.sh (untested, because setup_server_mv.sh loops!):
#!/bin/bash
PASS=lala
ping -c1 "$1" > /dev/null
if [ $? -ne 0 -o $# -ne 2 -o "x`echo $2 | grep '^[1-9][0-9]*$'`" = "x" ] ; then
    echo "Needs two arguments:"
    echo " 1. ntp server address"
    echo " 2. ntp client id 1..n-1 - which certificate to get."
    exit 1
fi
rm -fr /etc/ntp
mkdir -p /etc/ntp
cat << EOF > /etc/ntp.conf
server $1 autokey
crypto pw $PASS randfile /dev/urandom
logfile /var/log/ntp
logconfig =all
keysdir /etc/ntp
EOF
cd /etc/ntp
RANDFILE=/dev/urandom ntp-keygen -H -p $PASS
KEY="`ssh root@$1 ls /etc/ntp/ntpkey_MVkey$2_\* | sed 's,.*/,,'`"
if [ "x$KEY" = "x" ] ; then
    echo Error while fetching key name
    exit 1
fi
scp root@$1:/etc/ntp/$KEY .
LINK="`echo $KEY | sed 's/^ntpkey_MVkey[1-9][0-9]*_\(.*\)\.[0-9]\+$/ntpkey_mv_\1/'`"
ln -s $KEY $LINK
echo client is set up. Start it with \'ntpd -u ntp [-d]\'
That's all; I hope this can save you some time...
(This text has also been published on openSUSE wiki)
Why the Buildservice is currently not for endusers
Every time I hear people rendering homage to the openSUSE Build Service as a “nice tool for endusers to get packages”, I’m a bit confused. From my point of view, the Build Service in its current state is not really usable for endusers.
Here are some reasons why:
- No license displayed before adding the repository (have a look at the Non-OSS or Education repositories to get an idea how this works) – this might be important if people crash their hardware after adding a repository with special kernel modules without being informed about possible problems…
- No package groups called “patterns” – so people can’t get an easy overview of the various software areas a project provides (using YaST -> Software -> RPM Groups is not usable for me since someone decided to show just a plain structure there).
- No package translations (sometimes I like to see the Summary and Description of a package translated into my own language to understand the real area of application of the package).
- As the repositories change over and over again (sometimes just because a dependent package in another project has changed), you need to download at least the metadata of a project again and again. For the Education repository in the Build Service this means transferring 4MB of data each time you call “zypper”. Not really nice for people with low bandwidth.
- Endusers will “upgrade” their installed packages again and again – without knowing the real reason for the upgrade until they have a look at the package changelog (hoping the developer has added some information there). Even the Factory distribution has this problem with automatic rebuilds (resulting just in increased release numbers) and no information for endusers about why they had to download tons of MB each week…
- Real packagers use their Build Service repositories for development and testing – so some packages will likely be broken during this phase – and only the packager knows when it will be really “stable”. (I’m talking about “real packagers” as I often see people using their home projects in the Build Service just to create their own repository containing packages they use – just by linking packages from other projects without any changes. But this is another topic…)
Projects like KDE or GNOME try to mitigate some of these problems by using the “publish disabled” option until they want to release a new version of their project. But this makes testing for developers and testers harder: with publishing disabled, developers and testers can’t easily download and test new packages via the synced, public repositories – they have to download their packages via the API of the Build Service one by one.
With the upcoming releases of the Build Service, this will hopefully change. Starting with 1.5, project admins can create a “full featured YaST repository” for their projects, covering points 1, 2 and perhaps 3. But I think we also need more or less “frozen” repositories beside the current project repositories to cover 4, 5 and 6.
These “frozen” repositories should IMO be placed in separate directories (like the official openSUSE distribution directories) to make it clear to everyone that these repositories have an “enduser focus” and not a “developer focus”. This way, a project like KDE or GNOME could:
- use the current repositories below download.opensuse.org/repositories as their “bleeding edge” development repos for packagers and testers.
- have a separate enduser repository with additional features like patterns and translations using the “ftp tree generation” feature.
…and that's the reason why the summary of this blog contains the word “currently” – I think the Build Service is on a good path to becoming a really good solution for developers and endusers. But we should find a final solution for the open issues mentioned above before we point endusers to this tool.
How developers see openSUSE
You probably know that a lot of openSUSE developers are sitting in the SUSE office in Prague, Czech Republic. They are also openSUSE users.
The whole story started with a flame on a mailing list about why some of us are not happy with the current state of openSUSE. It turned out there are a lot of different issues. So, we met on a rainy winter Friday 3 weeks ago to collect those issues as well as the things that people consider to be good about openSUSE.
The result of the hours-long discussion is a list of positive and negative things about openSUSE – a very subjective view of the group of developers in Prague. Go, look at the list. There are a lot of problems that I personally see lurking in our community, spelled out loud. The range is wide, from basic community issues to very technical problems that are basically missing features in the distribution.
So, we have collected the feedback. But the question is, what to do with it?
Firstly, I believe the lists are great food for thought. You might not agree with everything, but still, there is some truth in it. At least, these are problems that people consider important enough to try to solve – encouraging.
Secondly, consider this blog a call for contribution. If you believe some of the areas are really worth improving, get in touch with the people listed on the wiki, improve the descriptions in the wiki, propose solutions. One restriction though – please, do not add additional items to the page. We want to keep the ideas where they belong – features eventually end up in openFate, project-related problems on the mailing lists, … Also, this is not a general list of issues the openSUSE project needs to address. As I’ve written above – the page is a subjective view of a group of people. If you think we need a more general approach, please bring the idea to the openSUSE mailing lists.
Looking forward to your feedback!
Overloading library function
Sometimes during testing (especially while testing a maintenance update), I run into the problem of how to test the reaction of a program to an unusual function return value (e.g. a special error), data, etc. The problem is that I need to test the unmodified program binary, and that it is almost impossible to set up a situation where the error "naturally" occurs.
If the function is in a shared library, there is a simple way to force the program to use another function instead of the one from the library. The program will call the new function, which will return the specific value (the unusual error I need to test).
The magic "tool" is LD_PRELOAD variable. When the program is loaded to memory, libraries specified in LD_PRELOAD are loaded before libraries needed by the program. If I have function strdup() in my custom library (loaded by LD_PRELOAD), it will be used instead of strdup() from standard C library.
So for a (very simplified) example, let's say I need to test that the program prog correctly checks whether strdup() returns NULL (I assume that prog segfaults on NULL access): I'll write a simple library libexploit which only contains my new strdup().
exploit.c:
#include <stddef.h>

char *strdup(const char *s)
{
	return NULL;
}
Makefile:
all: libexploit.so.1.0

clean:
	rm -f libexploit.so.1.0 *.o

libexploit.so.1.0: exploit.o
	gcc -shared -Wl,-soname,libexploit.so.1 -o libexploit.so.1.0 exploit.o

exploit.o: exploit.c
	gcc -Wall -fPIC -c exploit.c
Now when I build the simple library, I get libexploit.so.1.0, which I preload into the program:
LD_PRELOAD="`pwd`/libexploit.so.1.0" prog
And that's all - the program gets NULL from all calls to strdup().
Simple, right? There is one catch, however. Or two.
The first problem is that the original function is replaced by the new one, so you cannot call the original anymore (not even from your "new" function).
The second problem is that even the original library will start using your function. So if the library itself calls the overloaded function, it now uses the new one. This may result in unexpected library behavior.
Neither of these problems is (AFAIK) solvable in an easy way. The only way I've found so far is refactoring the original library in the following steps:
- Rename the original function - e.g. from strdup() to __strdup().
- Replace all calls of function strdup() with calls of function __strdup() in the library source - this will ensure that the library code will always call the correct function.
- Create a new function strdup() which will contain the testing code (and can also call the original __strdup() function).
- Compile the library and load it with LD_PRELOAD as described above.
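The refactoring above can be sketched for the strdup() case as follows (a hypothetical sketch: the trigger string is made up for illustration, and a real library would rename its own internal implementation instead of the stand-in shown here):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Steps 1+2: the renamed original implementation; all calls inside
   the library source are changed to use __strdup() directly, so the
   library keeps its original behavior internally. */
char *__strdup(const char *s)
{
	size_t len = strlen(s) + 1;
	char *copy = malloc(len);
	if (copy != NULL)
		memcpy(copy, s, len);
	return copy;
}

/* Step 3: the new strdup() contains the testing code and can still
   call the original via __strdup(). */
char *strdup(const char *s)
{
	/* hypothetical trigger: simulate an allocation failure
	   for one magic input only */
	if (strcmp(s, "trigger-enomem") == 0)
		return NULL;
	return __strdup(s);
}
```

Compiled into the library and loaded with LD_PRELOAD as before, this lets the program behave normally except for the injected failure case.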
Now, that's finally all.
(This text has also been published on openSUSE wiki)
MonoDevelop 2.0 beta 1
The one I like most is the support for per-project policies. This feature has been planned for a long time, but other work has been delaying it. Michael Hutchinson and I had a chance to talk quite a lot about it while I was in Boston a couple of months ago. The policies model allows setting properties at the global, solution, folder and project levels. Settings such as tab width can be defined at any of those levels and will cascade down to the lower levels (where they can be overridden if required). Many settings are already available in this way, and many more will be in future releases.

Another new feature, or rather improvement, is the support for multiple frameworks. MD already had support for targeting the 1.1/2.0 CLR for quite a long time, but did not have the concept of 'target framework', which is more generic. For example, .NET 3.0 is based on the 2.0 CLR and it just includes some additional assemblies. What complicates things a bit is that Mono does not follow the .NET releases, so for example Mono 2.0 includes bits from all .NET versions. To simplify all this and to be compatible with MSBuild, it is now possible to select the target .NET framework, which includes 1.1, 2.0, 2.1 (Silverlight), 3.0 and 3.5. The project system is fully aware of the chosen target framework, so for example it won't let you reference a 3.5 project from a 3.0 project.
The source editor keeps improving in many ways. Mike Krueger has spent quite a lot of time fixing issues in code completion, which now works in many more contexts. My contribution to code completion (besides stabilization work in the parser database) is support for completion of generic types with constraints. For example, in the following class code completion shows the Dispose method because there is a constraint forcing the generic argument to implement it:

There are other improvements, such as the new Go to File dialog I blogged about some time ago, better support for completion in ASP.NET projects, and fixes in the GTK# designer. There is still a lot of work to do, but we are getting close to 2.0.