Android Application for openSUSE Conference
Thanks to Matt Barringer, we now have an Android application for showing the openSUSE Conference schedule, details about talks and presentations, and some information about the presenters. The application can be found in the Google Play Store under the name SUSE Conferences and it is, of course, free. It uses an internet connection to download the data - there will be a wireless network at the site with the SSID Conference (you will get the details during registration, when picking up your badge).
When you start the application, you can select the conference schedule you are interested in. Two conferences are currently available:
- Bootstrapping Awesome - 2012
- openSUSE Summit - 2012
Selecting the conference:
Conference schedule, click on a talk to get more details:
Details for a selected talk, click on the gray star to add it to your personal schedule (turns to yellow):
Talk added to your personal schedule, you still have some time left for more talks :)
Currently downloaded news:
OT: Shimano Alfine 11 Di2 SEIS and backwards compatibility with existing hubs
This post is completely OT for openSUSE, but I don’t have a better place to share this useful snippet of information.

Shimano has just released its electronic shifting system for Alfine internally geared hubs. In mechanical Alfine, the gear cable pull is translated into rotation and gear selection by a detachable unit that sits on the end of the hub. Alfine Di2 SEIS replaces this with a MU-S705 motor unit controlled by an electronic brake lever. But it also introduces a new hub (SG-S705). I wondered whether the motor unit can be retrofitted to existing hubs, as I have the original Alfine 11 (SG-S700) on my Genesis Day One, and I’m not completely happy with the Versa drop bar brake lever integrated shifter*.
So I mailed Paul Lange, Shimano’s German distributor, to ask. The answer I got is that the SG-S700 hub cannot be used with the Di2 components, because it has a return spring for upshifts, whereas the SG-S705 does not, since gear selection in both directions is actively performed by the motor. If you put a MU-S705 motor unit on an SG-S700, it would work against the return spring.
As far as I understood it, there is a spring in the SM-S700 cable end unit – I didn’t know there is also one in the hub itself, but I’ll check next time I have the wheel out. Until then, my dreams of perfect drop bar shifting are just that, because at 400 quid SRP the hub is a big investment. Maybe Shimano will take pity on me and make a mechanical STI…
* Mostly because there is no little cam decoupling the upshift lever from the cable spool inside the Versa shifter, so sometimes it shifts up several gears at once.
Snapper for Everyone
With the release of snapper 0.1.0, non-root users are now also able to manage snapshots. On the technical side, this is achieved by splitting snapper into a client and a server that communicate via D-Bus. As a user, you should not notice any difference.
So how can you make use of it? Suppose the subvolume /home/tux is already configured for snapper and you want to allow the user tux to manage the snapshots for her home directory. This is done in two easy steps:
- Edit /etc/snapper/configs/home-tux and add ALLOW_USERS="tux". Currently the server snapperd does not reload its configuration, so if it is running, either kill it or wait for it to terminate by itself.
- Give the user permissions to read and access the .snapshots directory, ‘chmod a+rx /home/tux/.snapshots’.
For details consult the snapper man-page.
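The two setup steps above can be sketched as follows. The paths are stand-ins (a scratch copy is used so the commands are safe to run anywhere; on a real system you would edit /etc/snapper/configs/home-tux and chmod /home/tux/.snapshots as root):

```shell
# Stand-ins for /etc/snapper/configs/home-tux and /home/tux/.snapshots,
# so this sketch can run without touching a real system.
conf=$(mktemp)
snapdir=$(mktemp -d)

# Step 1: allow the user tux to manage snapshots for this config.
echo 'ALLOW_USERS="tux"' >> "$conf"
# (On a real system, also kill snapperd or wait for it to exit,
# since it does not reload its configuration.)

# Step 2: let tux read and traverse the snapshots directory.
chmod a+rx "$snapdir"

grep -q 'ALLOW_USERS="tux"' "$conf" && echo "setup done"
```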
Now tux can play with snapper:
tux> alias snapper="snapper -c home-tux"
tux> snapper create --description test
tux> snapper list
Type   | # | Pre # | Date                             | User | Cleanup  | Description | Userdata
-------+---+-------+----------------------------------+------+----------+-------------+---------
single | 0 |       |                                  | root |          | current     |
single | 1 |       | Tue 16 Oct 2012 12:15:01 PM CEST | root | timeline | timeline    |
single | 2 |       | Tue 16 Oct 2012 12:21:38 PM CEST | tux  |          | test        |
Snapper packages are available for various distributions in the filesystems:snapper project.
So long and see you at the openSUSE Conference 2012 in Prague.
Heading to MonkeySpace
MONKEYSPACE 2012
MonkeySpace, formerly known as Monospace, is the official cross platform and open-source .NET conference. Want to learn more about developing for the iPhone, Android, Mac, and *nix platforms using .NET technologies? How about developing games or learning more about open-source projects using .NET technologies? MonkeySpace has provided an annual venue to collaborate, share, and socialize around these topics and more.
———————————————————————————————–
I’m really excited to see all the new stuff that is going on in the Mono world. It looks like they have some awesome presentations and presenters lined up for this conference. If you have some time and are going to be in the area, you’d be foolish not to attend.
Of course, big thanks to the awesome sponsors and, not to be forgotten, the organizers!
Also, pay attention to Open Space on the last day from 13:45 – 17:15. Come and help contribute to open source projects! I’ll be there working on f-spot.
xtrabackup for MySQL
If you run data-driven applications like I do, you are probably already running some kind of backup and have a disaster recovery plan. I hope you are not still using SQL dumps?
I have been using Percona XtraBackup professionally for MySQL backups for a while now. Especially if your database access is highly transactional you will find it useful that you can get consistent non-blocking, non-purging backups while continuing to serve transactions. Who wants downtime anyway?
Under the hood, the software takes a dirty copy of the InnoDB tablespaces on disk and captures the transaction log records required to bring all of them to a specific point in time, or rather LSN, using a patched version of the mysqld binary. The preparation / restore step applies that log to the files, resulting in tablespaces equivalent to those left behind by a clean MySQL shutdown.
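As a minimal sketch, the basic cycle looks like this. The paths and credentials are placeholders, and the commands are echoed so the sketch is safe to run without a live MySQL server; drop the echo to run them for real with Percona XtraBackup installed:

```shell
# Take a full backup while the server keeps serving transactions.
echo innobackupex --user=backup --password=secret /var/backups/mysql/
# "Prepare" the backup: apply the captured log so the tablespaces
# end up consistent, as after a clean shutdown.
echo innobackupex --apply-log /var/backups/mysql/2012-10-16_12-00-00/
```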
Mixing transactional with non-transactional database engines is possible if you are willing to accept some blocking time while backing them up. If you are using MySQL replication, you can also use this to create a new slave from a master, or to clone a slave from another slave, without downtime for either.
The upgrade to the 2.0 series adds, among other things, parallel IO and parallel compression. This requires a new streaming file format, xbstream, in addition to the previous tar. Think of it as a tar with multiple input pipes.
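The 2.0 features above can be combined on the command line roughly like this (again echoed as a dry run; the flag names follow the XtraBackup 2.0 manual, the target path is a placeholder):

```shell
# Stream the backup in xbstream format, with 4 copy threads and
# 4 compression threads, into a single file.
echo "innobackupex --stream=xbstream --parallel=4 --compress --compress-threads=4 ./ > backup.xbstream"
# Unpack it later with the xbstream tool:
echo "xbstream -x < backup.xbstream"
```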
I added the xtrabackup package to openSUSE; it is available in the server:database project (repo, SLE 11) right now and will also be part of the next openSUSE release.
Remember that these are only tools. Love your data and protect your business. A copy is not a backup. A backup that isn’t monitored for success is not a backup. A backup that is not proven to restore successfully is barely a backup.
Contact me if you need help setting this up.
Chairing the openSUSE Board, SUSEcon & openSUSE Summit
I'm supposed to be flying over the Atlantic right now to attend the OpenStack Summit, but British Airways had other plans for me: I'm stuck in London for a few hours, and will head towards New York tonight, before going to the west coast. But since I have Internet access, I guess it's a good opportunity to write about something that happened last month: I joined the openSUSE Board as chairman!
(And if you were wondering: I'm still part of the SUSE Cloud team, and the chairman position simply comes on top. The fact that I'm heading to the OpenStack Summit should have given you a hint already ;-))
For those who don't know about the governance structure of openSUSE, the openSUSE Board is a group of six people that exists to serve and guide the community. This includes working on legal and financial topics, talking to our various sponsors, etc., but it specifically does not deal with the technical side of the project. Of the six members, five are elected by the community, and one (the chairman) is appointed by SUSE.
The new openSUSE Board Chairman. Picture by Andreas Jaeger
Until recently, Alan Clark was the chairman, but he recently got elected chairman of the OpenStack Foundation. I was surprised when I got asked if I'd be willing to step up, but that was a pleasant surprise: I was actually considering running for the next board elections, so it didn't take me too much thinking to accept :-) I got interviewed twice about this new position. This is quite cool, as it shows how much people are interested in what's going on in the openSUSE world.
I do believe there's a lot the Board can do to help the project, and there are many ideas I'd like to push, a lot of them coming from my experience at the GNOME Foundation. But the way I (and I hope, many others) see it, the chairman is just one member among others; of course, the chairman should be a bit more proactive in pushing the others, but that's the main difference. It's therefore important to have great people in the Board, like we do today. But guess what, we also have elections coming in a few weeks, so if you feel you can make a difference, consider running! If you don't want to run but have ideas to share, don't hesitate to mail the board or me to send us your input.
Because of this new position, I went last month to Orlando, in order to attend SUSEcon and the openSUSE Summit that was organized just after SUSEcon. This was really a last minute decision: I booked my flights three days before leaving... Both were amazing events, especially when you think that this was the first year for both events.
SUSEcon
Of course, it was a great opportunity for me to chat about openSUSE and the Board with many people, including Ralf Flaxa (VP of Engineering at SUSE) and Michael Miller (VP of Global Alliances & Marketing at SUSE) who both care a lot about openSUSE. It turns out they simply told me, when I asked if they were expecting anything special from the chairman: do what's good for the project!
Pretty cool to hear :-)
It was no surprise, but there was quite some discussion about the cloud during SUSEcon. And actually, I was surprised at how much interest there was from everyone. I was helping on the SUSE Cloud booth, and many people came in — some to just learn about the field in general, while others had some pretty deep questions about the technologies. Everyone was mentioning OpenStack during the keynotes, and the SUSE Cloud product was deployed live during the closing keynote to show how easy it is to deal with. SUSE also produced some fun videos about the cloud.
SUSE's birthday cake. Picture by Andreas Jaeger
Since SUSE is 20 years old now, SUSEcon was also the perfect time to celebrate SUSE's birthday. Some kernel hackers were nice and took time to participate in a happy birthday video, we had a fun birthday party, and we also went to see the Blue Man Group (great show!). Andreas Jaeger uploaded pictures of the whole event, if you want to remember what you enjoyed there, or see what you missed ;-)
openSUSE Summit
The openSUSE Summit had many people coming (more than I expected!), and it was a lot of fun. Bryen and the whole team did an amazing job with the organization, and I think everybody enjoyed the family atmosphere that this event had. There were also great sessions (although I only attended two of them), and thanks to ownCloud and Omnibond, we had fun parties in the evenings. I especially loved building the small boats (or a car, like Simona and I did).
The openSUSE Summit also hosted a GNOME hackfest on user observation. Anna, Federico and Cosimo wrote about it already. It looked like it was a useful hackfest, from what I could see!
Scott loved the Summit! Picture by Andreas Jaeger
If you want to see pictures from the openSUSE Summit, go check Andreas' gallery. Between the sessions, the geeko lounge, the parties, huge geekos, a raffle to win a Raspberry Pi (all profits went to the GNOME Foundation), and more, there's lots to see :-)
Oh, and I had the opportunity to talk with Sam Varghese during SUSEcon about how GNOME is doing. I hope the resulting article gives a new perspective about the current direction to people outside the GNOME community.
My flight is probably about to leave; time to look for the boarding gate...
[ANN] Automated Testing on Mac (ATOMac) 1.0.1 released
With this announcement, LDTP now offers cross-platform GUI testing! I'm excited to share this news. Please spread the word.
The ATOMac team is proud to announce a new release of ATOMac.
About ATOMac:
Short for Automated Testing on Mac, ATOMac is the first Python library to fully enable GUI testing of Macintosh applications via the Apple Accessibility API. Existing tools, such as using appscript to send messages to accessibility objects, are painful to write and slow to use. ATOMac has direct access to the API; it is fast and easy to use for writing GUI tests.
Changes in this release:
* LDTP compatibility added. LDTP allows testers to write a single script that will automate test cases on Linux, Windows, and now Mac OS X. Information and documentation on LDTP can be found at the LDTP home page.
* Detailed documentation - Sphinx has been configured to generate documentation for ATOMac. When this documentation is uploaded, it will be linked from the home page[1].
* Various fixes to reading and writing certain accessibility attributes.
* Sending function keys and newlines now works as intended.
A detailed changelog is available.
Download source
Documentation references:
Sphinx documentation is being uploaded. In the meantime, please see the readme at the bottom of the GitHub page listed above.
Report bugs
To subscribe to ATOMac mailing lists, visit
IRC Channel - #atomac on irc.freenode.net
cups-pk-helper & desktop-file-utils releases
In the last two weeks, I took some time to review patches submitted for cups-pk-helper and desktop-file-utils, and worked a bit on the code. This means new releases, which keeps me on track for the "two releases a year" schedule I follow for these projects :-)
cups-pk-helper 0.2.4
It is recommended to update to the 0.2.3 version of cups-pk-helper, due to a security flaw in the old code (CVE-2012-4510). I found it while fixing a compiler warning about a return value being ignored; re-reading that old code, I realized that it was, hrm, not really solid, that it was not checking permissions, and that it could actually be abused to overwrite any file (among other issues)... Thankfully, this can only be exploited if the user explicitly approves the action since it's protected with polkit authentication (using the admin password). So this is not as severe as it could have been. I want to thank Sebastian Krahmer from the SUSE Security Team, who was really helpful in reviewing my iterative fixes.
The other changes are build-time compatibility with cups 1.6, some additional paranoid processing of the input we get via dbus, and updated translations (thanks to transifex).
Update: the 0.2.3 tarball had a small bug when detecting the cups version, try 0.2.4 instead ;-)
desktop-file-utils 0.21
The 0.21 release of desktop-file-utils is mainly about an update of the validator to deal with several recent (and not so recent) changes in the XDG Menu specification: a main category is not required anymore (although still recommended if one main category makes sense for the application), Science is now a main category, and new categories have been registered (including the Spirituality one, which was discussed years ago).
The validator now also correctly handles the new values for the AutostartCondition field used by GNOME 3, and features some experimental hints in its output for .desktop files that could possibly be improved. Those hints are experimental since I'm unsure whether they will really help or just annoy people (note that they can be ignored with the --no-hints option). At the moment, they only deal with categories, but I guess it shouldn't be hard to find more hints to add (such as "hey, you're missing an icon!").
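As an illustration, here is a hypothetical minimal .desktop file using the new Science main category, and the validator invocation with hints disabled (echoed, since desktop-file-validate may not be installed everywhere; it ships with desktop-file-utils):

```shell
# A minimal, hypothetical desktop entry for an imaginary application.
cat > /tmp/example.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Example
Exec=example
Categories=Science;
EOF
# Validate it; --no-hints suppresses the new experimental hints.
echo desktop-file-validate --no-hints /tmp/example.desktop
```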
Of course, while working on desktop-file-utils, I took a look at some patches and issues that were recently discussed on the xdg mailing list, and pushed some changes to the menu specification. I'm a bit sad about the fact that nearly nobody is actively working on most specs (blaming myself too, since I look at patches/issues only a few times a year) and that feedback about the proposed changes is rare (these days, I'd say getting two or more people to approve a change is an exception). It'd be great to have a few people step up and bring new energy to this effort!
Csync for ownCloud Client 1.1.0 - A New Sync Engine
Along with today’s ownCloud 4.5 release, we released the new ownCloud Client 1.1.0 with a new syncing concept. This blog post will shed some light on the details. I apologize: it’s a long read.
Time Issues
ownCloud Client versions 1.0.x worked with csync’s traditional way of using file modification times to detect updates between the two repositories being synced. That works fine and conforms to our idea of ideally not using any metadata in syncing beyond what the file system provides anyway.
However, there is one drawback which we all know from daily life: if at least two parties sync on time, it is important that all clocks are set exactly the same way. Remember good crime movies, where a bank robbery always starts with all the gangsters adjusting their clocks? We have exactly the same situation in ownCloud’s syncing: everyone involved has to have the same time setting, otherwise the modification times of files cannot be compared reliably.
There are solutions for computers to set the exact time (like NTP), so in general that works. However, in real-life scenarios these are not reliable, because either people do not have them running on the system, or the daemon only updates the time once in a while and in the meantime the clock has already skewed too much.
Users reported problems with this all the time, and experts kept advising that we would never get around these problems unless we changed something fundamental and moved away from purely time-based syncing.
Well, we did that with our csync version 0.60.0 which is the sync engine for ownCloud Client 1.1.0.
A Unique Id
Now, every file and directory inside a sync directory has a unique Id associated with it. The idea is that the Id changes whenever the file changes. So in the sync process, the need for a file update in either direction can be computed by comparing the two Ids of the file: if the Id has changed on one repository, the file was changed there and needs to be synced to the other side.
The Ids are generated on the ownCloud server, and one challenge for the client is to always download the correct Id of a file. The Ids are just random tags for a file version; they are not derived from the file content as MD5 sums would be. Actually, we frequently got the advice to use MD5 sums or a similar approach that digests the file’s content to detect updates. That would have come in very handy, because it would mean comparing file contents directly and, more importantly, it is reproducible on either side. The client would also have been able to recalculate the MD5 sum of the local files, and would not have depended on a local database of Ids pulled from the server beforehand.
But we decided against hashes. Calculating MD5 sums is costly in terms of CPU and time, especially for large files. The CPU problem is small on clients, but not on servers to which a lot of clients connect. Even though the sums can be calculated during upload, the problem remains for cases where the server does not see the upload stream; think of the “mount my Dropbox” case.
For files on the ownCloud server, the Id is always updated when the file gets updated. On the client side, the last known Id of a file is kept in the client database. It is invalidated if the file’s modification time has changed in the meantime, in order to detect local changes.
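As a toy sketch (the ids and variable names here are made up, not real ownCloud internals), the per-file decision boils down to comparing the id stored in the client database against the one the server currently reports, and against the local state:

```shell
db_id="v1"       # id recorded in the client database at the last sync
server_id="v2"   # id the server currently reports for the file
local_id="v1"    # local id; invalidated when the file's mtime changes

if [ "$server_id" != "$db_id" ]; then
  echo "remote changed: download"
elif [ "$local_id" != "$db_id" ]; then
  echo "local changed: upload"
else
  echo "unchanged: nothing to do"
fi
```

With the ids above, the server's id differs from the stored one, so the file is downloaded.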
Change Propagation
Another remarkable change in the 1.1.0 client is that change events in the file tree propagate up to the top directory on the ownCloud server, i.e. if a file changes in a directory, the Id of that directory changes, as well as that of its parent directory, and so on.
That means that to detect whether a file tree has changed, it is enough to check the Id of the topmost directory. If it has changed, the client needs to dig deeper; but in the not-so-rare case that nothing has changed, that one call is enough to detect it. This dramatically lowers the server load, because instead of digging through the whole directory structure, as we did with the 1.0.x series, only a few requests are needed now.
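A toy sketch of this top-down check (again with made-up ids):

```shell
stored_top_id="etag-1"   # topmost directory id from the last sync
current_top_id="etag-1"  # id the server reports now

if [ "$current_top_id" = "$stored_top_id" ]; then
  echo "tree unchanged: one request was enough"
else
  echo "tree changed: recurse into subdirectories"
fi
```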
CSync and ownCloud for Success
These are very intrusive changes to csync. For example, we had to add two additional fields to the database, add code that builds a representation of the local file tree from the database, and make csync query the server for the file Ids when needed. Deep under the hood, the updater, reconciler and propagator code needed changes to work with the Ids. None of these changes have gone back to csync upstream yet.
To avoid conflicting with the upstream version of csync, we decided to rename our version to ocsync. But this is a temporary solution for the time being, until we catch up with upstream again. That will take a while until everything is sorted out, but we will work on it.
I am very excited about the new version of csync. But obviously there are other changes in ownCloud Client 1.1.0, which will be the subject of another blog post.
My Letter to Amazon
I am writing to ask you to make my Amazon Instant Video purchases available for download, DRM-free.
I am a big fan of your service, and in the last year I have cancelled my cable service and begun to rely heavily on digital purchases from your store to bring new content to my family.
However, as we have settled into this, some major disadvantages of your service have become clear:
1) Many of my devices (phones, tablets, etc) do not have support for viewing Amazon Instant Video content.
2) If I'm having a bad internet night, I cannot watch the content I purchased.
3) If I am making a long drive for a vacation, my son cannot watch any Amazon content I have purchased.
4) I cannot lend my content to friends who want to check it out for themselves, like I can with DVDs.
5) I cannot back up my content in case something catastrophic happens to your service.
I am aware of the existing download option. It is Windows-only and uses DRM, and is therefore useless to me. DRM'd files cannot be unlocked in the event that you turn off your authentication servers. And, obviously, they cannot be played on arbitrary devices.
Now, if there were good tools for removing the DRM, this would not be as big of an issue for me as a consumer. I would just remove it and go on my merry way.
Unfortunately, such tools do not exist. It would be far easier for me to pirate content I've already purchased from Amazon Instant Video in order to watch on my tablet, make backups, lend to friends, and have offline access.
But it seems inevitable to me that if I begin taking the risk to pirate content I've paid for, I will quickly find that it is easier to never pay for it in the first place. At that point, you will have lost me as a recurring customer.
I'm not trying to threaten you, but I think you should realize that I am becoming a dissatisfied customer of yours, and that the natural way to get the sort of access I want to the content I buy appears to be piracy.
I love that your Amazon MP3 service is DRM-free. I don't buy a lot of music, but I have very happily used your service for years when an album came out that I wanted. I have preferred it heavily when compared to iTunes and similar services.
Please follow the great example you set with Amazon MP3, and make Amazon Instant Video purchases available for download without DRM as well.
Trying to stay a loyal customer,
Sandy Armstrong