

openEMS packages

openEMS packages are available from the Science repository and the main Tumbleweed repository.

openEMS is a free and open electromagnetic field solver using the FDTD method. Matlab or Octave is used as an easy and flexible scripting interface.

It features:

  • A fully 3D graded mesh in Cartesian and cylindrical coordinates.
  • Multi-threading, SIMD (SSE) and MPI support for high-speed FDTD.

See the official site for more information: openems.de.

Sankar P

FOSS System Software in 2015

Prelude

About a year ago, I was playing around with Cassandra for a quick prototype at my then day job. It opened up the world of distributed systems to me and my curiosity was piqued. Audaciously, I decided to implement a simple distributed database, Keeri, to get a grasp of the fundamentals of implementing distributed databases. In the past, I had implemented a simple filesystem, which helped me immensely when I was working as a filesystem engineer. I have also been fascinated by the theory behind database internals ever since my college days, but had never got my hands dirty.

After a few weeks of work, I was able to implement a recursive-descent SQL parser, which analysed incoming SELECT queries and built a tree with the subqueries properly branched as sub-trees. I made a simple columnar store that appends data (via an API, as opposed to SQL) but without any atomicity guarantees. In short, it was a rudimentary, in-memory system that functioned decently. However, I was nowhere near the initial goal of implementing a distributed database.
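For readers unfamiliar with the technique, here is a toy recursive-descent parser in Python (a minimal sketch for illustration, not Keeri's actual code) showing the core idea: each grammar rule becomes a function, and a nested SELECT simply recurses into the same entry point, producing a sub-tree.

   # A toy illustration (not Keeri's actual code): each grammar rule becomes a
   # function, and a nested SELECT recurses into the same entry point,
   # producing a sub-tree.
   import re

   TOKEN_RE = re.compile(r"\(|\)|,|\*|\w+")

   def tokenize(sql):
       return TOKEN_RE.findall(sql)

   def parse_select(tokens):
       # SELECT <columns> FROM <source>
       assert tokens.pop(0).upper() == "SELECT"
       columns = parse_columns(tokens)
       assert tokens.pop(0).upper() == "FROM"
       return {"select": columns, "from": parse_source(tokens)}

   def parse_columns(tokens):
       columns = [tokens.pop(0)]
       while tokens and tokens[0] == ",":
           tokens.pop(0)
           columns.append(tokens.pop(0))
       return columns

   def parse_source(tokens):
       if tokens[0] == "(":          # subquery: recurse, yielding a sub-tree
           tokens.pop(0)
           subtree = parse_select(tokens)
           assert tokens.pop(0) == ")"
           return subtree
       return tokens.pop(0)          # plain table name

   print(parse_select(tokenize("SELECT a, b FROM (SELECT a, b FROM t)")))
   # {'select': ['a', 'b'], 'from': {'select': ['a', 'b'], 'from': 't'}}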

I realised that there were plenty of design choices in a distributed database implementation: architecture, replication, membership, consensus, CAP, and so on. I even took a Coursera course that helped me understand the basics in detail. As of today, I have enough confidence in my skills and knowledge to implement a distributed database that could serve as good teaching material, if not as production software. However, I have not written a single line of code for the project in the past seven months. Abandoning the project, even temporarily, hurts.

Yesterday, my daughter decided to wake me up after midnight. I spent the rest of the night wide awake, while she slept, thinking about why I have not made progress on Keeri. I realised that I have been overwhelmed by the number of things to do that are not core to the system. For example, after I decided that the database had to be NewSQL-based, I clearly needed a SQL parser. But there are a dozen parsing techniques (LL, LR, recursive descent, ANTLR, etc.). Understanding the pros and cons of each and finding the most suitable candidate is a non-trivial task. SQL parsing is just one component of the system. There are other components, such as the choice of data structures (based on read/write ratio, type of load, etc.). One approach is to proceed with the simplest choice for each component, with well-defined borders, so that individual components can be replaced later. By the time I completed the SQL-query-to-tree code, I felt exhausted, even before I had begun the core database and distributed-systems functionality.

Observations

From my past open source experience, I know the synergistic boost that developers experience in FOSS communities. It is always better to work in a like-minded team of developers than individually when tackling big problems. I started thinking about what other FOSS projects for distributed databases (or any other large-scale system software) were created in the last 5-6 years. A few that came to mind:

Hadoop: An umbrella of projects, initially started by Yahoo, including core projects such as HDFS and HBase and a laundry list of supporting projects. Most of the code is now under the Apache project, with plenty of companies sponsoring the development and using the projects.

Docker: The coolest kid in town. Initially started by Solomon Hykes, funded by dotCloud, as a side project. Arguably the most active project to date, used heavily by almost every tech company worth its salt. It has spawned a series of other projects too.

CoreOS: Linux rethought as a cloud-focused distro. Backed by a company of the same name. Founded by ex-SUSE and ex-Rackspace people, collaborating with Greg KH himself.

Redis: Started by antirez, funded by VMware, Pivotal and, most recently, Redis Labs. Probably the most used key-value database, challenged only by the older memcached.

Cassandra: Initially started by Facebook and later became an Apache project. Heavily used by companies like Netflix, Twitter, Apple, etc., even after Facebook moved away from it.

Kafka: Initially started by LinkedIn and later became an Apache project. Used by LinkedIn and almost every company today. Most of the original team that created the project have now moved to a new company named Confluent to work on the project full time.

CockroachDB: A project that claims to be the open source equivalent of Google's Spanner. Started by ex-Googlers. Development funded and managed by Cockroach Labs.

As I kept thinking about these (and a few other) projects in a state of semi-sleep, I had a eureka moment when I realised that all these projects, even though they are open source, began with company or investor funding. This is in complete contrast to the FOSS projects of the previous generation like GNU, Linux, GNOME, etc. It is a welcome change that developers are no longer scared of becoming C*Os and spending time on management or bootstrapping a company. Perhaps it is only today that we have companies like Zenefits (disclaimer: my employer ;) ) making it easy to start a company, so more developers find it easier to do so. VCs being in a bullish mood also helps.

However, I have one concern. Unless a project's usage explodes and it gains contributors from multiple companies, there is a high chance that the project may compromise on quality to accommodate a business need. For example, if Torvalds had been working for Google, wakelocks *might* have been merged into the kernel much earlier to suit a Google release cycle. Torvalds being a neutral outsider, without any direct commercial interest in any company, has helped Linux immensely. If a FOSS project is started with a backing company in place from day one, how heavily will the company's interests influence the design, features and review processes of the FOSS project?

As I thought more, I realised that most of this new system software is developed to address the pains of "as a service" providers. So unless there is a business case, it is perhaps difficult to create new, modern system software, as the era of one-size-fits-all is over. This also makes the previous concern, about the chief maintainer being company-neutral, irrelevant. Only when software is built, backed by a company, with real use cases and customers, rather than out of theoretical or intellectual curiosity, do we get live data. Personal pet projects may have to make do with machine-generated data, which may not be the best test suite for data-intensive software like databases.

Questions

To sum up, I wonder whether developers (or students) are still interested in developing, and still able to develop, FOSS projects in their own hobby time that could grow as big as Linux, without corporate backing for at least the first few years. What do you think?

Also, if you think it is possible: any recommendations for developers with family and personal commitments on how to spend time, outside the regular day job, to persist with pet FOSS projects without exhaustion? Are there any statistics available on contributor details for popular FOSS projects (similar to the kernel stats prepared by Greg KH and LWN)?

Any other aspects that I have missed ?

P.S.: I was sharing the gist of this post with a friend, who shrugged it off saying, "Get a job in a company which works on a project which appeals to you". However, it is not that simple, considering most of these young companies do not even have an office anywhere outside the developed world. Also, after a certain point in life, switching jobs is not trivial and depends on various other factors.


Use your distro's kernel in OBS

The Open Build Service has the nifty feature that you can tell it to use a specific kernel to boot the worker VMs that build your software. To use it, you don't need any special setup, just a package which contains a kernel and an initrd at these paths:

   /.build.kernel.kvm # used by KVM workers
   /.build.kernel.xen # used by Xen workers
   /.build.initrd.kvm
   /.build.initrd.xen

So you just need such a package, and you have to make sure it is installed in the VM using the VMinstall: tag in the project config.
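For example, a project config entry could look like the following (shown here with the kernel-obs-build package mentioned below; substitute the name of whatever package provides your kernel and initrd):

   VMinstall: kernel-obs-build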
If the build service worker script detects that such a kernel and initrd are present after preparing the VM, they will be used to boot the worker VM that finally builds your package or image. If they are *not* detected, then the kernel the worker server itself is running (usually a SUSE kernel) will also be used for the VM.

In the openSUSE Build Service instance, all "recent" SUSE distributions are configured this way: they use the kernel-obs-build package, which gets created automatically when building the kernel RPMs.

I am currently using a Build Service instance for cross-distribution package and image builds. The challenges of trying to build RHEL/CentOS 7 images with KIWI in OBS warrant at least one additional blog post, but one thing I noticed was that some of the KIWI steps, when done with a CentOS 7 userland, apparently also need a CentOS kernel; otherwise KIWI's parted calls, for example, exit with code 1 (without issuing an error message, by the way).
So I have built a kernel-obs-build package from the CentOS 7 kernel and configured my OBS instance to use it, which brought me quite a few steps closer to building CentOS images with KIWI in OBS.
The code (or rather: the spec files) to "convert" the CentOS kernel to an OBS kernel is at https://github.com/seife/kernel-obs-build; a short README on how to use it is included.

Note that right now this only works with KVM workers, as I was not able to get the worker code to boot the kernel correctly in a Xen VM, even though all the drivers are there. The reason is probably that the OBS worker scripts rely on some specifics of a Xen-specific kernel (e.g. the device names of the block devices passed through to the VM from the config, which do not hold for a generic PV-capable kernel).
But I guess this will improve soon: now that openSUSE has dropped the kernel-xen package, they will face the same issues, and hopefully someone will fix them ;)

Hi, how are you doing? Sorry you can’t get through. Just leave your name and your number and I’ll get back to you

No… I haven’t forgotten you! I think of you every day, every night and, if I’m honest, all the time. You and you and you and especially you who are reading these lines. This is going to be a short blog entry. I want you to know what you should start doing! Yes, just stop being social on the internet. Get out of your comfort zone and start spanking the monkey (oh… sorry, not spanking the monkey, this is a child-approved blog…) er… learning new stuff.

Start your maker project and learn how to do 3D with Blender (it’s a marvelous 3D application). After Blender there is no excuse, and it’s free (but remember, if you really like it, give something back). Are you more into CAD? Learn 3D CAD with FreeCAD (again, an amazing tool). Want to ride the IoT wave but don’t have too much money? Then get the shiny new tiny $5 Raspberry Pi Zero or $9 C.H.I.P. and make your Fritzing electric boogie with ease (and by the way, you can contribute those boards to the library. It’s open source!). Are you into music? How about doing some DJing with Linux? Mixxx just got its shiny 2.0 RC1 out. Are you more of a reading type and need something to manage your e-books? Calibre is here to help. Huh, so much to do, so little time!

Where to get them? openSUSE has RPMs for all of them; just learn to search for them on Packman or OBS. Sorry to say the Raspberry Pi Zero is currently not supported, but you can help add it to the ARM boards working with openSUSE; the same goes for the C.H.I.P. (if you have a Raspberry Pi 1/2, just get an image for it from the openSUSE ARM images and start hacking). You should learn how to add new repos in YaST2 and add the Packman repo for the new FreeCAD, Fritzing and Mixxx. Yes! Most of them also run on Windows and Mac OS X. Now smile on, thumbs up and get your groove on with the title song: De La Soul – Ring Ring Ring (Ha Ha Hey)

Vojtěch Zeisek

Kurz práce v příkazové řádce Linuxu nejen pro MetaCentrum 2016

Course of work in Linux command line not only for MetaCentrum 2016

Don’t be afraid of the command line! It is a friendly and powerful tool. The command line is practically identical in Mac OS X, BSD and other UNIX-based systems, not only in Linux. Basic knowledge of Linux is not a prerequisite. The course will be taught on Linux, but most of the points also apply to other UNIX systems like Mac OS X. Knowledge of Linux/UNIX is useful e.g. for working with molecular and other data. MetaCentrum is a service provided by CESNET giving access to huge computational capacity.

vojta

Sankar P

AWStruck

tldr: 

A long post about my experience of implementing quiz software at my college a decade ago, and about how easy things have become now thanks to AWS.

Prelude

In 2002 (iirc) (thirteen years ago, as of composing this post), when I was in college, we had an inter-collegiate technical symposium, where Online Quiz was one of the events. A Microsoft Visual Basic 6.0 application (VB6 being software I personally consider among the best ever developed) was developed in-house and installed on about 50 computers, where contestants from different colleges could come and take the test. However, as Murphy predicted, due to various virus issues, the software failed spectacularly. Some answers/responses got corrupted, the accumulation of responses from different machines proved faulty, the scoring went awry in some corner cases, and so on. Overall, the application turned out to be total chaos. However, since India is populous, we were able to throw more people at the problem and finish the event with a lot of manual effort, in spite of a few unhappy participants.

In the planning phase for the subsequent edition of the symposium two years later, a software development committee was formed. It would do all the software for the entire event (creating a website, developing Flash/SWiSH videos, software for the individual events, etc.). The quiz event had two rounds: a preliminary round in which all the participating colleges contested, and a final round to which the six (or probably more) top colleges from the previous round were selected. An eloquent person was made in charge of the quiz event. I proposed to him that we do the software for the preliminary rounds ourselves, instead of depending on the committee. The committee was already swamped with work and was happy to get rid of a piece that had a higher chance of failure. Some adventurous people (like Antony) expressed their interest in joining the project. Thus it all began.

The Adventure

Much to the amusement of my roommate Bala, I started by planning the architecture and design on paper (complete with UML diagrams, etc.), instead of starting with coding, as was the norm for us in those days. Much later I came across an interesting quote by Alan Kay: "At scale, architecture dominates material". Having learnt from the mistakes of the previous years, I made some decisions.

* The software should follow the web (client-server) model, which was getting popular. At the least, this was an excuse to learn some then-new technologies like JSP, JavaScript, Tomcat, etc.
* The server machine becomes a single point of failure for the entire system. It could also prove to be a performance bottleneck, as our machines all had a humongous 32 MB of RAM. There was one 64 MB machine in our lab, which I planned to use as the server. In our hostel, some had machines with a luxurious 128 MB of RAM, which I was planning to borrow if the need arose.
* The server, being the single point of failure, should not be susceptible to virus attacks. So we should experiment with installing Solaris or this thing called Linux (there was no Ubuntu then).
* The internet was a luxury: the entire college had access to it on about three computers, and only in the evenings. So anything that required too much internet access for development was automatically rejected.
* The software should scale, at any cost, to at least 200 parallel connections.
* We should regularly back up the sources on different machines, in case the development boxes suffered a virus attack. We had no idea about version control systems then.
* We would use MySQL/Oracle or some real database instead of writing to files. MS Access was automatically ruled out, as Visual Basic had already been eliminated. In hindsight, SQLite would have been an excellent choice.
* The quiz web page, when saved in the client browser, should save the file along with the answers chosen/typed.
* Each quiz session would last about 30 minutes. A username/password would be generated for each unique participant.

We developed the JSP webapp running in Tomcat in a few weeks. With the generous help of my classmates, we thoroughly tested the correctness of our scoring system. As with any manual system, it was prone to errors: a tester made a mistake in scoring and we broke our heads for a few hours trying to find a non-existent bug in our code. This testing also helped us get the load numbers for the current system, with about 30 concurrent users. We had written some performance-monitoring hooks into our code for this.

We survived multiple virus attacks during development, because of the distributed source backup techniques we had employed. At one stage, we even burnt our sources to a CD when the administrators decided to Norton Ghost all the hard disks in our lab with a fresh Windows XP image, to minimise the virus effects.

I learnt about the magical world of performance monitoring, database indexes, high availability, connection pools, etc. during this project. I learnt much more from this single experiment than from the almost half a dozen papers we had on software engineering, process management, quality assurance, etc., taught by lecturers with no real-world knowledge and questionable scoring practices. Some of the fascination I acquired with database engines has still not subsided.

Having finished the coding one week prior to the event, we focussed more on scaling and testing. I prepared a backup server, another high-RAM machine, in case our main server went kaput. Much to the jovial criticism of my friend Sangeeth, we tested our system the night before the event with 2000 parallel users, and it worked without breaking a sweat. That is a trivially small number by today's figures, but we were easily satisfied then with low numbers in both server performance (and salary). Almost all the front-end code was handwritten JavaScript with no frameworks or libraries (mostly because none existed, or we were not aware of them). I was satisfied with what we had done, irrespective of how the results might turn out the next day.

Having lost good sleep to the stress testing the previous night, I woke up late and missed the delicious Pongal for breakfast in our hostel. Ruing the missed breakfast earned a weird stare from the rest of the team. I rushed to the preliminary quiz event ahead of time and two of us did a final test on the day itself. We had planned on using some Rational test suites for automated testing but never got to that, thanks to all the virus-related frequent re-installs of the base operating system.

The participants came in numbers and attended the event. Surprisingly for us, a lot of people did not use the full half-hour duration and finished much earlier, even with negatively marked questions. The event chief too had a moment of doubt as to whether we had made the questions too easy. But looking at the instant results on the server and the high percentage of low marks taught us that many people had come to the event to have fun, not to seriously compete or win.

Before we could ruminate on that philosophical thought, a participant had a problem. Her network cable broke and she could not submit her quiz. I felt bad that I had not implemented auto-saving of responses as soon as people made a choice; I had intentionally avoided that to reduce load on the server. I was about to ask if she could take the test on a different machine, but the inimitable event head had the presence of mind to ask the participant to save the quiz on the same computer so that we could evaluate it offline. Antony did the scoring and that particular person turned out to be a topper. This incident taught me a lot about presence of mind and about how we should always plan for failure in computer systems, however thoroughly we test. Scalability, as expected, was never a problem.

After the event finished, our lab admin, Marshal, joked that we should start a company with this quiz software, as we had built it as generic survey software to which questions could be added. We laughed at the suggestion and moved on. The event was successful. Some of the software developed by the committee for other events was affected by the recurring virus problem. But I went and slept like a log on a temporary bed made of three office chairs, next to my classmate Saktheesh, who was working on a closing video for the event.

The Present

The long story above is not just to narrate my/our work, but also to highlight how much more approachable the programming and technology landscape has become. Quiz/questionnaire software can be implemented today (2015) in probably a few hours, thanks to the large number of frameworks (such as Rails, Django, etc.). In fact, most tutorials have better code, which you can merely copy and paste, than what we implemented a decade ago. The most striking thing today, however, is not the story of coding, but the story of deployment.

Anyone with an internet connection, a basic course on programming and decent googling skills can program a service easily today. What is even more fascinating is that such software can very easily be deployed on the internet and served to the whole world, complete with a domain name, auto-scaling, DoS prevention, etc., in just a few clicks. This is all made possible by Amazon Web Services. There are other players like Google, Heroku, etc., but AWS is way ahead of any of them and provides more services. The reach of AWS is what made me choose the title of this blog post.

AWS has done a great deal to spur the startup ecosystem. The social impact of AWS is much greater than what Google did for online ads or Microsoft did for PCs. Disruptive companies like Airbnb, Slack and Netflix (which was just an online video rental service 7 years ago) can exist today only because their devops and the installation and maintenance of machines could be outsourced to AWS. They could not have grown into such 800-pound gorillas in such a short time if the AWS infrastructure had not been available. Sure, there are companies like Uber and WhatsApp that do not use AWS, but they would not have been funded so easily if not for the startup scene that formed with AWS as its backbone.

The Future

I have been visiting various buildings in Bangalore trying to find office space for Zenefits India, as I am the first engineer here. All the places have a server room, which is not used by any of the startups. Almost all the startups use Macs as developer machines and have their deployment servers in AWS (or some public cloud). The office spaces of Bangalore have not caught on to the trend. We are hiring, by the way, so if you consider yourself an extremely good engineer and one of the best at what you do, do apply.

Most of the new services offered by Amazon, such as AWS Lambda, DynamoDB, etc., and also things like containerisation, have made the development of scalable applications easier. Developers need not worry about failover systems, HA setups, clusters, etc. any more. I wonder what kind of impact this will have on the job market, and how long it might take for positions like MySQL admins, sysadmins, devops engineers and DBAs to become as obsolete as, say, mainframe programmers are considered today (2015). Perhaps it will not be soon, but it is very much possible. Ubiquitous applications like SAP, Office, etc. are also now cloud-first, and they will only become more cloud-focused in the future.

I wonder how much system software research will be affected in the long term. Many of the bright young minds of today (students from prestigious colleges and universities) are working on web apps, joining startups or starting their own companies, instead of working on projects with high entry barriers like the Linux kernel, LLVM, etc. (at least in India). Perhaps we would have started the quiz project as a company (somewhat like SurveyMonkey) if we had had enough exposure then. I may not have done so, but some student with business acumen would have.

There are very interesting research problems in distributed systems, spanning both databases and OSes. Most present-day systems are just distributed systems constructed on top of Linux/POSIX systems. However, there is potential for a DOSIX API (along the lines of POSIX) designed purely for large-scale, cross-geo distributed systems. It will be interesting to see what kind of research happens in this direction. In the recent past, we got a new distributed consensus algorithm, Raft, after decades of using Paxos. More such re-inventions are bound to happen soon, maybe for novel things like non-blocking distributed garbage collection.

darix

FFI and Requires

FFI is a method of interfacing with native libraries that is becoming more and more popular in many scripting languages. Unlike with native extensions, though, nothing actually links against the shared library. As a result, our Requires generator for shared libraries doesn't kick in and we get no Requires on the shared library package.
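To illustrate the point (a generic example, not taken from the original post): with FFI the library is looked up by name at runtime, so the built package carries no ELF-level dependency that the automatic Requires generator could pick up. A minimal Python ctypes sketch of such a runtime lookup:

   # The shared library is resolved by name at runtime; nothing is linked at
   # build time, so RPM's automatic shared-library Requires never see it.
   import ctypes
   import ctypes.util

   libm_name = ctypes.util.find_library("m")   # e.g. "libm.so.6" on glibc systems
   libm = ctypes.CDLL(libm_name)

   libm.cos.restype = ctypes.c_double
   libm.cos.argtypes = [ctypes.c_double]
   print(libm.cos(0.0))                        # 1.0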

To make matters worse, because of the shared library policy, the soversion depends on which distro you build against.


Linux Presentation Day + Leap 42.1 release party in Munich

On Saturday we had the openSUSE Leap 42.1 release party in Munich, which I announced a couple of days ago. We had around 20 participants: about 10 openSUSE users and about 10 GNU/Linux users from the Linux Presentation Day – people who had just started using Free Software and wanted to know more about openSUSE, the GNU project and Open Source in general, and of course to celebrate the new release with us 🙂

At the beginning, though, I had no idea where we could meet in Munich. On Wednesday I asked on our German ML about a location, and Marcus suggested the Linux Presentation Day. Two minutes later I sent an email to the Linux Presentation Day organizers and asked about a separate room with a beamer and power sockets. We got everything we asked for. Thanks a lot for the collaboration!

After that, on Friday (when I was sure about the location and the room was reserved for us), I went to Nuremberg to pick up openSUSE promotional material like USB flash sticks, DVDs, stickers, green “Leap” T-shirts and openSUSE beer. It’s not that far from Munich. At about half past seven, I think, I was at the SUSE office, and Richard handed over all the “release party stuff” (last time, when I organized the openSUSE 12.1 release party in Göttingen, I got all this stuff by post, with the exception of the beer, of course).

I gave a talk about the openSUSE project in general: it was targeted primarily at those who had never heard about OBS, Leap or openQA. I tried to emphasize the role of the community in the openSUSE project.
I got many questions about systemd, SUSE's influence on openSUSE and the quality of the “Enterprise Core” part which will be used in Leap. I enjoyed talking with many of those who showed up, and the main feedback I took away from those conversations was this:
if you are going to invite “everybody” to your release party, you don't need to talk so much about the infrastructure or development model of openSUSE, I guess. That is important and interesting for developers and Free Software evangelists, maybe, but not for users who are still unsure about contributing. For such users it matters more how good this version is as a desktop system than how easy it is to create a submit request in OBS, or which programming language they should use to implement tests for openQA, or something like that.

By the way, at the Linux Presentation Day we met a journalist from linux-user.de, so I think my post will not be the only one about this event 🙂

I want to thank Richard and Doug for the openSUSE stuff, the Linux Presentation Day organizers for hosting us in the VHS building and… thanks to all who joined us! See you next time and have a lot of fun 🙂

more photos.

Klaas Freitag

ownCloud Chunking NG Part 3: Incremental Syncing

This is the third and final part of a little blog series about a new chunking algorithm that we discussed in ownCloud. You might also be interested in reading the first two parts, ownCloud Chunking NG and Announcing an Upload.

This part sketches a couple of ideas on how the new chunking could be useful for a future incremental sync (also called delta sync) feature in ownCloud.

In preparation for delta sync, the server could provide another new WebDAV route: remote.php/dav/blocks.

For each file, remote.php/dav/blocks/file-id exists as long as the server has valid checksums for the blocks of the file, which is identified by its unique file id.

A successful reply to remote.php/dav/blocks/file-id returns a JSON-formatted data block with the byte ranges and the respective checksums (plus the checksum type) over the data blocks of the file. The client can use that information to calculate which blocks of data have changed and thus need to be uploaded.

If a file was changed on the server and the checksums are consequently no longer valid, access to remote.php/dav/blocks/file-id returns the 404 “Not Found” status code. The client needs to be able to handle missing checksum information at any time.

The server receives the checksums of the file blocks along with the upload of the chunks from the client. The server is under no obligation to calculate checksums for data blocks that came in by means other than through the clients, but it can if there is capacity.

To implement incremental sync, the following high-level processing could be implemented (a rough client-side sketch follows the list):

  1. The client downloads the blocklist of the file: GET remote.php/dav/blocks/file-id
  2. If the GET succeeded: the client computes the local blocklist and determines the changed blocks.
  3. If the GET failed: all blocks of the file have to be uploaded.
  4. The client sends MKCOL /uploads/transfer-id, as described in an earlier part of this series.
  5. For blocks that have changed: PUT the data to /uploads/transfer-id/part-no
  6. For blocks that have NOT changed: COPY /blocks/file-id/block-no /uploads/transfer-id/part-no
  7. Once all blocks have been handled, either uploaded or copied: the client sends MOVE /uploads/transfer-id /path/to/target-file to finalize the upload.
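
Below is a rough, hypothetical client-side sketch of these steps in Python, using the requests library. The base URL, the JSON shape of the blocklist reply and the fixed block size are assumptions made for illustration only; they are not part of any agreed specification.

   # Hypothetical sketch of the incremental-sync steps described above.
   # Assumed: the blocklist reply is a JSON array of {"offset": ..., "checksum": ...}
   # entries, one per block, and blocks have a fixed size.
   import hashlib
   import requests

   BASE = "https://cloud.example.com/remote.php/dav"   # assumed server URL
   AUTH = ("user", "password")
   BLOCK_SIZE = 10 * 1024 * 1024                        # assumed fixed block size

   def local_blocklist(path):
       # Compute (offset, checksum) pairs for the local file.
       blocks, offset = [], 0
       with open(path, "rb") as f:
           while True:
               data = f.read(BLOCK_SIZE)
               if not data:
                   break
               blocks.append((offset, hashlib.sha1(data).hexdigest()))
               offset += len(data)
       return blocks

   def incremental_upload(path, file_id, transfer_id, target_url):
       # Step 1: try to fetch the server-side blocklist (404 means "upload everything").
       r = requests.get(f"{BASE}/blocks/{file_id}", auth=AUTH)
       server_blocks = r.json() if r.status_code == 200 else None

       # Step 4: announce the upload.
       requests.request("MKCOL", f"{BASE}/uploads/{transfer_id}", auth=AUTH)

       for part_no, (offset, checksum) in enumerate(local_blocklist(path)):
           unchanged = (server_blocks is not None
                        and part_no < len(server_blocks)
                        and server_blocks[part_no]["checksum"] == checksum)
           if unchanged:
               # Step 6: server-side COPY of the unchanged block, no data transfer.
               requests.request("COPY", f"{BASE}/blocks/{file_id}/{part_no}",
                                headers={"Destination": f"{BASE}/uploads/{transfer_id}/{part_no}"},
                                auth=AUTH)
           else:
               # Step 5: upload only the changed block.
               with open(path, "rb") as f:
                   f.seek(offset)
                   requests.put(f"{BASE}/uploads/{transfer_id}/{part_no}",
                                data=f.read(BLOCK_SIZE), auth=AUTH)

       # Step 7: assemble the final file.
       requests.request("MOVE", f"{BASE}/uploads/{transfer_id}",
                        headers={"Destination": target_url}, auth=AUTH)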

This would be an extension to the previously described upload of complete files. The PROPFIND semantics on /uploads/transfer-id remain valid.

Depending on the number of unchanged blocks, this could be a dramatic cut in the amount of data that has to be uploaded. More information has to be collected to find out how much that is.

Note that this is still at the idea and to-be-discussed stage, and not yet an agreed specification for a new chunking algorithm.

Please, as usual, share your feedback with us!


openSUSE 42.1 Leap :: Release Party in Munich

openSUSE Leap 42.1 was released about a week ago and it is looking good. Now we have a community enterprise system. I would like to thank everyone who contributes to the openSUSE project and helps to make it better.

Of course, we should have an openSUSE release party! The openSUSE community hasn't had a release party in Munich for a while (in the time I have lived in Munich, I think we have never had one here).

So, what is a release party about? Well… the usual: Linux geeks get together, talk about the features in the new openSUSE version and the news in the Free Software world, drink beer and… of course have a lot of fun 😉

A few days ago I started a discussion about the release party with the Linux Presentation Day organizers, and it seems the problem with the location is now solved. We will get a small meeting room there with power sockets and a beamer. That is exactly what we need. I also asked Doug and Robert about some “promotional material”, openSUSE beer and T-shirts. Tomorrow (Friday) I’m going to go to the SUSE office in Nuremberg to pick it up (the beer cannot be entrusted to just anybody).

Do you want to be a part of it?
* November 14, Saturday
* I start my presentation at 12:00 (noon). I’m going to give a presentation about OBS, Leap and the openSUSE project in general.
* vhs-Zentrum, Münchner Str. 72, Eingang rechts, 85774 Unterföhring
* Don’t forget to bring your good mood and friends 😉

Everybody is very welcome! If you have any questions about openSUSE, the GNU project or Free Software, feel free to come and ask.