A tip for dealing with the first GSoC weeks.
The first GSoC weeks have begun, and many students are still busy with exams and other things. You are ambitious, of course, so you make promises to your mentor, and then you might not be able to follow through on them. You're too busy studying, or this family-and-friends thing gets in the way. Now what?
It is fine to make mistakes or miss a deadline...
Please understand that we get this! It is not a surprise, and you're not alone. The key here is to communicate with your mentors. That way, they know why you're busy and when you will be back. Not having time for something, even if you promised it, really is OK. When you have a job in the future, it will happen all the time that more urgent things come up and you can't meet a deadline. The key is that you TALK about it. Make sure people know.
Let me give you a short anecdote - something that didn't even happen that early in my career...
At some point early in my job at a new company, I was on a business trip and missed my train. It was quite stupid: I got out at the wrong station. The result was that I had to buy a new ticket, spending over USD 180. I was quite upset about it and afraid to tell my manager about my blunder. So I did the easiest thing: I just avoided talking to my boss at all. As he was in the US and I was in Europe, that was not hard at all... But, after three weeks of finding all kinds of excuses to get out of our regular calls, he gave me a direct call and said: "what the heck is going on?". I admitted the whole thing and, of course, he was quite upset. But not about the USD 180. That is nothing on the budget of his or any team in any company. The cost of me not talking to him, though, that he was serious about, and I had to promise to never, ever do that again.
... if you communicate about it
So what can you learn from my mistake? The rule, especially at the beginning of your career, is to over-communicate. Especially when it comes to new employees, many managers are anxious and worried about what is going on. Telling them often, even every day, how things are going and what you're doing is something they will never complain about. You can practice during GSoC: send a daily ping about the state of things to your mentor, even if it is "hey, I had no time yesterday, and won't have any today". A weekly, bigger report on what you worked on is also a very good habit to get going.
Understand that it is not unprofessional to miss a deadline or make a mistake, but it IS unprofessional if it comes as a surprise to others when they find out later on!
Especially if there's some kind of issue or you got stuck: you don't have to ask for help right away, though you should not wait too long either--a topic for another blog post. But it is important that management knows. It makes them feel in control and, believe me, the nightmare of every manager is to not be in control! If you do these things when you start working, I promise you: it will score you points with your boss and help your career.
Installing BitTorrent Sync on openSUSE
Adventures in D programming
I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering using D.
Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D – just my personal opinion from building one project with it.
Why choose D in the first place?
The previous AppStream generator was written in Python, which wasn’t ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again.
So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for the languages. Interfacing C code with Go was relatively hard – since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust, on the other hand, required the most time to learn, and since I only looked into it briefly, I still can't write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn't plan to use D (I was aiming at Go back then), but the language looked interesting. D had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc. Translating Python code into D was incredibly easy, and additionally an actively maintained gir-d-generator exists (I created a small fork anyway, to be able to link directly against the libappstream library instead of loading it dynamically).
What is great about D?
This list is just a huge braindump of things I had on my mind at the time of writing.
Interfacing with C
There are multiple things which make D awesome, for example interfacing with C code – and to a limited degree with C++ code – is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:
extern(C):
nothrow:
struct _mystruct {}
alias mystruct_p = _mystruct*;
mystruct_p mystruct_create ();
void mystruct_load_file (mystruct_p my, const(char) *filename);
void mystruct_free (mystruct_p my);
You can call them from D code in two ways:
auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();
This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a `string` as its first parameter can also be called like a member function of `string`.
Writing D bindings to existing C code is also really simple, and can even be automated using tools like dstep. Since D can also easily export C functions, calling D code from C is possible as well.
Getting rid of C++ “cruft”
There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it’s still valid. E.g. getting rid of the `#include` preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming.
Incredibly powerful metaprogramming
Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compiletime, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me headaches as much as C++ sometimes manages to do.
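To give a small taste of this, here is a minimal sketch of my own (not code from asgen): a regular expression compiled at compile time with `std.regex.ctRegex`, and a string mixin that pastes generated code into the module.

```d
import std.regex : ctRegex, matchFirst;

// The pattern is turned into specialized D code at compile time.
enum numberRe = ctRegex!(`[0-9]+`);

// A string mixin: the (compile-time) string is compiled as if it were
// written here in the source.
mixin("int answer = 42;");

void main()
{
    assert(matchFirst("abc123", numberRe).hit == "123");
    assert(answer == 42);
}
```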
Built-in unit-test support
Unit testing with D is really easy: you just add one or more `unittest { }` blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them.
The `unittest` scope is useful because you can keep the actual code and the tests close together, and it encourages writing tests and keeping them up to date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.
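As a small sketch of both features together (the function name is made up for illustration): a function with `in`/`out` contracts and an inline `unittest` block, which is compiled and run when building with `-unittest`.

```d
// A hypothetical function guarded by contracts.
int percentOf(int value, int total)
in { assert(total > 0, "total must be positive"); } // validates input
out (result) { assert(result >= 0); }               // validates output
body
{
    return value * 100 / total;
}

unittest
{
    // Lives right next to the code it tests.
    assert(percentOf(1, 4) == 25);
}
```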
Safe D
While D gives you the whole power of a low-level system programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed.
Unfortunately, `@safe` is not the default for functions though.
Separate operators for addition and concatenation
D exclusively uses the `+` operator for addition, while the `~` operator is used for concatenation. This may be a personal quirk, but I love that this distinction exists. It's nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning.
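A quick sketch of the distinction in practice:

```d
void main()
{
    assert(1 + 2 == 3);                      // + always means arithmetic
    assert("1" ~ "2" == "12");               // ~ always means concatenation
    assert([1, 2] ~ [3, 4] == [1, 2, 3, 4]); // works for arrays too

    int[] xs = [1, 2];
    xs ~= 3;                                  // ~= appends in place
    assert(xs == [1, 2, 3]);
}
```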
Optional garbage collector
D has an optional garbage collector. Developing in D without GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC though, having it active makes programming much easier.
Built-in documentation generator
This is almost a given for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a `unittest { }` scope to generate automatic examples for the usage of a function, which is pretty cool.
Scope blocks
The `scope` statement allows one to execute a bit of code when the function exits, fails, or succeeds. This is incredibly useful when working with C code, where a free statement needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error. Yes, we do have smart pointers in C++ and – with some GCC/Clang extensions – a similar feature in C too. But the scopes concept in D is much more powerful. See Scope Guard Statement for details.
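A minimal sketch of the three guard kinds, using plain C `malloc`/`free` as the resource to clean up:

```d
import core.stdc.stdlib : malloc, free;

void work()
{
    auto buf = malloc(128);
    scope (exit)    free(buf);                 // runs on ANY exit from this scope
    scope (failure) { /* runs only if an exception is thrown */ }
    scope (success) { /* runs only on a normal return */ }

    // ... use buf; no matter how we leave, it gets freed ...
}

void main()
{
    work();
}
```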
Built-in syntax for parallel programming
Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the “Programming in D” book.
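As a tiny illustration of what that chapter covers, `std.parallelism.parallel` turns an ordinary `foreach` over an array into one whose body runs across all CPU cores:

```d
import std.parallelism : parallel;

void main()
{
    auto squares = new int[](1000);

    // Each iteration of the loop body may run on a different worker thread.
    foreach (i, ref e; parallel(squares))
        e = cast(int)(i * i);

    assert(squares[10] == 100);
}
```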
“Pure” functions
D allows marking functions as purely functional, which allows the compiler to do optimizations on them, e.g. caching their return value. See pure-functions.
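A minimal sketch (function name made up): a `pure` function may not read or write global mutable state, and the compiler verifies this.

```d
// Result depends only on the argument, so the compiler is free to
// elide or reuse calls with the same input.
pure int square(int x)
{
    return x * x;
}

unittest
{
    assert(square(4) == 16);
}
```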
D is fast!
D matches the speed of C++ on almost all occasions, so you won't lose performance when writing D code – that is, unless you have the GC run often in a threaded environment.
Very active and friendly community
The D community is very active and friendly – so far I have only had good experiences, and I basically came into the community asking some tough questions regarding distro integration and ABI stability of D. The D community is very enthusiastic about pushing D, and especially its metaprogramming features, to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at forum.dlang.org.
What is bad about D?
Half-proprietary reference compiler
This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem.
For the reference D compiler, Digital Mars’ D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC) are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime is tuned to work with DMD first.
Since major Linux distributions can't ship DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC's Phobos.
Especially for Linux distributions, there is no way to say “use this compiler to get the best and latest D compatibility”. Additionally, if people can’t simply `apt install latest-d`, they are less likely to try the language. This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and people are likely to try out new languages, it’s really bad that the proprietary reference compiler is hurting D adoption in that way.
That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language's reference compiler.
UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:
The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.
Phobos (standard library) is deprecating features too quickly
This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this meant that it shipped with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences.
Furthermore, Phobos is sometimes removing deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping with a newer Phobos.
This is really annoying, and probably the biggest source of unhappiness I had while working with D – especially the documentation not matching the actual code is a bad experience for someone new to the language.
Incomplete free compilers with varying degrees of maturity
LDC and GDC have bugs, and for someone new to the language it's not clear which one to choose. Both LDC and GDC have their own issues at times, but they are rapidly getting better, and I only encountered some actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All those issues have been fixed in the meantime, but it was a frustrating experience. Some clear advice or explanation on which of the free compilers to prefer when you are new to D would be neat.
For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available. I assume this is due to the DRuntime/Phobos not being subjected to the FSF CLA, but I can’t actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not do that.
No ABI compatibility
D has a defined ABI – too bad that in reality, the compilers are not interoperable. A binary compiled with GDC can’t call a library compiled with LDC or DMD. GDC actually doesn’t even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate that problem further.
The D package manager, dub, does not yet play well with distro packaging
This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved.
The GC is sometimes an issue
The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working on asgen, which is a program creating a large amount of interconnected data structures in a threaded environment, I realized that the GC significantly slows down the application when threads are used (it also seems to use the UNIX signals `SIGUSR1` and `SIGUSR2` to stop/resume threads, which I still find odd). The GC also performed poorly under memory pressure, which got asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run after a large batch of these interconnected data structures was no longer needed solved this problem for most systems, but it would of course have been better not to need to give the GC any hints. The stop-the-world behavior isn't a problem for asgen, but it might be for other applications.
These issues are currently being worked on, with a GSoC project laying the foundation for further GC improvements.
“version” is a reserved word
Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it’s slightly annoying. The `version` keyword is used for conditional compilation, and needing to abbreviate it to `ver` in all parts of the code sucks a little (e.g. the “Package” interface can’t have a property “version”, but now has “ver” instead).
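To illustrate the clash (the struct and names below are made up for this sketch, not asgen's actual code):

```d
// "version" is taken by conditional compilation...
version (linux)
{
    enum platformName = "linux";
}
else
{
    enum platformName = "other";
}

// ...so a type describing a package can't call its field "version":
struct Package
{
    string name;
    string ver; // "string version;" would not compile
}
```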
The ecosystem is not (yet) mature
In general it can be said that the D ecosystem, while existing for almost 9 years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It's never anything major, and usually you can easily solve these issues and move on, but it's annoying to have these papercuts.
This is not something which can be resolved by D itself, this point will solve itself as more people start to use D and D support in Linux distributions gets more polished.
Conclusion
I like working with D, and I consider it to be a great language – the quirks in its toolchain are not bad enough to prevent writing great things with it.
At present, if I am not writing a shared library or something which uses a lot of existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can't run a GC), I would not recommend using D. Rust seems to be the much better choice then.
In any case, D's flat learning curve (for C/C++ people), paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and the helpful community, makes it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java. Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering replacing quite some Python code with D code. For very low-level stuff, C is IMHO still the better choice.
As always, choosing the right programming language is only 50% technical aspects, and 50% personal taste.
UPDATE: To get some idea of D, check out the D tour on the new website tour.dlang.org.
Mind the gap between platform and app
With the Open Source Event Manager (OSEM), one of the Ruby on Rails apps I hack on, we're heading down the road to version 1.0. A feature we absolutely wanted to have before making this step was easy deployment to at least one of the many Platform as a Service (PaaS) providers. We deemed this important for two reasons:
- Before people commit to using your precious little app, they want to try it. And this has to be hassle-free.
- Getting your server operating system ready to deploy Ruby on Rails applications can be tedious, too tedious for some people.
So I have been working on making our app ready for Heroku, which is currently the most popular PaaS provider for Rails (is it?). This was an interesting road, and this is my travelogue.
Storage in the public cloud
Storing files is incredibly easy in Rails; there are many ready-made solutions for it, like Paperclip, Dragonfly or CarrierWave. The challenge with Heroku is that, on their free plan, your app's storage is discarded the moment the app is stopped or restarted. This happens, for instance, on every deployment or when the app goes to sleep because nobody is using it.
And even though it's easy to store files in Rails, we in the OSEM team have long discouraged this in our app. We rather try to make it as easy as possible to reference things you have shared somewhere else. Want to show a picture of your event's location? Use the ones you share on Flickr or Instagram anyway. Embed a video of a talk? Just paste the YouTube or Vimeo link. Share the slides with your audience? SlideShare or Speaker Deck to the rescue!
OSEM commercials by Henne Vogelsang licensed CC BY 4.0
Still, there are some places left in our app where we upload pictures. Pictures we think conference organizers are not necessarily free to share on other platforms, like sponsor logos or pictures of sponsored lodgings. So to be able to use OSEM on Heroku, our file upload needed support for offloading the files to someone else's computer, a.k.a. *the cloud*. In the end I settled for the CarrierWave plugin of Cloudinary (pull request #970).
This means it's now as easy as configuring the cloudinary gem and making use of their free plan to shove off all the storage OSEM needs to them.
Storage was the first gap in OSEM that I closed, another piece of the puzzle was configuration.
Configuration in the environment
According to some clever people, it's too easy to mistakenly check your app's configuration file into your version control system. That's the reason your app's environment should provide all the settings. I'm not a big fan of putting stickers onto microwaves that say you can't use them to dry your cat. If people want to be stupid, let them.
But hey, 12-Factor is a thing, let's roll with it! So in pull request #900 I removed all traces of OSEM's YAML configuration file, and now all the settings happen in environment variables.
This was the second gap I had to close to be able to run our app on Heroku. Now all of a sudden things were falling into place, and some very interesting things emerged for OSEM.
Continuous Deployment
One of the reasons I went down this road was to make it easy for people to try out OSEM. Once you can run your app on Heroku, you can also integrate your GitHub repository and run a continuous deployment (every commit gets deployed right away). That made it possible for us to set up an OSEM demo instance for people to try, which always runs the latest code. All we had to do was use data from our test suite to populate the demo (pull request #982) and voilà...
OSEM demo by Henne Vogelsang licensed CC BY 4.0
Continuous Review
So, a continuous deployment of the latest code with some sample data. Does that sound useful to you as a free software developer? I think being a free software developer first and foremost means collaborating with varying people from all over. People I work with on a daily basis and people I have never had contact with before. Collaboration mostly means reviewing each other's changes to our code. That's pretty easy, as we have rules and tools for it in place. What is not so easy is doing the same for changes to the functionality, the user experience design, of our app.
In the OSEM team we sometimes attach a series of screenshots or animated GIFs to a GitHub pull request to convey changes to the user interaction and workflows, but this is usually no replacement for trying it yourself. Then, in the middle of me doing all of this, Heroku Review Apps happened. Review apps are instant, disposable deployments of your app that spin up automatically with each pull request.
OSEM heroku pipeline by Henne Vogelsang licensed CC BY 4.0
Now, once someone sends a pull request where I want to review the user experience design, I just press the 'Create Review App' button on Heroku, and a minute later I get a temporary instance populated with test data. Magic.
More things to come?
Another thing we might want to replace in the future is how we spin up developer instances. So far we use Vagrant, which starts your OSEM checkout in a virtual machine. But nowadays you have to have Docker containers in the mix, right? Let's see what the future brings.
All in all, I must say, it was a nice trip into the Platform as a Service world. Surprisingly easy to do, and even more surprisingly rewarding for the development workflow. What do you think?
Filing taxes online on Linux with the Citizen Digital Certificate
The system claims to support:
Mac OS X (Safari 5.1 or later)
Linux: Fedora 13 (Firefox 3.6 or later); Ubuntu 11.10 (Firefox 3.6 or later)
Because it uses Java and a Firefox add-on to achieve cross-platform support,
you must use the Firefox browser (Google Chrome no longer supports the Java plugin).
The exact distribution does not really seem to matter, though; the list above probably just names some common Linux systems.
This time I filed my taxes without any problems on 64-bit openSUSE Leap 42.1.
Linux users finally have a convenient way to file taxes. What a happy occasion!
1. First, you need a smart card reader that is supported by Linux.
Both my previous reader and this one were bought at a convenience store, so the chances of a random reader being supported should be quite high.
After plugging in the reader:
$lsusb
Bus 001 Device 004: ID 0bda:0169 Realtek Semiconductor Corp. Mass Storage Device
Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. USB-2.0 4-Port HUB
These two lines appeared. The new reader uses the same chip as my previous model,
so it is also supported by pcsc-lite.
Open a terminal and run pcscd as root.
2. Java support
Most distributions should already have this set up.
Open Firefox, enter about:plugins in the address bar,
and check whether a java-plugin or icedtea-plugin is listed.
If not, install one yourself.
3. Start filing
Open https://rtn.tax.nat.gov.tw/ircweb/index.jsp in Firefox.
After a series of warnings,
the site starts checking for the JVM plugin.
Allow IcedTea-Web to run when prompted in the top-right corner.
Next comes another warning, roughly saying that the digital signature has not been verified by a trusted source.
For a government agency, failing to get even this right is a bit outrageous,
but choose Yes and continue anyway!
Next is the login screen.
The system will prompt you to install the Chunghwa Telecom Citizen Digital Certificate plugin (HiPKIClient);
the tax filing system uses this add-on to access your card reader.
However... this, too, is an unverified add-on,
and Firefox 43 and later disables unsigned add-ons and blocks their installation by default.
Once again: government agencies, please do better.
For details on add-on signing, see here.
So we have to change a setting via about:config to install this add-on temporarily:
find xpinstall.signatures.required, double-click it
to change true to false,
then install HiPKIClient.
After restarting Firefox, log in again
and you will reach the main screen of the online tax filing system.
Choose option 2 to download your income data using the Citizen Digital Certificate,
so you don't have to enter it record by record.
There is a small bug here: clicking option 4 brings up a warning screen that mistakenly lists the data download as option 1.
After downloading the income data, it is basically a matter of checking and correcting each page,
clicking through to the next one, and you're done.
For complete step-by-step screenshots, see the photo album.
Online tax filing, step by step (photo album)
There is an error in the middle of the album: my original card reader could not read the card, so I went and bought a new one...
One final picture; these two agencies really need to step up.
Thursday: ownCloud at Open Tech Summit!
If you'd like to join, there are a number of free tickets available. Go to this website to register and use the code WELOVEOWNCLOUD.
See you there!
Danbooru Client 0.6.0 released
A new version of Danbooru Client is now available!
What is Danbooru Client?
Danbooru Client is an application to access Danbooru-based image boards (Wikipedia definition).
It offers a convenient, KF5 and Qt5-based GUI coupled with a QML image view to browse, view, and download images hosted in two of the most famous Danbooru boards (konachan.com and yande.re).
Highlights of the new version
- Support for width / height based filtering: now you can exclude posts that are below a specific width or height (or both)
- New dependency: KTextWidgets
Coming up next
Sooner or later I’ll get to finish the multiple API support, but given that there’s close to no interest for these programs (people are happy to use a browser) and that I work on this very irregularly (every 6-7 months at best), there’s no ETA at all. It might be done this year, perhaps the next.
Release details
Currently, there is only a source tarball. For security, I have signed it with my GPG key (A29D259B) and have also provided a SHA-512 hash of the release.
Can't use baloosearch for Chinese characters
There are pictures with Chinese characters in their file names in my Pictures folder (see fig 1),
but searching for those characters in Dolphin shows no results (see fig 2).
Searching for "png", however, finds these pictures (see fig 3),
so the files were indexed!
fig 1. Pictures folder
fig 2. Searching with Chinese characters shows nothing
fig 3. Searching for "png" shows these files
In https://bugs.kde.org/show_bug.cgi?id=333037,
Cjacker posted some patches for this in comment 25.
I rebuilt the baloo5 packages with these patches in my OBS home project:
home:swyear:baloo5
Until upstream fixes this problem,
it is a temporary workaround for Chinese search in Plasma 5.
At last, I can search for Chinese characters (see figs 4, 5 and 6).
fig 4. Searching in Dolphin with Chinese characters
fig 5. Searching in the start menu
fig 6. Searching in KRunner
The patched baloo5 packages can be found at
http://download.opensuse.org/repositories/home:/swyear:/baloo5/
After installing these packages, run "balooctl disable" and then "balooctl enable" in a terminal,
and wait a few minutes for the file indexing to finish.
When it's done, baloosearch can handle Chinese characters.
Baloo also powers Desktop Search and KRunner (Alt+F2), as you can see in figs 5 and 6.
Before patching baloo,
Desktop Search and KRunner showed nothing when searching with Chinese characters.
But there are still problems when searching for applications.
When searching for "edit" (see fig 7),
you can see some results under Applications (應用程式) and Desktop Search (桌面搜尋),
and some of the applications are named with "編輯" (which means "edit").
But when searching for "編輯", you can't find these applications;
only Desktop Search returns results (see fig 8).
fig 7. Searching for "edit"
fig 8. Searching for "編輯"
Installing Shutter
Shutter is not in the Leap 42.1 standard repositories,
so you have to install it by adding the required repositories manually.
#zypper ar obs://X11:Utilities x11
#zypper ar obs://devel:languages:perl perl-dev
#zypper ref
#zypper in shutter
If you want to use the 1-click install instead,
open this link with Firefox.
[Help Needed] FOSS License, CLA Query
- The project is a database (say, like MongoDB, Cassandra etc.). It will have a server piece that users can deploy for storing data. Though it is a hobby personal project as of now, I may offer the database as a paid, hosted solution in the future.
- There are some client libraries too, for providing the ability to connect to the above mentioned server, from a variety of programming languages.
- The client libraries will all be under the Creative Commons Zero license / public domain. Basically, anyone can do anything with the client library sources. The server license is where I have difficulty choosing.
- Anyone who contributes any source to the server software should reassign their copyrights and ownership of the code to me. By "me", I refer to myself as an individual and not any company. I should reserve the right to transfer the ownership in the future to anyone / any company. I may relicense the software in the future to the public domain, or sell it off to a company like SUSE, Red Hat, Canonical, or a company like Amazon, Google, Microsoft etc.
- Anyone who contributes code to my project should make sure that they have all the necessary copyrights to submit the changes to me and to reassign the copyrights to me. I should not be liable for someone else's contribution. If a contributor's employer has a sudden evil plan and wants to take my personal project to court (unlikely to happen, nevertheless), that should not be possible.
- Neither I nor the users of the software should be sued for patent infringement over code that was contributed by someone else. If a patent holder wants to sue me for code that I have written in the software myself, that is fine; I will find a way around it.
- Anyone should be free to take the server sources, modify them and deploy them on their own hardware/cloud, for their personal and/or commercial needs, without paying me or any of the contributors any money/royalty/acknowledgement.
- If they choose to either sell the server software or host it and sell it as a service (basically commercial uses), they must be required to open-source their changes into the public domain, unless they have written permission from me, at my discretion. For instance, if Coursera wants to use my database source after modifications, that is fine with me; but I would not want, say, Oracle to modify my software and sell the modified software/service without open-sourcing their changes. If someone is hosting and selling a service based on my software with modified sources, there is no easy way for me to prove the modification, but I would still like to have that legal protection.
The best license model that I could come up with for the above is: dual-license the source code under AGPLv3 and a proprietary license, and enforce a CLA so that contributions are only accepted after a copyright reassignment to me, with a guarantee that I have the right to change the license at a future time.
What is not clear to me, however, is the patent infringement and ownership violation related constraints and AGPL's protection in such disputes. Another option is the Mozilla Public License 2.0, but that does not seem to cover the hosting-as-a-service-and-selling-the-service aspect clearly, IMHO.
Do you, readers of the internet, have any better solution?
Are you aware of any other project using a license or CLA model that may suit my needs and/or is similar?
What else should I be reading to understand more?
Or should I lose all faith in licenses, keep the sources private, and release the binary as freeware instead of open sourcing it? That would suck.
Or should I just not bother about someone making proprietary modifications and selling the software/service, and release the software to the public domain?
Note: Of course, all this assumes that my one-hour-a-month hobby project makes it big, is useful to others, and someone may sue. In reality, the software may not be tried by even a dozen people, but I'm just romanticizing.

























