Testing the untestable
Admit it: how many times have you seen “software from this branch is completely untested, use it at your own risk” when checking out the latest code of a FOSS project? I bet you have, many times. For any reasonably modern project, this is not entirely true: Continuous Integration and automated testing are a huge help in ensuring that the code builds and at least does what it is supposed to do. KDE is no exception, thanks to build.kde.org and a growing number of unit tests.
Is it enough?
This, however, does not cover functional testing, i.e. checking whether the software actually does what it should. You wouldn’t want KMail to send kitten pictures as a reply to a meeting invitation from your boss, for example, and you might want to test that your office suite starts and is actually able to save documents without crashing. This is something you can’t test with traditional unit testing frameworks.
Why does this matter to KDE? Nowadays, the dream of “always summer in trunk”, as proposed 8 years ago, is getting closer, and there are several ways to run KDE software directly from git. However, beyond the strategy described above, no additional testing is done.
Or, should I rather say, there wasn’t.
Our savior, openQA
Those who use openSUSE Tumbleweed know that even if it is technically a “rolling release” distribution, it is extensively tested. That is made possible by openQA, which runs a full series of automated functional tests, from installation to actual use of the desktops shipped by the distribution. The recently released openSUSE Leap has also benefited from this testing during the development phase.
“But, Luca,” you would say, “we already know about all this stuff.”
Indeed, this is not news. The big news is that, thanks mainly to the efforts of Fabian Vogt and Oliver Kurz, openQA is now also testing KDE software from git! This works by feeding the Argon (Leap based) and Krypton (Tumbleweed based) live media, which are rebuilt roughly daily, to openQA and running a series of specific tests.
You can see here an example for Argon and an example for Krypton (note: some links may go dead as tests are cleaned up, and will be adjusted accordingly). openQA tests both the distro-level parts (the console test) and KDE-specific operations (the X11 test). The latter checks the ability to launch a terminal, runs a number of programs (Kate, Kontact, and a few others), and does some very basic tests with Plasma as well.
Is it enough to test the full experience of KDE software? No, but it is a solid foundation for more automated testing to spot functional regressions: during the openSUSE Leap 42.2 development cycle, openQA found several upstream issues in Plasma, which were communicated to the developers and promptly fixed.
Is this enough for everything?
Of course not. Automated testing only gets you so far, so this is not an excuse for being lazy and not filing those bug reports. Also, since the tests run in a VM, they won’t be able to catch issues that only occur on real hardware (multiscreen, compositing). But it is surely a good start to ensure that at least obvious regressions are found before the code is shipped to distributions and then to end users.
What needs to be done? More tests, of course. In particular, Plasma regression tests (handling applets, etc.) are likely needed. But as they say, every journey starts with a single step.
3 alternative reasons why you should test Nextcloud 11 Beta
But I actually have some more reasons to test. You see, Nextcloud is one of the tools we need to keep our democracy working. As Frank notes on his home page:
"Privacy is the foundation of democracy"

And he is completely right. So, here are three different reasons why you should test (and help improve) Nextcloud:
1. The USA is making a massive swing towards even more spying
Obama has done nothing to curb the growth of the NSA and the scope of its operations. Secret laws spiked under his watch. Many of the folks about to be put in power by President-elect Trump favor more spying, including on US citizens, expansion of the NSA, a crackdown on whistleblowers and more. Trump's pick for CIA director calls for Snowden's execution. For what, I can only guess, must be giving proof of illegal government spying to dangerous terrorists like the Washington Post and the Guardian, who proceeded to win a Pulitzer prize by irresponsibly disclosing this information to the US public.

In general, as somebody who changes his stance on hugely important and complicated issues like torture in under an hour, it is impossible to predict what Trump will do with the most powerful spying agency in the world under his control, but his appreciation for dictatorial figures like Kim Jong Il and Putin gives plenty of cause for concern.
2. Britain isn't doing much better
I wrote about the Snoopers' charter just some days ago. This piece of legislation goes further than any earlier spying law: it allows not only passive spying but also actively hacking citizens' devices.

3. Nor is Europe
The UK is not alone. Since Snowden, Europe has complained a bit about the NSA but seems to simply follow suit rather than doing anything about it. Germany is even introducing a bill that will allow spying on foreign journalists.

Help out!
So, how can you help? Well, test Nextcloud 11 Beta, obviously. Help others to use it, get them involved. But it goes beyond Nextcloud: promote the use of, and help improve, tools like Tor, Signal and others, or democracy is screwed.
Watching org.libelektra with Qt
libelektra is a configuration library and tool set. It provides a great many capabilities. Here I’d like to show how to observe, from outside the actual application, data model changes caused by key/value manipulations inside a user desktop session. libelektra broadcasts changes as D-Bus messages. The Oyranos project will use this method to sync the settings views of GUIs like qcmsevents, Synnefo and KDE’s KolorManager with libOyranos and its CLI tools in the next release.
Here is a small example of connecting to the org.libelektra interface over the QDBusConnection class with a class callback function:
Declare a callback function in your Qt class header:
public slots:
    void configChanged( QString msg );
Add the QtDBus API in your sources:
#include <QtDBus/QtDBus>
Wire the org.libelektra interface to your callback in, e.g., your Qt class's constructor:
if( QDBusConnection::sessionBus().connect( QString(), "/org/libelektra/configuration",
                                           "org.libelektra", QString(),
                                           this, SLOT( configChanged( QString ) ) ) )
    fprintf( stderr, "=================== Done connect\n" );
The org.libelektra signals then arrive in your callback:
void Synnefo::configChanged( QString msg )
{
fprintf( stdout, "config changed: %s\n", msg.toLocal8Bit().data() );
}
As the number of messages is not always known in advance, it is useful to treat the first message as a ping and update after a short timeout. Here is a more practical elaboration:
// init a gate keeper in the class constructor:
acceptDBusUpdate = true;
void Synnefo::configChanged( QString msg )
{
// allow the first message to ping
if(acceptDBusUpdate == false) return;
// block more messages
acceptDBusUpdate = false;
// update the view slightly later and avoid trouble
QTimer::singleShot(250, this, SLOT( update() ));
}
void Synnefo::update()
{
// clear the Oyranos settings cache (Oyranos CMS specific)
oyGetPersistentStrings( NULL );
// the data model reading from libelektra and GUI update
// code ...
// open the door for more messages to come
acceptDBusUpdate = true;
}
The above code works for both Qt4 and Qt5.
Britain’s Snoopers’ charter threatens your privacy
(pic from the ZDNet article)
An attack on privacy
There is a global siege on privacy. Governments all over the world have introduced legislation (sometimes secret) which forces email, internet or data storage providers to track what you do and make that data available to their governments. This, of course, also means that third parties who gain access to the storage systems can see and abuse it. And because so many of us have put so much of our data at just a few providers, we're at great risk, as events like last week's shutdown of hundreds of Google accounts showed.

While Google, Dropbox and others lure customers in with 'free' data storage and great online services, governments benefit from centralized data storage as it makes it easy for them to hack in or demand data from these companies.
Why this surveillance?
While governments usually claim they need access to this data to find terrorists or child pornography, experts point out that it is not helpful at all. As multiple experts (even internal ones) put it, growing the haystack makes it harder to find the needle. Intelligence agencies are swamped with data, and nearly every terrorist attack in western states over the last decade took place despite the agencies having all the information they would have needed to prevent it. The Paris attackers, for example, coordinated their attack using plain SMS messages. The Guardian thus rightly points out that:

"Paris is being used to justify agendas that had nothing to do with the attack"

which has become a familiar refrain after nearly every terrorist attack.
Indeed, we all know the argument "But you have nothing to hide, do you?", and indeed, we probably don't. But some people do, so they'll try to avoid being seen. Making that illegal won't change their behavior...
And as Phil Zimmermann, the inventor of PGP encryption, pointed out:
"When privacy is outlawed, only outlaws will have privacy"
So not terrorists. Then what?
Experts agree that the vast majority of these surveillance and anti-privacy laws have little or no effect on real criminals. The crime syndicates, corrupt politicians and large corporations evading taxes and anti-trust/health/environmental laws DO have something to hide, and thus they would use encryption or avoid surveilled communication methods even if encryption were outlawed.

However, ordinary citizens, including grass-roots local activists, charitable organizations, journalists and others, who DO have nothing to hide, would be surveilled closely. And with that information, the real criminals mentioned earlier - crime syndicates, corporations or corrupt politicians - would have weapons in hand to keep these citizens from bothering them. Whistleblowers can be found out and killed (as in Mexico), journalists can be harassed and charged for trivial transgressions (as was recently done at the US pipeline protest) and charities can be extorted.
What can we do?
Luckily, there are initiatives like the Stanford Law School's Crypto Policy Project, which aims to train, for example, journalists in the use of encryption. Tools and initiatives like Signal, PGP email encryption, Let's Encrypt and Nextcloud give users the ability to protect themselves and their loved ones from surveillance. More importantly, they at the same time make mass surveillance harder and more costly to conduct.

There is nothing wrong with governments targeting criminals with surveillance, but just vacuuming up all data of all citizens that might, some day, be used is a massive risk for our democracy. We all have a responsibility to decentralize and use tools to protect our privacy, so those who need it (press, activists and others) have a place to hide.
Details on website migration and plans for the future
With my last post I promised to give some more detailed information on the migration of the crowbyte website and blog, as well as my plans for the future of crowbyte.org, and I am living up to that promise in this post.
Migration details
After over half a year of absence I had the time to rethink the direction in which the blog and website have been heading lately.
I used Nikola as a page generator for my blog and websites. This was a decision which I made after a lot of consideration and testing of av...
Linux did not win, yet
Responsive HTML with CSS and JavaScript
In this article you can learn how to make a minimalist web page readable on different readers, from larger desktop screens to handhelds. The ingredients are HTML, CSS and a little JavaScript. The goals for my home page are:
- most of the layout resides in CSS in a stateless way
- minimal JavaScript
- on small displays – single column layout
- on wide format displays – division of text in columns
- count of columns adapts to browser window width or screen size
- combine with markdown
CSS:
h1,h2,h3 {
font-weight: bold;
font-style: normal;
}
@media (min-width: 1000px) {
.tiles {
display: flex;
justify-content: space-between;
flex-wrap: wrap;
align-items: flex-start;
width: 100%;
}
.tile {
flex: 0 1 49%;
}
.tile2 {
flex: 1 280px;
}
h1,h2,h3 {
font-weight: normal;
}
}
@media (min-width: 1200px) {
@supports ( display: flex ) {
.tile {
flex: 0 1 24%;
}
}
}
The content in class="tile" is shown as one to four columns. tile2 has a fixed width and picks its column count by itself. All flex boxes behave like one normal column. With @media (min-width: 1000px) a bigger screen is assumed. Very likely there is an overlapping width range for bigger handhelds, tablets and smaller laptops, but the layout works reasonably: it performs well when shrinking the web browser on a desktop or viewing fullscreen, and remains well readable. Expressing all tile styling in flex: syntax helps keep compatibility with layout engines that do not support flex, such as dillo.
For reading on high-DPI monitors and on small devices it is essential to set the font size properly. Update: Google and Mozilla recommend a meta “viewport” tag to signal browsers that the page is prepared to handle scaling properly. No JavaScript is needed for that.
<meta name="viewport" content="width=device-width, initial-scale=1.0">
[Outdated: I found no way to do that in CSS so far. JavaScript:]
function make_responsive () {
if( typeof screen != "undefined" ) {
var fontSize = "1rem";
if( screen.width < 400 ) {
fontSize = "2rem";
}
else if( screen.width < 720 ) {
fontSize = "1.5rem";
}
else if( screen.width < 1320 ) {
fontSize = "1rem";
}
if( typeof document.children === "object" ) {
var obj = document.children[0]; // html node
obj.style["font-size"] = fontSize;
} else if( typeof document.body != "undefined" ) {
document.body.style.fontSize = fontSize;
}
}
}
document.addEventListener( "DOMContentLoaded", make_responsive, false );
window.addEventListener( "orientationchange", make_responsive, false );
[The above JavaScript carefully checks for various browser attributes and scales the font size to compensate for small screens and make the page readable.]
The above method works in all tested browsers (Firefox, Chrome, Konqueror, IE) except dillo, and on all platforms (Linux/KDE, Android, WP8.1). The meta tag method additionally works better for printing.
Below is some markup to illustrate the approach.
HTML:
<div class="tiles">
  <div class="tile"> My first text goes here. </div>
  <div class="tile"> Second text goes here. </div>
  <div class="tile"> Third text goes here. </div>
  <div class="tile"> Fourth text goes here. </div>
</div>
In my previous articles you can read about using CSS3 for Translation and Web Open Font Format (WOFF) for Web Documents.
CSS3 for Translation
Years ago I used a CMS to bring content to a web page. But with evolving CSS, markdown syntax and comfortable git hosting, publication of smaller sites can be handled without a CMS. My home page is translated, so I wanted to express page translations in a stateless language. The ingredients are simple. My requirements are:
- stateless CSS, no javascript
- integrable with markdown syntax (html tags are ok’ish)
- default language shall remain visible, when no translation was found
- hopefully searchable by robots (Those need to understand CSS.)
CSS:
/* hide translations initially */
.hide {
display: none
}
/* show a browser detected translation */
:lang(de) { display: block; }
li:lang(de) { display: list-item; }
a:lang(de) { display: inline; }
em:lang(de) { display: inline; }
span:lang(de) { display: inline; }
/* hide default language, if a translation was found */
:lang(de) ~ [lang=en] {
display: none;
}
The CSS uses the display property of the element matched by the :lang() selector. However, the selectors for the different display: types are somewhat long, not as short as I would like.
Markdown:
<span lang="de" class="hide"> Hallo _Welt_. </span>
<span lang="en"> Hello _World_. </span>
Even so, the plain markdown text does not look as straightforward as before, but it is acceptable IMO.
Hiding the default language uses the sibling combinator E ~ F and selects an element containing the lang="en" attribute. Matching elements are hidden (display: none;). Here this is the default-language string “Hello _World_.” with the lang="en" attribute. This approach works fine in Firefox (49), Chrome (54), Konqueror (4.18, khtml & WebKit) and WP8.1 with Internet Explorer. Dillo (3.0.5) does not show the translation, only the English text, which is correct as a fallback for an engine without :lang() support.
During my search I found approaches for content swapping with CSS: :lang()::before { content: xxx; }. But those were not well accessible. Comments and ideas welcome.
foxtrotgps: not suitable for spacecraft navigation
Conversations with self while "Learning Reactjs"
> Go with react. That is what all the cool kids are using. Also, something like “Angular 2 continues to put ‘JS’ into HTML. React puts ‘HTML’ into JS.” sounds geeky and logical.
Okay. Let me start with this react. Where do I even begin? Seems very complex.
> Alright. There is this create-react-app, which was introduced by Facebook to make it easy to begin, so that you do not have to break your head over gulp, grunt, node, etc. and their magical version incompatibilities.
I started with this create-react-app and went a little further. I can create various components and render them, but how do I get various views/components to interact, to form a workflow (say, sharing a session string)?
> This is where state management comes in. You need to maintain state in a nice way centrally. You need to use the Flux architecture, introduced by Facebook.
Cool. So I just use the flux library from Facebook and things will all fall into place?
> Actually flux is a standard, but everyone uses Redux, which is an implementation of that standard. Oh, btw, there are a lot of other implementations, such as alt. The creator of redux seems to be an active guy who helps in the community often, writes long stackoverflow posts, etc. How can someone who writes long posts be wrong?
Hm. Okay. Let me start with this redux. What should I understand?
> It is simple. If you understand Global store, Reducers, Actions, Dispatch and Containers, you have understood redux. Just follow these egghead tutorials.
Okay. I tried following those. They are really beginner-unfriendly. Actually, this series on youtube is better, though a bit outdated and non-standard. I have now done a simple redux toy app.
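A toy app of that kind can be sketched without even installing the redux library; the createStore below is hand-rolled purely to illustrate the store/reducer/action/dispatch vocabulary (the real redux createStore does more, e.g. unsubscribe and middleware support):

```javascript
// Hand-rolled, minimal redux-style store (illustrative sketch only).
function createStore(reducer, initialState) {
  var state = initialState;
  var listeners = [];
  return {
    getState: function () { return state; },
    dispatch: function (action) {
      // the reducer is a pure function: (oldState, action) -> newState
      state = reducer(state, action);
      listeners.forEach(function (l) { l(); });
      return action;
    },
    subscribe: function (listener) { listeners.push(listener); }
  };
}

// A reducer for a trivial counter app.
function counter(state, action) {
  switch (action.type) {
    case "INCREMENT": return state + 1;
    case "DECREMENT": return state - 1;
    default: return state;
  }
}

var store = createStore(counter, 0);
store.subscribe(function () {
  // in a real app, a container component would re-render here
  console.log("state is now", store.getState());
});
store.dispatch({ type: "INCREMENT" }); // state: 1
store.dispatch({ type: "INCREMENT" }); // state: 2
store.dispatch({ type: "DECREMENT" }); // state: 1
```

The whole point of the architecture is visible in those few lines: state changes only ever happen through dispatch, and every subscriber is notified after each change.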
> Try a complex app, with multiple pages and talk to that API that you implemented.
Good idea. I will start with it. Oh, wait. My component has a lot of buttons, text boxes, etc. I need something rudimentary, like getting the values from the username and password input boxes when a "Login" button is clicked. Do I need to make a mess of global state and private-component-specific state? That is so contrary to what we learnt so far.
> Think again. Is there any alternative to this?
Maybe I can have Actions, ActionCreators and State variables for each field in each component, centrally maintained in the global store? That will be a looooot of boilerplate.
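To make the complaint concrete, here is what that per-field boilerplate would look like for just the login form; this is an illustrative sketch (all names are made up, and this is not redux-form code), with one action type, one action creator and one state slice per input:

```javascript
// Per-field boilerplate sketch: two fields already need two of everything.
var SET_USERNAME = "SET_USERNAME";
var SET_PASSWORD = "SET_PASSWORD";

function setUsername(value) { return { type: SET_USERNAME, value: value }; }
function setPassword(value) { return { type: SET_PASSWORD, value: value }; }

// Reducer holding the form state centrally in the global store.
function loginForm(state, action) {
  state = state || { username: "", password: "" };
  switch (action.type) {
    case SET_USERNAME:
      return Object.assign({}, state, { username: action.value });
    case SET_PASSWORD:
      return Object.assign({}, state, { password: action.value });
    default:
      return state;
  }
}
```

Multiply this by every text box, checkbox and button in the app and the boilerplate problem is obvious, which is exactly the niche form libraries try to fill.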
> Ahem. Maybe you should start using the redux-form library. It will minimize your workload and optimize the boilerplate.
Is it well maintained? It has plenty of github stars, but what if the bus factor is low and the primary author loses interest when he gets a dayjob somewhere else? Also, it is already at version 6. Wasn't react itself announced just three years ago? Why are there already 6 major versions of this library? Will this change again if I depend on it?
> Hrm. How long has it been since you began this exercise?
It has been about a month or so, learning only during late nights (after the dayjob and getting the kid to sleep, etc.) and occasionally on weekends. Already I am tired. Maybe this javascript-fatigue is real.
> Now that you have learnt React and Redux, and experienced first-hand how much time it takes to identify the quintessential combination of libraries, you should not attempt to build anything in your free time; you should be careful in choosing these technologies for a proper dayjob.
If all these javascript-fatigue posts are to be believed, the alternatives are equally bad if not worse. Angular 2 broke APIs in the RC stage and does not offer a guarantee not to break APIs even after release, it seems. Anyway, I started this project to learn about react, and I can say I know my way around react. It is a different question whether I want to choose UI programming as a full-time profession at all. The current flux (not to be confused with the architecture) of things makes it extremely painful. I think people choose mobile-first development not because of product requirements but because of javascript-fatigue.
PS: Many detours where I wasted a lot of time have been trimmed from this post, as they are not directly related to ReactJS.
