Rio

I was really pleased to see Endless, the little company with big plans, initiate a GNOME Design hackfest in Rio.

The ground team in Rio arranged a visit to two locations where we met with the users that Endless is targeting. While not strictly a user testing session, it helped us better understand the context of their product and get a glimpse of life in Rocinha, one of Rio's famous favelas, and in the more remote, rural Magé. I probably wouldn't have had a chance to see Brazil that way otherwise.

Points of diversion

During the workshop at the Endless offices we went through many areas we identified as being problematic in both the stock GNOME and Endless OS and tried to identify if we could converge on and cooperate on a common solution. Currently Endless isn't using the stock GNOME 3 for their devices. We aren't focusing as much on the shell now, as there is a ton of work to be done in the app space, but there are a few areas in the shell we could revisit.

GNOME could do a little better in terms of discoverability. We investigated the role of the app picker versus the window switcher in the overview, and the possibility of entering the overview on boot. After we explained some design choices, Endless reconsidered our solution as a good way forward. We also analyzed the unified system menu, window controls, notifications, and the lock screen/screen shield.

Endless demoed how the GNOME app-provided system search has been used to great effect on their mostly offline devices. Think "offline google".

Another noteworthy detail was the use of CRT screens. The new mini devices sport a cinch connection to old PAL/NTSC CRT TVs. Such small resolutions and poor image quality bring extra constraints on the design to keep things legible. This has also had a nice side effect: Endless has investigated some responsive layout solutions for gtk+, which they demoed.

I also presented the GNOME design team's workflow and the free software toolchain we use, and did a little demo of Inkscape for icon design and wireframing, and of Blender for motion design.

Last but not least, I'd like to thank the GNOME Foundation for making it possible for me to fly to Rio.

the avatar of Kohei Yoshida

LibreOffice mini-Conference 2016 in Osaka

Night view in Osaka, overlooking the Metropolitan Expressway.

Keynote

First off, let me just say that it was such an honor and pleasure to have had the opportunity to present a keynote at the LibreOffice mini-Conference in Osaka. It was a bit surreal to be given such an opportunity almost one year after my involvement with LibreOffice as a paid full-time engineer ended, but I’m grateful that I can still tell some tales that people find interesting. I must admit that I haven’t been that active since I left Collabora in terms of the number of git commits to the LibreOffice core repository, but that doesn’t mean my passion for the project has faded. In reality, it is far from it.

There were a lot of topics I could potentially have covered for my keynote, but I chose to talk about the 5-year history of the project, simply because I felt that we all deserved to give ourselves a lot of praise for the numerous great things we’ve achieved in these five years, which not many of us do simply because we are all very humble and always too eager to keep moving forward. I felt that, sometimes, we do need to stop for a moment, look back and reflect on what we’ve done, and enjoy the fruits of our labor.

Osaka

Though I had visited Kyoto once before, this was actually my first time in Osaka. Access from the Kansai International Airport (KIX) into the city was pretty straightforward. The venue was located on the 23rd floor of Grand Front Osaka North Building Tower B (right outside the north entrance of JR Osaka Station), on the premises of GMO DigiRock, who kindly sponsored the space for the event.

Osaka Station north entrance.

Conference

The conference took place on Saturday, January 9th, 2016. The program consisted of my keynote, followed by four regular-length talks (30 minutes each), five lightning talks (5 minutes each), and round-table discussions at the end. Topics of the talks included: potential use of LibreOffice in high school IT textbooks, real-world experiences of large-scale migration from MS Office to LibreOffice, LibreOffice API how-tos, and using LibreOffice with NVDA, the open source screen reader.

After the round-table discussions, we had a social event with beer and pizza before concluding. Overall, 48 participants showed up for the conference.

Conference venue.

Videos of the conference talks have been made available on YouTube thanks to the efforts of the LibreOffice Japanese Language Team.

Slides for my keynote are available here.

Hackfest

We also organized a hackfest on the following day at JUSO Coworking. More than 20 people showed up to work on things like translating UI strings to Japanese, authoring event-related articles, and of course hacking on LibreOffice. I myself worked on implementing simple event callbacks in the mdds library, which, by the way, was just completed and merged to the master branch today.

Many folks hard at work during hackfest.

Conclusion

It was great to see so many faces, new and old, many of whom traveled a long distance to attend the conference. I was fortunate enough to be able to travel all the way from North Carolina across the Pacific, and it was well worth the hassle of jet lag.

Last but not least, be sure to check out the article (in Japanese) Naruhiko Ogasawara has written on the conference. The article covers my keynote in depth and is very well written.

Other Pictures

I’ve taken quite a few pictures of the conference as well as of the city of Osaka in general. Jump over to the Facebook album I made of this event if you are interested.

Why I Strive to be a 0.1x Engineer

There has been more discussion recently about the concept of a “10x engineer”. 10x engineers are (from Quora) “the top tier of engineers that are 10x more productive than the average”.

Productivity

I have observed that some people are able to get 10 times more done than me. However, I’d argue that individual productivity is as irrelevant as team efficiency.

Productivity is often defined and thought about in terms of the amount of stuff produced.

“The effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input”

Diseconomies of Scale

The trouble is, software has diseconomies of scale. The more we build, the more expensive it becomes to build and maintain. As software grows, we’ll spend more time and money on:

  • Operational support – keeping it running
  • User support – helping people use the features
  • Developer support – training new people to understand our software
  • Developing new features – As the system grows so will the complexity and the time to build new features on top of it (Even with well-factored code)
  • Understanding dependencies – The complex software and systems upon which we build
  • Building Tools – to scale testing/deployment/software changes
  • Communication – as we try to enable more people to work on it

The more each individual produces, the slower the team around them will operate.

Are we Effective?

Only a small percentage of things I build end up generating enough value to justify their existence – and that’s with a development process that is intended to constantly focus us on the highest value work.

If we build a feature that users are happy with it’s easy to count that as a win. It’s even easier to count it as a win if it makes more money than it cost to build.

Does it look as good when you compare its cost/benefit to some of the other things that the team could have been working on over the same time period? Everything we choose to work on has an opportunity cost, since by choosing to work on it we are therefore not able to work on something potentially more valuable.

Applying the 0.1x

The times I feel I’ve made most difference to our team’s effectiveness is when I find ways to not build things.

  • Let’s not build that feature.
    Is there existing software that could be used instead?
  • Let’s not add this functionality.
    Does the complexity it will introduce really justify its existence?
  • Let’s not build that product yet.
    Can we first do some small things to test the assumption that it will be valuable?
  • Let’s not build/deploy that development tool.
    Can we adjust our process or practices instead to make it unnecessary?
  • Let’s not adopt this new technology.
    Can we achieve the same thing with a technology that the team is already using and familiar with? “The best tool for the job” is a very dangerous phrase.
  • Let’s not keep maintaining this feature.
    What is blocking us from deleting this code?
  • Let’s not automate this.
    Can we find a way to not need to do it at all?

Identifying the Value is Hard

Given the cost of maintaining everything we build, it would literally be better for us to do 10% of the work and sit around doing nothing for the rest of our time, if we could figure out the right 10% to work on.

We could even spend 10x as long on minimising the ongoing cost of maintaining that 10%. Figuring out what the most valuable things to work on and what is a waste of time is the hard part.

HOWTO: Configure 389-ds LDAP server on openSUSE Tumbleweed

Recently I’ve been setting up LDAP authentication on CentOS servers to give a shared authentication method to all the compute nodes I use for my day job. I use 389-DS as it’s, in my opinion, much better to administer and configure than OpenLDAP (plus, it has very good documentation). As I have a self-built NAS at home (with openSUSE Tumbleweed), I thought it’d be nice to use LDAP for all the web applications I run there. This post shows how to set up 389 Directory Server on openSUSE Tumbleweed, including the administration console.

(Obligatory) disclaimer

While this setup worked for me, there’s no guarantee it will work for you. If something breaks, you get to keep all the pieces. With some adjustments (repo names etc) this might also work on openSUSE Leap 42.1, but I haven’t tested it. Use these instructions at your own risk.

Prerequisites

Your machine should have a FQDN, either a proper domain name, or an internal LAN name. It doesn’t really matter as long as it’s a FQDN.

Secondly, you need to tune a couple of kernel parameters to ensure that the setup won’t scream at you for lack of available resources. In particular, you’ll need to raise the range of local ports available and the maximum number of file descriptors. You can easily do that by creating a file called /etc/sysctl.d/00-389-ds.conf with the following contents:

# Local ports available
net.ipv4.ip_local_port_range = 1024 65000
# Maximum number of file handles
fs.file-max = 64000

After adding it, run sysctl -p /etc/sysctl.d/00-389-ds.conf as root to apply the changes (plain sysctl -p only reads /etc/sysctl.conf).

Installing 389 Directory Server

Afterwards, we’ll need to add the network:ldap OBS project, as the admin bits of 389 in particular aren’t yet available in Tumbleweed. Bear in mind that adding a third-party repository to a Tumbleweed install is unsupported.

zypper ar -f obs://network:ldap Network_Ldap
# Trust the key when prompted
zypper ref

The obs:// scheme automatically adds the “guessed” distribution to your repository (with Leap it might fail though, so beware). Then we install the required packages:

zypper in 389-admin 389-admin-console 389-adminutil 389-adminutil-lang 389-console 389-ds 389-ds-console

Adjusting the configuration to ensure that it works

So far so good. But if you follow the guides now and use setup-ds-admin.pl, you’ll get strange errors and the administration server will fail to get configured properly. This is because of a missing dependency on the apache2-worker package, and because the configuration for the HTTP service used by 389 Directory Server is not properly adjusted for openSUSE: it references Apache 2 modules that the openSUSE package ships built in or under different names, and which thus cannot be loaded.

Fixing the dependency problem is easy:

zypper in apache2-worker

Then, we’ll tackle the configuration issue. Open (as root) /etc/dirsrv/admin-serv/httpd.conf, locate and comment out (or delete) the following line:

LoadModule unixd_module         /usr/lib64/apache2/mod_unixd.so

Then change the mod_nss one so that it reads like this:

LoadModule nss_module         /usr/lib64/apache2/mod_nss.so

Save the file and now you’ll be able to run setup-ds-admin.pl without issues. I won’t cover the process here, there are plenty of instructions in the 389 DS documentation.

After installation: fixing 389-console

If you want to use 389-console on a 64-bit system with OpenJDK, you’ll notice that upon running it, it throws a Java exception saying that some classes (the Mozilla NSS Java classes) can’t be found. This is because the script looks in the wrong library directory (/usr/lib as opposed to /usr/lib64). Edit /usr/bin/389-console and find:

java \
    -cp /usr/lib/java/jss4.jar: # rest of line truncated for readability

and change it to:

java \
    -cp /usr/lib64/java/jss4.jar: # rest of line truncated for readability

Voilà!

389-console working

Bit shifting, done by the << and >> operators, allows C-family languages to express memory and storage access, which is quite important for reading and writing exchangeable data. But:

Question: where does the bit go when shifted left?

Answer: it depends

Long Answer:

// On an LSB/Intel (little-endian) machine a left shift moves
// the bit the opposite way in memory, i.e. to the right. Omg
// The shift operators (<<, >>) follow the MSB scheme,
// with the highest value on the left:
// shift math expresses our written number order,
// 10 is more than 01.
// x left-shifted by n == x << n == x * pow(2, n)
#include <stdio.h> // printf
#include <stdint.h> // uint16_t

int main(int argc, char **argv)
{
  uint16_t u16, i, n;
  uint8_t * u8p = (uint8_t*) &u16; // uint16_t as 2 bytes
  // iterate over all bit positions
  for(n = 0; n < 16; ++n)
  {
    // left shift operation
    u16 = 0x01 << n;
    // show the mathematical result
    printf("0x01 << %u:\t%d\n", n, u16);
    // show the bit position
    for(i = 0; i < 16; ++i) printf( "%u", u16 >> i & 0x01);
    // show the bit location in the actual byte
    for(i = 0; i < 2; ++i)
      if(u8p[i]) 
        printf(" byte[%d]", i); 
    printf("\n");
  }
  return 0;
}

Result on an LSB/Intel machine:

0x01 << 0:      1 
1000000000000000 byte[0] 
0x01 << 1:      2 
0100000000000000 byte[0] 
0x01 << 2:      4 
0010000000000000 byte[0] 
0x01 << 3:      8 
0001000000000000 byte[0]
...

In the MSB view, << moves bits to the left; on an LSB machine that is a lie, and the bits move to the right in memory. For directional shifts I would like to use a separate pair of operators, e.g. <<| and |>>.
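The byte-placement half of the experiment above can also be sketched in Python. Using struct with the '<H' format pins the layout to LSB-first (little-endian) explicitly, so it reproduces what the C program observes on an Intel machine (this is a companion sketch of mine, not part of the original program):

```python
import struct

# Pack a 16-bit value LSB-first, the way an Intel CPU stores it in memory.
for n in (0, 1, 7, 8, 15):
    u16 = 1 << n
    lo, hi = struct.pack('<H', u16)   # two bytes, lowest address first
    byte_index = 0 if lo else 1       # which byte actually holds the set bit
    print('0x01 << %2u = %5u -> byte[%d]' % (n, u16, byte_index))
```

Bits 0 through 7 land in byte[0] and bits 8 through 15 in byte[1], i.e. a "left" shift past bit 7 moves the set bit to the right in memory.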

the avatar of Aleksa Sarai

Dockerinit and Dead Code

After running into insane amounts of very weird issues with gccgo with Docker, some of which were actual compiler bugs, someone on my team at SUSE asked the very pertinent question "just exactly what is dockerinit, and why are we packaging it?". I've since written a patch to remove it, but I thought I'd take the time to talk about dockerinit and more generally dead code (or more importantly, code that won't die).

the avatar of Bruno Friedmann

A brief 360° overview of my first board term

You’ve certainly noticed that I didn’t run for a second term after my first 2 years. This doesn’t mean the election time and the actual campaign are boring 🙂

If you are an openSUSE Member, we really want to have your vote, so go to the Board Election Wiki and form your own opinion.

The ballot should open tomorrow.

board-leaving-tigerfoot

Why not a second term?

Being a board member (present at almost every conference call, reading the mailing lists, and handling other tasks) consumes free time, and the load has only increased during the last semester. We’ve also got some new business opportunities here at Ioda-Net Sàrl in 2015, and those need my attention for the next year(s) as well. I prefer to be a retired board member rather than one unable to handle his responsibilities.
But I’m not against the idea of an "I’ll be back" in the near future. Moreover, with a bit more bandwidth in my free time, I will be able to continue my packaging and other contributions.

What a journey!

With the new campaign running, I found it funny to bring my 2013 platform, written 2 years ago, back to light, and I spent 5 minutes checking the differences with today. I’m inviting you to discover this small interview between me and myself 🙂

1. Which sentences would you say again today?

I’m a "Fucking Green Dreamer" of a world of collaboration, which will one day offer Freedom for everyone.

Clearly still a day-by-day mantra and leitmotif. But even if I’m dreaming, I never forget that
Freedom has a price: Freedom will only be what you do for it.

2. Which thing would you not say again, or say differently?

Well, as there’s no joker card, let’s start with this one:
"A Visible Active Board > A Visible Active Project." With experience, time, and interaction (whoever said "fight", get out please :-)), I would say this is not as easy as it sounds at first.
I’m pretty sure I’ve been quieter since I joined the board than before, because it’s too hard to make a clear distinction when you just want to express something as "a simple contributor": you are on the board, and however you try to trick the fact, you’re still a board member.

So making the board visible like some kind of communicating alien will certainly not improve our project that much.
It’s written everywhere, and almost everybody agrees: the board is not the lighthouse in the dark showing you the future.
Anyone, I repeat, anyone in this community is a potential vector of a future for openSUSE.

So in my past platform, "Guardians of the past, Managers of the present, Creators of the future," I would replace "Creators of the future" with something like "enablers".

3. In the last 2 years, where did openSUSE mostly evolve?

I think we moved from a traditional way of building a distribution to some creative alternatives: the rolling-but-tested Tumbleweed and the hybrid Leap are really good things that happened.
I believe that even though we also had some hard times on the mailing lists, which are exhausting, they have a positive aspect too (yeap 🙂). If we fight, it’s certainly because of our faith in something. Is that something well defined? Not sure! But I’m pretty convinced that all the members around have a good image of it.

4. What advice would you give your successor (even if he doesn’t care about it)?

We are a tooling community; we love tools. But beware of one thing: if you get hurt by a tool, you can become the worst asshole you’ve ever met. And with tools, you can also hurt other people.

My advice: watch your steps and keystrokes!

truly-green-yours

5. Something more you would like to share?

I invested time and energy during 2014 and the beginning of 2015 to run booths, give talks, and represent the openSUSE project all around. It was awesome to go to those events and meet people involved in or interested in openSUSE.

Perhaps some of you don’t know it, but we have friends all around us, in many communities, and when the magic of working together appears, it just blasts my heart.

Thanks for reading, and for your support during this period.

Truly green yours

openSUSE :: MySQL & Ruby

I finally found some time to get acquainted with Ruby. I had been eyeing it for probably five years. First SUSE started building its web services in Ruby, then WebYaST came out… Wherever SUSE is involved, Ruby is used everywhere. Finally, YaST itself was rewritten in Ruby, which means openSUSE became the first distribution whose installer is written entirely in this language. Today it simply can’t be ignored any more. So, I’m going to start the climb as quickly as possible. Are you with me? 🙂

I’ll show that Leap 42.1 is perfectly suited for learning Ruby. As an example, let’s look at working with MySQL (at work I constantly have to write standalone programs whose behavior depends on data in a database); something as simple as possible.

If you have already worked with databases, for example in Python or Lisp, you will understand the code below immediately. require loads the mysql module. We create a mysql object, con. Using its query method we issue an SQL query. We iterate over the result with each do, binding each row of the output to the name row. I use puts to print information to the screen. Ruby also has the familiar print; the difference is that puts appends a newline to the output, while print does not. Lines 14–16 handle an error (printing it, but letting the program continue) if one occurs while trying to connect and fetch data. At the end we call close, which closes the connection.

#!/usr/bin/ruby
require 'mysql'

begin
    con = Mysql::new('10.10.10.10',
                     'login',
                     'password',
                     'centreon')

    con.query('SHOW columns FROM nagios_server').each do |row|
         puts "#{row[0]} #{row[1]} #{row[2]} #{row[3]}"
    end

    rescue Mysql::Error => e
        puts e.errno
        puts e.error
    ensure
        con.close if con
end

On the first run (on a fresh Leap 42.1 x86_64) we get an error:

> ./mysql-test.rb
/usr/lib64/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55: \
                   in `require': cannot load such file -- mysql (LoadError)
        from /usr/lib64/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from ./mysql-test.rb:2:in `'

Everything is clear here: require cannot find mysql. To install it we’ll use gem, the analogue of Lisp’s QuickLisp or Python’s pip.

> gem --help
RubyGems is a sophisticated package manager for Ruby.  This is a
basic help message containing pointers to more information.

  Usage:
    gem -h/--help
    gem -v/--version
    gem command [arguments...] [options...]

  Examples:
    gem install rake
    gem list --local
    gem build package.gemspec
    gem help install

  Further help:
    gem help commands            list all 'gem' commands
    gem help examples            show some examples of usage
    gem help platforms           show information about platforms
    gem help                     show help on COMMAND
                                   (e.g. 'gem help install')
    gem server                   present a web page at
                                 http://localhost:8808/
                                 with info about installed gems
  Further information:
    http://guides.rubygems.org

> gem list --local mysql

*** LOCAL GEMS ***


>

Not a single mysql gem is installed on the system.
Note the output of gem’s help: for the list command it only mentions the --local argument. There is also --remote, which shows the gems available for installation. I’m not quoting its output here because the list is too long.
To install, we use the install command:

# gem install mysql
Fetching: mysql-2.9.1.gem (100%)
Building native extensions.  This could take a while...
                 
ERROR:  Error installing mysql:
ERROR: Failed to build gem native extension.
/usr/bin/ruby.ruby2.1 extconf.rb
mkmf.rb can't find header files for ruby at /usr/lib64/ruby/include/ruby.h
extconf failed, exit code 1 

Gem files will remain installed in
/usr/lib64/ruby/gems/2.1.0/gems/mysql-2.9.1 for inspection.
Results logged to
/usr/lib64/ruby/gems/2.1.0/extensions/x86_64-linux/2.1.0/mysql-2.9.1/gem_make.out 

And I’ll tell you honestly: this is how it goes with many gems 🙂

But actually nothing terrible has happened. The thing is, the gem needs a file from an RPM package. Some RPM package… We’ll have to invoke zypper ourselves. Which package should be installed? I was lucky: I remembered that one RPM package also had to be installed in the Common Lisp case. There it was libmysqlclient; for Ruby it’s libmysqlclient-devel. I also strongly recommend installing the ruby-devel package and the whole devel_basis pattern, which contains many libraries the interpreter needs.

> sudo zypper in -t pattern devel_basis
> sudo zypper in ruby-devel libmysqlclient-devel

After that, let’s try to install the mysql gem again:

> gem install mysql
Building native extensions.  This could take a while...
Successfully installed mysql-2.9.1
Parsing documentation for mysql-2.9.1
Installing ri documentation for mysql-2.9.1
Done installing documentation for mysql after 0 seconds
1 gem installed

> echo $?
0

> gem list --local mysql

*** LOCAL GEMS ***

mysql (2.9.1)

Looks better, doesn’t it? Now let’s go back to the code above. Come up with some simple SQL query for a test, something like show tables, for example. The output of the script above looks like this:

./mysql-test.rb
id int(11) NO PRI
name varchar(40) YES 
localhost enum('0','1') YES 
is_default int(11) YES 
last_restart int(11) YES 
ns_ip_address varchar(255) YES 
ns_activate enum('1','0') YES 
ns_status enum('0','1','2','3','4') YES 
init_script varchar(255) YES 
monitoring_engine varchar(20) YES 
nagios_bin varchar(255) YES 
nagiostats_bin varchar(255) YES 
nagios_perfdata varchar(255) YES 
centreonbroker_cfg_path varchar(255) YES 
centreonbroker_module_path varchar(255) YES 
centreonconnector_path varchar(255) YES 
ssh_port int(11) YES 
ssh_private_key varchar(255) YES 
init_script_snmptt varchar(255) YES 

For comparison, this is what the output of our SQL query looks like in mysql(1):

> SHOW columns FROM nagios_server;
+----------------------------+---------------------------+------+-----+---------+
| Field                      | Type                      | Null | Key | Default |
+----------------------------+---------------------------+------+-----+---------+
| id                         | int(11)                   | NO   | PRI | NULL    |
| name                       | varchar(40)               | YES  |     | NULL    |
| localhost                  | enum('0','1')             | YES  |     | NULL    |
| is_default                 | int(11)                   | YES  |     | 0       |
| last_restart               | int(11)                   | YES  |     | NULL    |
| ns_ip_address              | varchar(255)              | YES  |     | NULL    |
| ns_activate                | enum('1','0')             | YES  |     | 1       |
| ns_status                  | enum('0','1','2','3','4') | YES  |     | 0       |                         
| init_script                | varchar(255)              | YES  |     | NULL    |                              
| monitoring_engine          | varchar(20)               | YES  |     | NULL    |                
| nagios_bin                 | varchar(255)              | YES  |     | NULL    |                    
| nagiostats_bin             | varchar(255)              | YES  |     | NULL    |                                 
| nagios_perfdata            | varchar(255)              | YES  |     | NULL    |                                  
| centreonbroker_cfg_path    | varchar(255)              | YES  |     | NULL    |                                          
| centreonbroker_module_path | varchar(255)              | YES  |     | NULL    |                                  
| centreonconnector_path     | varchar(255)              | YES  |     | NULL    |                                   
| ssh_port                   | int(11)                   | YES  |     | NULL    |                                           
| ssh_private_key            | varchar(255)              | YES  |     | NULL    |                              
| init_script_snmptt         | varchar(255)              | YES  |     | NULL    |                                              
+----------------------------+---------------------------+------+-----+---------+
19 rows in set (0.00 sec)

That’s what my first step in learning Ruby looks like. After trying out a couple of other libraries (for example SNMP, threading, or SMTP), I need to take on the data types; they are the foundation of the language. While the libraries work pretty much 1:1 as in Python, the differences in data types deserve to be taken more seriously. I’m also interested in the style of the language, namely the functional paradigm. Rumor has it that it’s easier to write functional code in Ruby than in Python. I still have to see that for myself.

Finally, I’ll say that there is a lot of open documentation on Ruby, and more and more of it is now appearing in Russian as well. Good luck with your studies. Cheers 😉

Keras for Binary Classification

So I didn’t get around to seriously playing with Keras (a powerful library for building fully-differentiable machine learning models, aka neural networks), beyond running a few examples, until now. And I have been a bit surprised by how tricky it actually was for me to get a simple task running, despite (or maybe because of) all the docs available already.

The thing is, many of the “basic examples” gloss over exactly what the inputs and especially the outputs look like, and that’s important. Especially since for me, the archetypal simplest machine learning problem is binary classification, but in Keras the canonical task is categorical classification. Only after fumbling around for a few hours did I realize this fundamental rift.

The examples (besides LSTM sequence classification) silently assume that you want to classify into categories (e.g. to predict words etc.), not do a binary 1/0 classification. The consequence is that if you naively copy the example MLP at first, before learning to think about it, your model will never learn anything and, to add insult to injury, will always show the accuracy as 1.0.

So, there are a few important things you need to do to perform binary classification:

  • Pass output_dim=1 to your final Dense layer (this is the obvious one).
  • Use sigmoid activation instead of softmax – obviously, softmax on single output will always normalize whatever comes in to 1.0.
  • Pass class_mode='binary' to model.compile() (this fixes the accuracy display, possibly more; you want to pass show_accuracy=True to model.fit()).
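The sigmoid-vs-softmax point above can be verified without Keras at all: softmax over a single output is identically 1.0, whereas a sigmoid still discriminates. A minimal pure-Python sketch (the function names here are mine, not Keras API):

```python
import math

def sigmoid(x):
    # Squashes a single logit into (0, 1); usable as P(class == 1).
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    # Normalizes a vector of logits so the outputs sum to 1.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

for logit in (-3.0, 0.0, 3.0):
    print('logit=%+.1f  sigmoid=%.3f  softmax=%s'
          % (logit, sigmoid(logit), softmax([logit])))
# softmax([x]) is always [1.0], whatever x is: a single softmax output
# can never discriminate, and accuracy against constant all-ones
# predictions then reads as a perfect 1.0.
```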

Other lessons learned:

  • For some projects, my approach of first cobbling up an example from existing code and then thinking harder about it works great; for others, not so much…
  • In IPython, do not forget to reinitialize model = Sequential() in some of your cells – a lot of confusion ensues otherwise.
  • Keras is pretty awesome and powerful. Conceptually, I think I like NNBlocks‘ usage philosophy more (regarding how you build the model), but sadly that library is still very early in its inception (I have created a bunch of gh issues).

(Edit: After a few hours, I toned down this post a bit. It wasn’t meant at all to be an attack at Keras, though it might be perceived by someone as such. Just as a word of caution to fellow Keras newbies. And it shouldn’t take much to improve the Keras docs.)

Some fixes for openSUSE 42.1

1. When the kernel boots, the system hangs dead and… after 20 minutes it boots after all. In 13.2 I saw this on the kernel-desktop kernel and cured it back then by switching to kernel-default. The openSUSE 42.1 image offers no such choice of kernels, and the new kernel-default started hanging at boot in exactly the same way. A very simple solution was found: add dis_ucode_ldr to the kernel boot parameters. Found here.

2. On some machines the system freezes by itself from time to time, for no visible reason. dmesg shows:

[   35.962355] systemd-journald[525]: File /var/log/journal/fde0e1438648364a342657f95654294e/user-1000.journal corrupted or uncleanly shut down, renaming and replacing.

I noticed the problem only occurs when /var is mounted on a separate partition (which is exactly what the 42.1 installer proposes by default). I managed to cure it as follows:

su -
rm -rf /var/log/*
systemctl daemon-reload
reboot

What I haven’t managed to fix yet:

  1. When updating/installing a kernel, building the initrd and updating the bootloader takes a very long time (40 minutes in my case). Again, only on some configurations, when the system has a certain mix of MBR and GPT disks.
  2. In Plasma 5 the system tray is a complete mess. On top of that, the Dropbox icon display still hasn’t been fixed; xembed-sni-proxy doesn’t help.

В целом, OpenSUSE 42.1 — один из самых глючных и небрежно сделанных релизов OpenSUSE, что, безусловно, расстраивает. Я рассчитывал на качественную сборку KDE Plasma 5, но пока получается, что лучше всего Plasma 5 собрана в Rosa Desktop Fresh R6. Кстати, свежий образ системы «на посмотреть» можно взять тут.