

openSUSE Tumbleweed – Review of the week 2020/25

Dear Tumbleweed users and hackers,

7 days – 6 snapshots. That’s the quick summary of the last week. Felt unspectacular from that point of view, even though, unfortunately, Tumbleweed users did have to fight a few problems this time around. The 6 snapshots released were 0611, 0612, 0614, 0615, 0616 and 0617. No worries, we did not skip 0613 for superstitious reasons – it just so happened that OBS needed a bit more time to build 0612.

The releases brought you these changes:

  • git 2.27.0
  • OpenSSH 8.3p1
  • PostgreSQL 12.3
  • SQLite 3.32.2
  • SSSD 2.3.0
  • Linux kernel 5.7.1: this update causes failures when loading firmware for iwlwifi-based cards. The next kernel should bring the fix for that. As a workaround, you can use the uncompressed kernel-firmware package instead of the kernel-firmware-* subpackages (see the command after this list)
  • KDE Applications 20.04.2
  • KDE Frameworks 5.71.0
  • Mesa 20.1.1
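
A minimal sketch of the iwlwifi workaround mentioned above (assuming the monolithic kernel-firmware package is available in your repositories):

sudo zypper in kernel-firmware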

The following changes are being worked on in staging areas:

  • Linux kernel 5.7.2
  • VLC 3.0.11
  • KDE Plasma 5.19.1
  • LibreOffice 7.0 (beta)
  • RPM change: %{_libexecdir} is being changed to /usr/libexec. This exposes quite a lot of packages that abuse %{_libexecdir} and fail to build (see the check after this list)
  • OpenSSL 3.0
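
You can check what the %{_libexecdir} macro currently expands to on your system with:

rpm --eval '%{_libexecdir}'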


Build tensorflow2 with CUDA support

Build Tensorflow 2.1.1 with CUDA support

Tensorflow 2.1.1 is available in Tumbleweed and Leap 15.2, but has no CUDA support enabled due to legal issues with NVIDIA. As CUDA support speeds up training and inference of neural networks a lot, it is desirable to have it enabled.

This post explains how to build a tensorflow package with CUDA support.

Prerequisites

The CUDA packages for building will be added via the -p flag to the osc command, but due to a bug you need at least osc version 0.169.1. This version is already included in Tumbleweed, but on Leap 15.2 you will have to add the openSUSE Tools repository and update osc with the following commands:

sudo zypper ar https://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_15.2/openSUSE:Tools.repo
sudo zypper ref
sudo zypper up --allow-vendor-change osc
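
You can verify that the installed osc version is recent enough with:

osc --version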

It is also recommended to have a decently equipped machine for building, as at least 10 GB of memory are needed and the build also takes a lot of time. On my twelve-core machine it took more than two hours.

Create CUDA archive

In this step, all the relevant CUDA packages are put into a single repository so that osc can access them. First, create the repository directory ($HOME/opt/cuda-10-1 in this case) with the command

mkdir -p $HOME/opt/cuda-10-1

The packages downloaded in the next steps work with both Tumbleweed and openSUSE Leap 15.2.

CUDA

Download the rpm (local) installer for openSUSE from https://developer.nvidia.com/cuda-10.1-download-archive-update2 and install it. After that, copy the rpms to the local directory with

cp /var/cuda-repo-10-1-local-10.1.*/*rpm $HOME/opt/cuda-10-1
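
For reference, installing the downloaded repository package might look like this (a sketch; the exact file name depends on the version you downloaded):

sudo rpm -i cuda-repo-10-1-local-10.1.*.rpm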

NCCL

Visit https://developer.nvidia.com/nccl/nccl-download and download the local installer for RedHat/CentOS 7. After installing the downloaded rpm, copy the packages with the command

cp /var/nccl-repo-2.7.3-ga-cuda10.1/*rpm $HOME/opt/cuda-10-1

cuDNN

You have to register with NVIDIA in order to download from https://developer.nvidia.com/cudnn . There, download the cuDNN Runtime Library for RedHat/CentOS 7.3 (RPM) and the cuDNN Developer Library for RedHat/CentOS 7.3 (RPM). After that, copy the rpms to $HOME/opt/cuda-10-1.
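
A sketch of that copy step, assuming the files were saved to ~/Downloads (the cuDNN 7 runtime and developer libraries are packaged as libcudnn7 rpms; adjust the path and file names to your download):

cp ~/Downloads/libcudnn7*.rpm $HOME/opt/cuda-10-1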

Create local repository

Now you can create a local repository, which osc can then use, with the commands

cd $HOME/opt/cuda-10-1 
createrepo .

where you might have to install the createrepo package.
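
If it is missing, createrepo can be installed from the standard repositories:

sudo zypper in createrepo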

Compile Tensorflow

With all the packages in place you have to get the tensorflow sources. This can be done with

osc co science:machinelearning/tensorflow2

Tensorflow can now be compiled in the directory science:machinelearning/tensorflow2 with the command

osc build --ccache -p $HOME/opt/cuda-10-1 -k . -M cuda-10-1 openSUSE_Leap_15.2

which will start the build of the tensorflow package. You should always use the --ccache option, as this speeds up rebuilds. The -p flag makes the packages from the local CUDA repository available to the build, -k . keeps the resulting packages in the current directory, and -M cuda-10-1 selects the multibuild flavor. For Tumbleweed, use the command

osc build --ccache -p $HOME/opt/cuda-10-1 -k . -M cuda-10-1 openSUSE_Tumbleweed

After some hours the build should be finished, and the resulting tensorflow rpms will be in the directory where you started the build (thanks to the -k . option).

Installation

For a proper installation, copy the rpms to $HOME/opt/cuda-10-1 and rerun createrepo. You might want to add this repository to your system with

zypper ar $HOME/opt/cuda-10-1 tensorflow

in order to install the package tensorflow2.
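
Installing from the newly added repository could then look like this:

sudo zypper ref
sudo zypper in tensorflow2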

Posted by openSUSE Heroes

Short network timeouts expected during the weekend

The SUSE office in Nuremberg will be disconnected from any former Micro Focus network over the upcoming weekend (2020-06-20/21). Most of the changes should go unnoticed, but SUSE-IT needs to replace some hardware, and they informed us that there might be short outages or curious network timeouts during this time, especially around Sunday afternoon, 2020-06-21.

We will keep you updated via our status page if we become aware of any longer outage.

Posted by Nathan Wolf (openSUSE News)

openSUSE + LibreOffice Conference Update

Organizers of the openSUSE + LibreOffice Conference have slightly adjusted the conference dates from the original Oct. 13 – 16 to the new dates of Oct. 15 – 17.

The new dates are a Thursday through a Saturday. Participants can submit talks for the live conference until July 21 when the Call for Papers is expected to close.

The lengths of the talks for the conference have also been changed. There will be 15-minute short talks, 30-minute normal talks and 60-minute work group sessions to select from. Organizers felt that shortening the talks was necessary to keep attendees engaged during the online conference. The change will also help with the scheduling of breaks, social video sessions and extra question-and-answer segments after each talk.

The live platform will allow presenters with limited bandwidth to play a recorded talk should they wish not to present live. The presenter will have the possibility to control the video (pause, rewind, fast-forward, etc.), which is built into the system.

Organizers have online, live conference sponsorship packages available. Interested parties should contact ddemaio (at) opensuse.org for more information.

Posted by openSUSE News

Plasma "5.19", Virtualbox, Kernel "5.7.1" update in Tumbleweed

An exciting week of openSUSE Tumbleweed snapshots has brought even more KDE software, a new stable kernel and more.

A week ago Plasma 5.19 arrived in the 20200609 snapshot and just a couple of days ago in snapshot 20200614 KDE’s 20.04.2 Apps Update arrived.

A large number of the packages updated in snapshot 20200614 were Applications 20.04.2 packages, which included improvements to the music player Elisa, search tags for the file manager Dolphin and faster editing with KDE’s advanced video-editing application Kdenlive. Several other packages were included in the snapshot, like an update to image editor gimp 2.10.20, which now allows tool-group menus to expand on hover. The Generic Graphics Library, gegl 0.4.24, added new horizontal and vertical shapes for vignettes. Other packages updated in the snapshot were autoyast2 4.3.13, pam 1.4.0, instant messaging client pidgin 2.14.1 and GNOME document reader evince 3.36.5. The snapshot is trending unstable with a few known issues, like a boot loop and a failure to build VMware modules. The current rating was 68 at the release of this article, according to the Tumbleweed snapshot reviewer.

Linux Kernel 5.7.1 arrived in snapshot 20200612, which is also trending unstable at a rating of 76 and could affect people relying on iwlwifi. The Linux tool used to diagnose issues with power consumption and power management, powertop, was updated to version 2.13, and the perl-Mojolicious package was updated to version 8.53 in the snapshot, adding an experimental extname method to Mojo::File.

Moving back to the Tumbleweed snapshots that were trending stable, snapshot 20200611 was trending at a 98 rating and brought multiple updated packages, including ImageMagick 7.0.10.18, which had a colorspace change that removes the ICC profile and frees up memory, and virtualbox 6.1.10, which fixed resizing and multi-monitor handling for Wayland guests. The Advanced Linux Sound Architecture 1.2.3 package addressed issue #34 and makes ALSA relocatable in the filesystem. OpenSSH has new features in its 8.3p1 release; it removes the “ssh-rsa” (RSA/SHA1) algorithm from those accepted for certificate signatures (i.e. the client and server CASignatureAlgorithms option) and will use the rsa-sha2-512 signature algorithm by default when the ssh-keygen(1) CA signs new certificates. Among the notable packages to update in the snapshot were a new major version of Google’s API tool package nodejs-common 4.0, git 2.27.0, postgresql 12.3, redis 6.0.5, sqlite 3.32.2 and GnuTLS 3.6.14, which fixed a memory leak that could have led to a DoS attack against Samba servers.

Snapshot 20200610 updated several YaST packages, including yast2 4.2.84, which improved the stopping and starting of system services, and yast2-bootloader 4.3.3, which enhanced disk type detection to cover multipath in s390 secure boot. Plus, four months of translations for the yast2-trans package were updated. Regular expression library RE2 had a month’s worth of updates, and rebootmgr 1.2 changed to depend on dbus instead of the network with regard to disabled etcd support. The snapshot is likely to record a stable rating of 96, according to the Tumbleweed snapshot reviewer.

The excitement around the release of Plasma 5.19 didn’t disappoint, as the widgets and wallpaper provided an eye-catching background. The arrival of the “Polished Plasma” was well received and the features look fantastic. Plasma packages weren’t the only packages to arrive in snapshot 20200609. A new major version of Mozilla Firefox, 77.0.1, allows for an easier way to view and manage web certificates. Stability improvements were made with the minor update of Mozilla Thunderbird 68.9.0. BitTorrent client transmission updated to the 3.0 major version and allows the Remote Procedure Call server to listen on an IPv6 address. Among the other packages to update in the snapshot were harfbuzz 2.6.7, poppler 0.89.0, nodejs14 14.4.0, iptables 1.8.5 and several GNOME packages, including a minor update of gnome-software to version 3.36.1 and gnome-shell 3.36.3. The snapshot recorded a stable rating of 91, according to the Tumbleweed snapshot reviewer.

Posted by Karatek’s Blog

Data Protection in Untis Messenger

Some time ago, driven by the necessities of the corona pandemic, our school introduced the messenger app “Untis Messenger” for internal school communication. It is a rebranded version of the messenger made by the Austrian company Grape, which is actually designed for use in businesses.

The problem with data protection

When reading the privacy policy, however, I came across the following paragraph:

Quote from grape.io/privacy: “Some service providers are situated outside of the European Union, namely the USA. Therefore, Data is transferred to recipients in third countries, all of which adhere to the EU-US privacy shield.”

The companies named explicitly are IXOLIT GmbH, Google Inc., Amazon Web Services Inc., Zendesk Inc., Apple Inc., Microsoft Corp. and Hubspot Inc. On its website, however, the Austrian software developer writes the following (quoted from untis.at/produkte/webuntis-das-grundpaket/messenger):

Your data is stored securely and does not leave the European Union.

Contacting Untis

Since these statements contradict each other, I contacted Untis by e-mail on June 4, 2020, asking the company to clear up the discrepancy. When I still had not received an answer by June 14, more than a week later, I e-mailed Untis again, this time addressing the data protection officer, Dr. David Huemer. In that e-mail I also mentioned the missing reply from the Untis office, and I sent a copy of the mail to office@untis.at again. A few minutes later I received a confirmation of receipt, informing me that the office would only be staffed to a limited extent until January 6, 2020, and wishing me “relaxing holidays and a happy new year.”

The company’s answers

The next day, namely today, June 15, 2020, two replies arrived. The data protection officer told me that Untis Messenger is a customized version of Grape’s messenger, which is why its privacy policy had been referenced. This information was not new to me, as it is easy to find with a bit of research, but it shows good will. Dr. Huemer also explained that special conditions apply to Untis Messenger and that the data is processed exclusively within the EU; Grape, however, has no adapted privacy notice for this case. An exception applies to push notifications: since there is currently no other technical way to deliver push messages than using the respective services from Google and Apple, no alternatives are used. That is not entirely true, as for example this StackOverflow post shows, but in most cases using Firebase Cloud Messaging is the simplest and most energy-efficient solution.

Another mail arrived as well, this time from the office. It spells out exactly what each service provider is used for:

  • Amplitude: The Grape software also has an integrated video component (“jitsi”) which uses this tracker; VoIP is not active for Untis and is therefore not used. Thanks for the pointer, though; this tracker has already been deactivated.
  • Appdynamics: This is a performance profiling and error reporting facility and is only available for on-premises customers who run their own server and want to log data to Appdynamics. The Untis server does not have Appdynamics configured.
  • Google Analytics: Likewise not active for Untis; it has already been removed completely from the code (thanks for the hint).
  • Google Crashlytics: Used exclusively for error reporting to make the application more stable. The less data we receive here, the happier we are.
  • Google Firebase Analytics: Not used, but this tracker is flagged because firebase.core and firebase.messaging (FCM, push notifications) are imported.
  • Google Tag Manager: Part of Google Play Services Analytics, which was used for the cloud variant and is now being removed as well.

The only data from Grape that passes through American servers is push notifications, since there is technically no other reliable solution on the market. We are ourselves of the opinion that a remedy is needed to make a completely Google-free smartphone possible, but the most complicated part at the moment is finding a stable alternative to FCM push notifications. We do, however, have a data processing agreement with Google covering push notifications.

All in all, the company’s employees were very friendly and helpful; they even offered to talk to me about the topic on the phone, which I no longer see any need for.

Further information

I am publishing this information here to save other users with similar concerns the effort of contacting Untis themselves. Should the company object to the publication on this homepage, I will of course take this article offline.


Posted by Santiago Zarate

Using python virtualenv inside vscode

Quick and dirty

  • Install python3-virtualenvwrapper (via pip or via package manager)
  • Export a workon directory: export WORKON_HOME=/home/foursixnine/Projects/python-virtualenv
  • source virtualenvwrapper
foursixnine@deimos:~/Projects> source virtualenvwrapper    
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/premkproject
...
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/get_env_details
  • mkvirtualenv newenv
foursixnine@deimos:~/Projects> mkvirtualenv newenv
created virtual environment CPython3.8.3.final.0-64 in 115ms
  creator CPython3Posix(dest=/home/foursixnine/Projects/python-virtualenv/newenv, clear=False, global=False)
  seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/foursixnine/.local/share/virtualenv/seed-app-data/v1.0.1)
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/newenv/bin/predeactivate
...
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/newenv/bin/get_env_details
  • By this point, you’re already inside newenv:
(newenv) foursixnine@deimos:~/Projects> 
  • You can create multiple virtual environments and switch among them using workon $env, as shown below, so long as you have sourced virtualenvwrapper and your $WORKON_HOME is properly defined.
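
A quick sketch of the switching workflow (standard virtualenvwrapper commands):

workon newenv
# ... work inside newenv ...
deactivate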

Real business

  • Now, if you want to use vscode, remember that you will need to properly define python.pythonPath for your workspace/project (I’m new to this, don’t hang me in a public square, ok?). In this case, my env is called linkedinlearningaiml:
{
    "python.pythonPath": "/home/foursixnine/Projects/python-virtualenv/linkedinlearningaiml/bin/python"
}

Now your python code will be executed within the context of your virtual environment, so you can get down to serious (or not at all serious) python development without screwing up your host or polluting its dependencies and stuff.

PS: Since I wanted to be able to run standalone python files, I also needed to change my launch.json a bit (maybe this is not needed?):

    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "cwd": "${fileDirname}"
        }
    ]
}

And off you go: that’s how to use a python virtualenv inside vscode.

Et voilà, ma chérie! It's alive!

Posted by Klaas Freitag

Open Search Foundation

Recently I learned about the Open Search Foundation on public broadcast radio (Bayern 2 radio article). That surprised me: I had not heard about the OSF before, even though I am active in the field of free software and culture, yet this new foundation had already made it into mainstream broadcasting. Reason enough to take a closer look.

It is a very good sign to have the topic of internet search in the news. One company has a gigantic market share in search, which is indeed a threat to the freedom of internet users. Being findable on the web is the key to success for whatever message or service a website offers, and all of that is controlled by one enterprise driven by commercial interests. A broad audience should be aware of that.

The Open Search Foundation has the clear vision of building up a publicly owned search index as an alternative for Europe.

Geographical and Political Focus

The whitepaper talks about working on the search machine specifically for Europe. It mentions that there are search indexes in the US, China and Russia, but none rooted in Europe. While this is a geographical statement in the first place, it is of course also a political one, because some of the existing services are probably politically controlled.

It is good to start with a focus on Europe, but the idea of a free and publicly controlled project should not be limited to Europe’s borders. In fact, it will not stop there if it is attractive, because it might offer a way to escape from potentially controlled systems.

On the other hand, Europe (as opposed to any single European country alone) seems like a good base to start this huge effort from, as it can come up with the needed resources.

Organization

The founding members of the Open Search Foundation are not well-known members of the wider open source community. That is good, as it shows that the topics around the free internet do not only concern nerds in the typical communities, but also people who work for an open and future-proof society in other areas such as academia, research and medicine.

On the other hand, an organization like, for example, Wikimedia e.V. might have been a more obvious candidate to address this topic. Neither on the website nor in the whitepaper did I find mentions of any of the “usual suspects” or other organizations and companies that have already tried to set up alternative indexes. I wonder if there have been discussions, cooperation or plans to work together?

I am very curious to see how the collaboration between the more “traditional” open data/open source community and the Open Search Foundation will develop, as I think it is crucial to combine all players in this area without falling into the “endless discussion trap” of not achieving tangible results. It is a question of building an efficient community.

Pillars of Success

Does the idea of the OSF have a realistic chance to succeed? The following four pillars might play an important role in the success of the idea of building the free search index of the internet:

1. Licenses and Governance

The legal framework has to be well defined and thought through, so that it remains resilient in the long term. As there is huge commercial potential in controlling this index, parties may well try to gain control of it.

Only a strong governance and legal framework can ensure that the idea lasts.

The OSF mentions in the whitepaper that setting this up is one of the first steps.

Resources

A search index requires big amounts of computing power in the wider sense, including storage, networking, redundancy and so on. Additionally, there need to be people who take care of all that, which requires financial support for staffing, marketing, legal support and more.

The whitepaper mentions ideas to collect the computing power from academia or from company donations.

For the financial backing, the OSF will have to find sources such as EC money, funds from governments and academia, and maybe private fundraising. Organizations like Wikimedia already have experience with that.

If that is not enough, the idea of selling better search results for money or offering SEO help to fund development will quickly come up. These will be interesting discussions that require the strong governance mentioned above.

3. Technical Excellence

Who will use a search index that does not come up with reasonable search results?
To be able to compete with the existing solutions, which have even made it into our daily communication habits, the service needs to be just great in terms of search results and user experience.

Many existing approaches that use the Google index as a backend have already shown that even then it is not easy to provide comparable results.

It is a fact that users of the commercial competition trade their personal data for optimal search results, even if they don’t do so consciously. That is more difficult for a privacy-oriented service, so this is another handicap.

The whitepaper mentions ideas on how to work on this huge task and also accepts that it will be challenging. But that is no reason not to try. We all know plenty of examples where these kinds of tasks succeeded even though nobody believed in them at the beginning.

4. Community

To achieve all these points, a strong community is the key factor.

There need to be people who do technical work like administering the data centers, developers who code, technical writers for documentation, translators and much more. But that is only the technical part.

For the financial, marketing and legal support, other people are needed, not to speak of political lobbying and such.

All these parts have to be built up, managed and kept intact in the long term.

The Linux kernel, which was mentioned as a model in the whitepaper, is different. Not even the technical work is comparable between the free search index and the Linux kernel.

The long-term, stable development of the Linux kernel is based on people who work full time on the kernel while being employed by companies that are actually competitors. But on the kernel, they collaborate.

This way, the companies share the cost of inevitable base development work. Their differentiators in the market do not depend on their work on the kernel, but on the levels above the kernel.

How is that for the OSF? I fail to see how enough sustainable business can be based on an open, privacy-respecting search index for companies to be happy to fund engineers working on it.

Apart from that, the kernel had the benefit of strong companies like Red Hat, SUSE and IBM pushing Linux in the early days, so no special marketing budgets etc. were needed for the kernel specifically. That too is different for the OSF, as quite some marketing and community management money will be required to get started.

Conclusion

Building a lasting, productive and well-established community will be the vital question for the whole project, in my opinion. Offering a great idea, which this initiative without question is, will not be enough to motivate people to participate long term.

There has to be an interesting offer for potential contributors at all levels, from individuals and companies contributing, to universities donating hardware, to governments and the European Community providing money. There needs to be some kind of benefit they gain from their engagement in the project. It will be interesting to see if the OSF can come up with a model that gets this kickstarted.

I very much hope that this gains traction, as it would be an important step towards a more free internet again. And I also hope that there will be collaboration on this topic with the traditional free culture communities and their foundations.

Posted by Nathan Wolf

Noodlings 14 | LeoCAD, DeWalt and a UPS

Dusting off for the 14th installment: the 14th Noodling of nonsense and clamoring.

  • LeoCAD from Design to Publication: designing, organizing the timeline and publishing a MOC (My Own Creation) on Rebrickable.com using LeoCAD on openSUSE
  • DeWalt cordless power tool platform: a little trip outside the cubicle for my appreciation of a great cordless tool platform that …

Continue reading Noodlings 14 | LeoCAD, DeWalt and a UPS