
Richard Brown

Changing of the Guard

Dear Community,

After six years on the openSUSE Board and five as its Chairperson, I have decided to step down as Chair of the openSUSE Board effective today, August 19.

This has been a very difficult decision for me to make, with reasons that are diverse, interlinked, and personal. Some of the key factors that led me to make this step include the time required to do the job properly, and the length of time I’ve served. Five years is more than twice as long as any of my predecessors. The time required to do the role properly has increased and I now find it impossible to balance the demands of the role with the requirements of my primary role as a developer in SUSE, and with what I wish to achieve outside of work and community. As difficult as it is to step back from something I’ve enjoyed doing for so long, I am looking forward to achieving a better balance between work, community, and life in general.

Serving as member and chair of the openSUSE Board has been an absolute pleasure and highly rewarding. Meeting and communicating with members of the project as well as championing the cause of openSUSE has been a joyous part of my life that I know I will miss going forward.

openSUSE won’t get rid of me entirely. While I do intend to step back from any governance topics, I will still be working at SUSE in the Future Technology Team. Following SUSE’s Open Source policy, we do a lot in openSUSE. I am especially looking forward to being able to focus on Kubic & MicroOS much more than I have been lately.

As I’m sure it’s likely to be a question, I wish to make it crystal clear that my decision has nothing to do with the Board’s ongoing efforts to form an independent openSUSE Foundation.

The Board’s decision to form a Foundation had my complete backing as Chairperson, and it will continue to have my backing as a regular openSUSE contributor. I have absolute confidence in the openSUSE Board; indeed, I don’t think I would be able to make this decision at this time if I wasn’t certain that I was leaving openSUSE in good hands.

On that note, SUSE has appointed Gerald Pfeifer as my replacement as Chair. Gerald is SUSE’s EMEA-based CTO, with a long history as a Tumbleweed user, an active openSUSE Member, and an upstream contributor/maintainer in projects like GCC and Wine.

Gerald has been a regular source of advice & support during my tenure as Chairperson. In particular, I will always remember my first visit to FOSDEM as openSUSE Chair. Turning up more smartly dressed than usual, I was surprised to find Gerald, a senior Director at SUSE, diving in to help at the incredibly busy openSUSE booth, and doing so dressed in quite possibly the oldest and most well-loved openSUSE T-shirt I’ve ever seen. When booth visitors came with questions about SUSE-specific stuff, I think he took some glee in being able to point them in my direction while teasingly saying “Richard is the corporate guy here, I’m just representing the community.”

Knowing full well he will continue being so community minded, while finally giving me the opportunity to tease him in return, it is with a similar glee I now hand over the reins to Gerald.

As much as I’m going to miss things about being chairperson of this awesome community, I’m confident and excited to see how openSUSE evolves from here.

Keep having a lot of fun,

Richard

Note: This announcement has been cross-posted in several places, but please send any replies and discussion to the opensuse-project@opensuse.org mailing list. Thanks!


FrOSCon 2019 - openSUSE booth & AppArmor Crash Course

The lucky winner of the openSUSE Jeopardy and the Geeko
AppArmor Crash Course slides, 2019 edition.
Last weekend, I was at FrOSCon - a great Open Source conference in Sankt Augustin, Germany. We (Sarah, Marcel and I) ran the openSUSE booth, answered lots of questions about openSUSE and gave the visitors some goodies - serious and funny (hi OBS team!) stickers, openSUSE hats, backpacks and magazines featuring openSUSE Leap. We also had a big plush geeko, but instead of doing a boring raffle, we played openSUSE Jeopardy where the candidates had to ask the right questions about Linux and openSUSE for the answers I provided.

To avoid getting bored ;-) I did a sub-booth featuring my other two hobbies - AppArmor and PostfixAdmin. As expected, I didn't get too many questions about them, but it was a nice addition and side job while running the openSUSE booth ;-)

I also gave an updated version of my "AppArmor Crash Course" talk. You can find the slides on the right, and the video recording (in German) on media.ccc.de.

Efstathios Iosifidis

These are the amazing GUADEC T-shirts

GUADEC T-shirts

At almost every conference I attend, T-shirts are printed as a "souvenir". About 90% of my wardrobe consists of T-shirts from conferences or from open source projects in general.

Like every year, T-shirts have been printed this year too. Personally, I find them beautiful.
These are the T-shirts you will be able to get at the conference in a few days.

Which one did you like best?
Chun-Hung (sakana) Huang

openSUSE Leap 15.1 installation notes

openSUSE Leap 15.0 is supported only until 2019/11,

so after wrapping up the COSCUP program, it was time to install openSUSE Leap 15.1 on my desktop.

This time I again installed from a USB stick.

== Installation notes ==

I chose the GNOME desktop again this time.

For disk partitioning, I used the expert partitioner:
  • /: 60 GB, btrfs as before, but with “Enable snapshots” unchecked
  • swap: 16 GB
  • /boot/efi: 1 GB, using FAT
  • /home: all the remaining space (126 GB), using xfs

This was a clean installation.
  • Given the earlier flatpak bug and the space snapper takes up, I set / to 60 GB this time and disabled snapshots.

===============

Network Manager:

#yast2  lan
Changed the default to Network Manager


Google Chrome:

There is still a verification issue, but functionality is unaffected.
To sign in to Google, I used the Google Authenticator app for now; the YubiKey will be handled later.

Restoring /home:

Since many configs live in the personal home directory, I first packed the old openSUSE Leap 15.0 /home onto a USB stick with # tar cvf home.tar /home (skip the .gz compression; it is faster),
then restored it on the new machine with the tar command.
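As a sketch of that backup-and-restore round trip (the USB mount path is illustrative, not from the post):

```shell
# Pack the old /home without compression (faster for a one-shot copy to USB)
tar cvf /run/media/usb/home.tar /home

# On the new machine, restore the archive relative to the filesystem root
tar xvf /run/media/usb/home.tar -C /
```

Skipping gzip trades archive size for speed, which pays off when the bottleneck is CPU rather than USB bandwidth.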

Notes
  • ifconfig is not installed by default; use ip address show instead

Disabled GNOME's search feature (via the settings button in the top-right corner), since I don't need it


Chinese input method:

Added a Chinese input method in the system; currently using ibus
  • super key (Windows) + space switches input methods


Removed the USB stick as an installation source
# yast2  repositories 


Freemind:
Installed via one-click install from http://software.opensuse.org/package/freemind
I used the ymp file from the editors repository

Set .mm files to open with freemind


Adding the Packman repository:

Added the community Packman repository with

#yast2  repositories

Firefox Sync:
Signed in to Firefox Sync, which brings back the previously installed plugins

flash-player:
# zypper   install flash-player


Media player:

# zypper  install   vlc vlc-codecs

  • For the MP4 codec, vlc-codecs is the package to install; it requires the Packman repository

Set VLC as the default player for .rmvb and .mp4 files

Installed ffmpeg (this switches the vendor from openSUSE to Packman)
# zypper  install ffmpeg

The benefit is that youtube-dl can then convert downloads to mp3 format


Use youtube-dl -F to inspect the formats available for download

# zypper  install youtube-dl

> youtube-dl  -F  http://www.youtube.com/watch?v=13eLHrDcb1k
[youtube] Setting language
[youtube] 13eLHrDcb1k: Downloading video webpage
[youtube] 13eLHrDcb1k: Downloading video info webpage
[youtube] 13eLHrDcb1k: Extracting video information
Available formats:
22 : mp4 [720x1280]
18 : mp4 [360x640]
43 : webm [360x640]
5 : flv [240x400]
17 : mp4 [144x176]

Specify the format to download (note that -f is lowercase)

> youtube-dl  -f  22  http://www.youtube.com/watch?v=13eLHrDcb1k

Downloading as mp3
The ffmpeg package must be installed first

>youtube-dl    http://www.youtube.com/watch?v=13eLHrDcb1k --extract-audio --audio-format mp3

PDF Viewer installation:
Foxit
  • Download the software's .tar.gz and install it as root

Skype:
The current version is 8.51.0.72


Downloading the RPM and installing it with the package manager is all it takes :)

Used #yast2 sound to adjust the audio


GNOME Extension:

See my earlier tuning notes

> gnome-tweak-tool
Installed:
  • TopIcons
  • NetSpeed


Dropbox:

Current version: 2.10.0
Installed with # zypper install dropbox

After installation, run dropbox start -i in a terminal to finish the setup

Only after setting it up did I notice that Dropbox on Linux now supports XFS again…..

Modify the LS_OPTIONS variable
# vi   /etc/profile.d/ls.bash
Remove -A from root's LS_OPTIONS
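A non-interactive way to make that same edit (assuming -A appears as a standalone word in the LS_OPTIONS line) could look like:

```shell
# Strip the standalone -A flag from LS_OPTIONS in place (GNU sed)
sed -i 's/ -A\b//' /etc/profile.d/ls.bash
```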

.7z support:
# zypper  install p7zip

imagewriter:
# zypper  install imagewriter
Used to create bootable USB sticks

hexchat:
# zypper  install hexchat

rdesktop installation and testing:
#zypper  install freerdp

Usage:
#xfreerdp  -g 1280x1024  -u administrator  HOST_IP

Yubico Key:
If Linux does not detect the Yubico U2F key, the following steps enable support.
I followed https://www.yubico.com/faq/enable-u2f-linux/
Steps:
Save the udev rules to /etc/udev/rules.d/70-u2f.rules
Reboot Linux; after that the key can be used :-)

ansible installation:

Current version: 2.8.1
#zypper  install ansible

Docker installation:

Current version: 18.09.6-ce
#zypper  install  docker

Added the user sakana to the docker group

#systemctl  start  docker
#systemctl  enable   docker


VMware Workstation Pro 15:

Install kernel-default-devel
# zypper   install   kernel-default-devel kernel-source
# ./VMware-Workstation-Full-15.1.0-13591040.x86_64.bundle

If a “kernel headers not found” error appears on startup:
# yast2  sw_single
Select the kernel-default-devel package, open the Versions tab, and tick the matching kernel version


Unchecked “Enable virtual machine sharing and remote access”





smartgit installation:

Download smartgit-linux-19_1_1.tar.gz

Extract it to /opt
# tar  zxvf   smartgit-linux-19_*.tar.gz  -C   /opt/

Create a link so regular users can run it too
# ln  -s   /opt/smartgit/bin/smartgit.sh   /usr/local/bin/smartgit

Install git
# zypper  install  git

Generate a personal ssh key (skipped this time, since the old /home was restored)
> ssh-keygen  -t dsa

Add the ssh public key id_dsa.pub to GitHub under Settings --> SSH and GPG Keys (skipped this time, since the old /home was restored)

Then run the smartgit command as a regular user
> smartgit

This time the “jre path not found” problem for regular users did not occur.

For reference, the fix lives in the ~/.smartgit/smartgit.vmoptions file:
point jre at /opt/smartgit/jre

> cat   ~/.smartgit/smartgit.vmoptions 
jre=/opt/smartgit/jre

Configured it as in the reference above

# zypper  install alacarte
Used alacarte to set up the SmartGit icon

> alacarte
If folders cannot be opened directly after setting this up (right-click a folder --> Open),
Edit --> Preferences --> Tools --> Re-Add Defaults solves it






Azure CLI installation:

Version: 2.0.71

Import the rpm key

Add the Azure CLI repo
# zypper  addrepo --name 'Azure CLI' --check https://packages.microsoft.com/yumrepos/azure-cli azure-cli

Install the azure-cli package
# zypper  install --from azure-cli  -y  azure-cli

Log in to Azure interactively (entering a device code is no longer needed; simply verifying the account is enough)
> az  login

AWS CLI installation:

Version: 1.16.220

Since Python 2 support is about to end, I installed awscli with pip3 this time

# pip3 install awscli

# pip3  install --upgrade pip

# aws --version

aws-cli/1.16.220 Python/3.6.5 Linux/4.12.14-lp151.27-default botocore/1.12.210


Google Cloud SDK (gcloud) installation:

Install gcloud
  • In practice, though, I currently run it in a container


Visual Studio Code:


Install vscode

Install vscode extensions (skipped this time, since the old /home was restored)
  • AWS Toolkit for Visual Studio Code
  • Bracket Pair Colorizer
  • Code Time
  • Git Graph
  • GitHub Pull Requests
  • GitLens
  • Kubernetes
  • Python

Nextcloud client installation and crontab setup:

See my earlier article

# zypper  install  nautilus-extension-nextcloud  nextcloud-client

Start the Nextcloud client as a regular user
and configure the connections and syncing (skipped this time, since the old /home was restored)

Set up syncing of the Dropbox directory into Nextcloud
> crontab  -e

0 22 * * * rsync -a --delete /home/sakana/Dropbox/*  /home/sakana/Nextcloud/Dropbox/


PPSSPP installation:


Installed from the Emulators repository


Filezilla installation:

#zypper  install  filezilla


Sqlitebrowser installation:


=====================================

This installation really took comparatively long :p

~ enjoy it



Efstathios Iosifidis

You will find all the information about GUADEC in the app on your phone

GUADEC 2019, Thessaloniki

You may have read my previous posts about the GUADEC conference. Something new that will be used for the first time this year is an app with all the information an attendee needs to know.

The whole effort is based on Connfa, which is open source software for events, conferences, and the like.

Britt and his team built the app for Android and iPhone devices, and for those who prefer not to install anything, it is also available online.

GUADEC application

First, to download it to your phone, go to Google Play at:

https://play.google.com/store/apps/details?id=org.gnome.guadec


Once you have installed it, you can see the schedule and which activities we will have and when (the parties and so on).


The app has also been built for the iPhone, but truth be told, I could not find it in the App Store to link it here.

Online, you can find it at https://schedule.guadec.org/.
Klaas Freitag

ownCloud and CryFS

It is a great idea to encrypt files on the client side before uploading them to an ownCloud server if that server is not running in a controlled environment, or if one simply wants to act defensively and minimize risk.

Some people think it is a great idea to include the functionality in the sync client.

I don’t agree, because it combines two very complex topics in one code base and makes the code difficult to maintain. The risk of ending up with a code base that nobody is able to maintain properly any more is high. So let’s better avoid that for ownCloud and look for alternatives.

A good way is to use a so-called encrypted overlay filesystem and let ownCloud sync the encrypted files. The downside is that you cannot use the encrypted files in the web interface, because it cannot easily decrypt them. To me, that is not overly important, because I want to sync files between different clients, which is probably the most common use case.

Encrypted overlay filesystems put the encrypted data in one directory called the cipher directory. A decrypted representation of the data is mounted to a different directory, in which the user works.

That is easy to set up and use, and in principle it also works well with file sync software like ownCloud, because it does not store the files in one huge container file that needs to be synced whenever a single bit changes, as other solutions do.

To use it, the cipher directory must be configured as the local sync dir of the client. If a file is changed in the mounted dir, the overlay filesystem changes the crypto files in the cipher dir, and these are synced by the ownCloud client.

One of the solutions I tried is CryFS. It works nicely in general, but is unfortunately very slow together with ownCloud sync.

The reason is that CryFS chunks all files in the cipher dir into 16 kB blocks, which are spread over a set of directories. That is very beneficial because file names and sizes cannot be reconstructed from the cipher dir, but it hits one of the weak spots of ownCloud sync: ownCloud is traditionally a bit slow with many small files spread over many directories. That shows dramatically in a test with CryFS: adding eleven new files with an overall size of around 45 MB to a CryFS filesystem directory keeps the ownCloud client uploading for 6:30 minutes.

Adding another four files with a total size of a bit over 1 MB results in an upload of 130 files and directories, with an overall size of 1.1 MB.

A typical change use case, such as editing an existing office text document locally, is not that bad. Here, CryFS splits an 8.2 kB LibreOffice text document into three 16 kB files in three directories. When one word gets inserted, CryFS needs to create three new dirs in the cipher dir and uploads four new 16 kB blocks.
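A quick back-of-the-envelope check of those block counts (16 kB fixed-size blocks, file sizes taken from the tests above):

```shell
# ceiling(45 MB / 16 kB): minimum number of blocks for the eleven-file, 45 MB test set
echo $(( (45 * 1024 * 1024 + 16383) / 16384 ))   # 2880 blocks

# even a single 8.2 kB document occupies at least one full 16 kB block
echo $(( (8200 + 16383) / 16384 ))               # 1 block
```

So a 45 MB drop turns into thousands of small block files spread over directories, which is exactly the upload pattern ownCloud sync handles worst.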

My personal conclusion: CryFS is an interesting project. It has nice integration into the KDE desktop with Plasma Vault. Splitting files into equal-sized blocks is good because it does not allow guessing data based on names and sizes. However, for syncing with ownCloud, it is not the best partner.

If there is a way to improve the situation, I would be eager to learn about it. Maybe the block size can be increased, or the number of directories limited? The upcoming ownCloud sync client version 2.6.0 also brings optimizations in the discovery and propagation of changes; I am sure that improves the situation.

Let’s see what other alternatives can be found.

Efstathios Iosifidis

GUADEC: BoFs, workshops, and hacking days

GUADEC 2019, Thessaloniki

The main schedule may not have been announced yet because of last-minute changes, but the schedule for the so-called Birds of a Feather sessions, or BoFs, has been published. What does that mean, and why is it called that? Take a look at the wiki for the history.

So let's go through the meetings we will have. If a topic interests you, click on its title to read more, and if you like, you can take part by adding your name. If you cannot, ask me or another one of the organizers to add your name for you.

On August 26:


1. FreeDesktop Dark Style Preference: 10:00-13:00

2. Engagement: 14:00-17:00

3. GTK: 10:00-13:00 and 14:00-17:00

4. GNOME documentation and localization: 14:00-17:00

5. Newcomers Workshop: 10:00-13:00 and 14:00-17:00

6. GNOME OS: 10:00-13:00 and 14:00-17:00

7. Rust + GTK + GStreamer Workshop: 10:00-13:00 and 14:00-17:00

On August 27:


1. GTask: 10:00-13:00

2. Flatpak Donations/Store: 14:00-17:00

3. Rust: 10:00-13:00

4. GStreamer: 14:00-17:00

5. SpinachCon: 10:00-13:00 and 14:00-17:00

6. Vendor Themes: 10:00-13:00

7. Boxes: 14:00-17:00

8. Freedesktop SDK: 10:00-13:00

9. Content Apps & Tracker: 10:00-13:00

10. Diversity: 14:00-17:00

You can see all of them collected on the page https://wiki.gnome.org/GUADEC/2019/Hackingdays


Keeping old kernel versions installed

In its default configuration, openSUSE always keeps the previously installed version when the kernel is updated. It can be selected at boot via the advanced startup options. The big advantage: should the new kernel contain a bug, you can still return to a working system. You can simply keep working with the old kernel and wait for the bug to be fixed.

But what happens at the next update? There are several options. If the next update solves the problem, you can switch back to the current kernel and forget about the old versions. If the problem is not fixed by the next update, there are two possible cases. In the simple case, the system does not boot at all with the new kernel. Then you can keep working with the old kernel, because the system always keeps the currently running kernel installed; as long as newer kernels fail to boot, the old versions cannot be removed. It gets more interesting when the new kernel boots but still does not work correctly. Perhaps the proprietary NVIDIA driver does not run, or a kernel bug only shows up in certain situations. In that case the default configuration would, after what it considers a successful boot, remove the old kernel and keep only the two broken ones.

Fortunately, the set of kernels that are kept can be configured very flexibly. The configuration lives in the file /etc/zypp/zypp.conf; the decisive setting is the multiversion.kernels option, which is set as follows by default:

multiversion.kernels = latest,latest-1,running

This keeps the two newest kernels plus the currently running one. Explicit versions can also be listed here, which makes the line look like this:

multiversion.kernels = latest,latest-1,running,4.12.14-lp151.28.10.1

Importantly, the package version is required here, not the version that the kernel reports via uname -a. The latter outputs the following for the kernel used in this example:

Linux test 4.12.14-lp151.28.10-default #1 SMP Sat Jul 13 17:59:31 UTC 2019 (0ab03b7) x86_64 x86_64 x86_64 GNU/Linux

The corresponding package version can be queried with rpm or zypper:

> rpm -qa kernel-default
kernel-default-4.12.14-lp151.28.10.1.x86_64
kernel-default-4.12.14-lp151.28.13.1.x86_64

> zypper search -s kernel-default
Loading repository data...
Reading installed packages...

S  | Name                 | Type       | Version               | Arch   | Repository
---+----------------------+------------+-----------------------+--------+------------------------
i+ | kernel-default       | package    | 4.12.14-lp151.28.13.1 | x86_64 | Main Update Repository
i+ | kernel-default       | package    | 4.12.14-lp151.28.10.1 | x86_64 | Main Update Repository
v  | kernel-default       | package    | 4.12.14-lp151.28.7.1  | x86_64 | Main Update Repository
v  | kernel-default       | package    | 4.12.14-lp151.28.4.1  | x86_64 | Main Update Repository
v  | kernel-default       | package    | 4.12.14-lp151.27.3    | x86_64 | Main Repository (OSS)
   | kernel-default       | srcpackage | 4.12.14-lp151.28.13.1 | noarch | Main Update Repository
   | kernel-default       | srcpackage | 4.12.14-lp151.28.10.1 | noarch | Main Update Repository
   | kernel-default       | srcpackage | 4.12.14-lp151.28.7.1  | noarch | Main Update Repository
   | kernel-default       | srcpackage | 4.12.14-lp151.28.4.1  | noarch | Main Update Repository
   | kernel-default-base  | package    | 4.12.14-lp151.28.13.1 | x86_64 | Main Update Repository
   | kernel-default-base  | package    | 4.12.14-lp151.28.10.1 | x86_64 | Main Update Repository
   | kernel-default-base  | package    | 4.12.14-lp151.28.7.1  | x86_64 | Main Update Repository
   | kernel-default-base  | package    | 4.12.14-lp151.28.4.1  | x86_64 | Main Update Repository
   | kernel-default-base  | package    | 4.12.14-lp151.27.3    | x86_64 | Main Repository (OSS)
i+ | kernel-default-devel | package    | 4.12.14-lp151.28.13.1 | x86_64 | Main Update Repository
i+ | kernel-default-devel | package    | 4.12.14-lp151.28.10.1 | x86_64 | Main Update Repository
v  | kernel-default-devel | package    | 4.12.14-lp151.28.7.1  | x86_64 | Main Update Repository
v  | kernel-default-devel | package    | 4.12.14-lp151.28.4.1  | x86_64 | Main Update Repository
v  | kernel-default-devel | package    | 4.12.14-lp151.27.3    | x86_64 | Main Repository (OSS)

As the example shows, zypper delivers the easier-to-read output, while rpm's is considerably more compact.

With the working kernel protected from automatic cleanup this way, you can calmly wait for a fixed version. Once the bug has been fixed, and at the latest after the next distribution upgrade, you should reset the configuration so you don't carry old kernels around unnecessarily: they not only take up space, they also lengthen the installation time of some updates.

More information on installing multiple kernel versions can be found in the English reference documentation for openSUSE Leap.

Kubic Project

Kata Containers now available in Tumbleweed

Kata

Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.

We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.

It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.

What is Kata

As already mentioned, Kata is a container runtime focusing on security and on ease of integration with the existing containers ecosystem. If you are wondering what a container runtime is, this blog post by Sascha will give you a clear introduction to the topic.

Kata should be used when running container images whose source is not fully trusted, or when allowing other users to run their own containers on your platform.

Traditionally, containers share the same physical and operating system (OS) resources with host processes, and specific kernel features such as namespaces provide an isolation layer between host and container processes. By contrast, Kata containers run inside lightweight virtual machines, adding an extra isolation and security layer that minimizes the host attack surface and mitigates the consequences of a container breakout. Despite this extra layer, Kata achieves impressive runtime performance thanks to KVM hardware virtualization, and when configured to use a minimalist virtual machine manager (VMM) like Firecracker, a high density of microVMs can be packed onto a single host.

If you want to know more about Kata's features and performance:

  • katacontainers.io is a great starting point.
  • For something more SUSE-oriented, Flavio gave an interesting talk about Kata at SUSECON 2019.
  • Kata folks hang out on katacontainers.slack.com and will be happy to answer any questions.

Why is it important for Kubic and openSUSE

SUSE has been an early and relevant open source contributor to container projects, believing that this technology is the future way of deploying and running software.

The most relevant example is the openSUSE Kubic project, which is a certified Kubernetes distribution and a set of container-related technologies built by the openSUSE community.

We have also been working for some time on well-known container projects like runC, libpod, and CRI-O, and for the past year we have also been collaborating with Kata.

Kata complements other, more popular ways to run containers, so it makes sense for us to work on improving it and to ensure it plugs smoothly into our products.

How to use

While Kata may be used as a standalone piece of software, its intended use is to serve as a runtime integrated into a container engine such as Podman or CRI-O.

This section shows a quick and easy way to spin up a Kata container using Podman on openSUSE Tumbleweed.

First, install the Kata packages:

$ sudo zypper in katacontainers

Make sure your system provides the set of hardware virtualization features required by Kata:

$ sudo kata-runtime kata-check

If no errors are reported, great! Your system is now ready to run Kata Containers.

If you haven’t already, install podman with:

$ sudo zypper in podman

That’s all. Try running your first Kata container with:

$ sudo podman run -it --rm --runtime=/usr/bin/kata-runtime opensuse/leap uname -a
Linux ab511687b1ed 5.2.5-1-kvmsmall #1 SMP Wed Jul 31 10:41:36 UTC 2019 (79b6a9c) x86_64 x86_64 x86_64 GNU/Linux

Differences from runC

Now that you have Kata up and running, let’s see some of the differences between Kata and runC, the most popular container runtime.

When starting a container with runC, container processes can be seen in the host processes tree:

...
10212 ?        Ssl    0:00 /usr/lib/podman/bin/conmon -s -c <ctr-id> -u <ctr-id>
10236 ?        Ss     0:00  \_ nginx: master process nginx -g daemon off;
10255 ?        S      0:00      \_ nginx: worker process
10256 ?        S      0:00      \_ nginx: worker process
10257 ?        S      0:00      \_ nginx: worker process
10258 ?        S      0:00      \_ nginx: worker process
...

With Kata, container processes are instead running in a dedicated VM, so they are not sharing OS resources with the host:

...
10651 ?        Ssl    0:00 /home/marco/go/src/github.com/containers/conmon/bin/conmon -s -c <ctr-id> -u <ctr-id>
10703 ?        Sl     0:01  \_ /usr/bin/qemu-system-x86_64 -name sandbox-<ctr-id> -uuid e54ee910-2927-456e-a180-836b92ce5e7a -machine pc,accel=kvm,kernel_ir
10709 ?        Ssl    0:00  \_ /usr/lib/kata-containers/kata-proxy -listen-socket unix:///run/vc/sbs/<ctr-id>/proxy.sock -mux-socket /run/vc/vm/829d8fe0680b
10729 ?        Sl     0:00  \_ /usr/lib/kata-containers/kata-shim -agent unix:///run/vc/sbs/<ctr-id>/proxy.sock -container <ctr-id>
...

Future plans

We are continuing to work to offer you a great user experience when using Kata on openSUSE by:

  • improving packages quality and stability,
  • delivering periodic releases,
  • making sure that Kata integrates well with the other container projects, like Podman and CRI-O.

As a longer-term goal, we will integrate Kata into the Kubic distribution and into CaaSP, to make them some of the most complete and secure solutions for managing containers.