

MyGica T230C hacking

As DVB-T (the first-generation standard) is being phased out in Germany soon, I got myself a new DVB-T2 stick. The MyGica T230 is supported under Linux and has a quite low price (~20€).

Instead of the expected T230, I received a T230C, which has silently replaced the T230. Although quite similar to the T230, the T230C is currently (as of Linux 4.10-rc2) not supported by the mainline kernel.

Compared to the T230, the T230C uses the same Cypress FX2 CY7C68013A USB bridge chip, a Silabs Si2168-D60 demodulator (a new revision) and a new tuner chip, the Silabs Si2141-A10 (the T230 uses a Si2157).

[Board photos, including an overlay comparison of the two PCBs]

Beyond the bridge/RF chips, the board carries an I2C EEPROM (FX2 firmware?), two LowPowerSemi adjustable synchronous DC/DC buck converters (marking LPS A36j1, LP3202?), a 74ACT1G00 NAND gate (marking A00), two 24MHz oscillators and a bunch of passives.


Install openSUSE Tumbleweed + KDE on MacBook 2015

It is pretty easy to install openSUSE Linux as the operating system on a MacBook. However, there are some pitfalls that can cause trouble. This article gives some hints about a dual-boot setup with OS X 10.10 and the (at the time of writing) current openSUSE Tumbleweed 20170104 (oS TW) on a MacBook Pro from early 2015. A recent Linux kernel, like the one in TW, is advisable as it provides better hardware support.

The live image can be downloaded from www.opensuse.org and written with the ImageWriter GUI to a USB stick of ~1GB. I chose the Live KDE variant and it ran well on a first test. During boot, after the first sound and the display lighting up, hold the Option/Alt key and wait for the disk selection icon. Put the USB key with Linux into a USB port, wait until the removable media icon appears, and select it for boot. For me everything went fine: the internal display, sound, touchpad and keyboard were detected and worked well.

After that test, it was a good time to back up all data from the internal flash drive. I wrote a compressed disk image to a stick using the Unix dd command (sketched below). With that image and the live media I was able to recover in case anything went wrong.

It is not easy to satisfy OS X with its journaled HFS and the newly introduced logical volume layout, which comes with a separate recovery partition directly after the main OS partition. That combination is pretty fragile and should not be touched. The recovery partition can be booted with Command+R pressed. External tools failed for me, so I booted into recovery mode and used OS X's diskutil, or its Disk Utility GUI counterpart. The tool allows splitting the disk into several partitions; the EFI and recovery partitions are hidden in the GUI. The newly created additional partitions can be formatted to exFAT and later be modified for the Linux installation. One additional HFS partition was created for sharing data between OS X and Linux with the comfortable Unix attributes. The well-known exFAT, used by many bigger USB sticks, is a possible option as well, but needs the exfat-kmp kernel module, which is not installed by default due to Microsoft's patent licensing policy for the file system. In order to write to HFS from Linux, any HFS partition must have the journal feature switched off. This can be done inside the OS X Disk Utility GUI by selecting the data partition, holding the Alt key and looking in the menu for the "Disable Journaling" entry.

After rebooting into the live media, I clicked on the Install icon on the desktop background and started openSUSE's YaST tool. Depending on the available space, it might be a good idea to disable the btrfs filesystem snapshot feature, as it can eat up lots of disk space with each update. Another pitfall is the boot stage: select the secure GrubEFI mode there, as Grub needs special handling for the required EFI boot process. That's it. Finish the installation and you should be able to reboot into Linux with the Alt key.
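A minimal sketch of the dd backup step mentioned above (assuming the internal flash drive shows up as /dev/disk0 – verify with diskutil list first – and an external volume named BACKUP):

# identify the internal drive
diskutil list
# write a compressed image of the whole drive to the external volume
sudo dd if=/dev/rdisk0 bs=1m | gzip -c > /Volumes/BACKUP/macbook-flash.img.gz
# restore later, if needed:
# gunzip -c /Volumes/BACKUP/macbook-flash.img.gz | sudo dd of=/dev/rdisk0 bs=1m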

My MacBook unfortunately has a defect: its boot manager is very slow. Erasing and reinstalling OS X did not fix the issue. To work around it, I need to reset the NVRAM by pressing Alt+Cmd+R+P at boot for around 14 seconds until the display goes dark, then hold Alt at the soon-following next boot sound, select the EFI TW disk in the Apple Boot Manager, and can then go fluently through the boot process. Without that extra step, the keyboard and mouse might not respond in Linux at all, except for the power button. A warm reboot from Linux works fine; OS X does a cold reboot and needs the extra sequence.

KDE's Plasma needs some configuration to run properly on a high-resolution display. Additional monitors can be connected and easily configured with the kscreen SystemSettings module. Hibernate works fine. Currently the notebook's SD slot is ignored, and there are no ready oS packages for the FaceTime camera. Battery runtime can be extended by spartan power consumption (less brightness, fewer USB devices and pulseaudio -k; check with powertop), but is not too far from OS X anyway.


How we run our OpenStack cloud

This post is to document how we set up cloud.suse.de, one of our many internal SUSE OpenStack Cloud deployments for use by R&D.

In June 2016 we started the deployment with SOC6 on 4 nodes: 1 controller and 3 compute nodes that also served Ceph (distributed storage) from their 2nd HDDs. Since the nodes are from 2012, they only have 1 Gbit networking and spinning disks. Thus Ceph only delivers ~50 MB/s, which is sufficient for many use cases.

We did not deploy that cloud with HA, even though our product supports it. The two main reasons for that are

  • that it will use up two or three nodes instead of one for controller services, which is significant if you start out with only 4 (and grow to 6)
  • that it increases the complexity of setup, operations and debugging and thus might lead to decreased availability of the cloud service

We also have a limited supply of VLANs: even though technically they are just numbers between 1 and 4095, within SUSE we allocate them centrally so that networks can be switched together across locations. So we could not use VLAN mode in Neutron if we wanted to allow software-defined networking (SDN). (We did not allow it in old.cloud.suse.de and I did not hear complaints, but now I see a lot of people using SDN.)
So we went with ovs+vxlan+dvr (Open vSwitch + Virtual eXtensible LAN + Distributed Virtual Router), because that allows VMs to remain reachable even while the controller node reboots.
But then I found that they cannot use DNS during that time, because distributed virtual DNS was not yet implemented. And ovs has some annoying bugs that are hard to debug and fix. So I built ugly workarounds that mostly hide^Wsolve the problems from our users' point of view.
For the next cloud deployment, I will try linuxbridge+vlan or linuxbridge+vxlan mode instead.
Still, the uptime is pretty good, though it could be better with proper monitoring.
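For reference, the ovs+vxlan+dvr setup boils down to Neutron settings roughly like the following (a sketch only; in our deployment these files are generated by Crowbar, so the values shown here are illustrative):

# /etc/neutron/neutron.conf (controller)
[DEFAULT]
router_distributed = True

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan,flat
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

# /etc/neutron/l3_agent.ini
[DEFAULT]
agent_mode = dvr    # dvr_snat on the controller node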

Because we needed to redeploy multiple times before we got all the details right, and to document the setup, we scripted most of the deployment with qa_crowbarsetup (which is part of our CI) plus extra files in https://github.com/SUSE-Cloud/automation/tree/production/scripts/productioncloud. The only parts not in there are the passwords.

We use proper SSL certs from our internal SUSE CA.
For that we needed to install that root CA on all involved nodes.

We use kvm, because it is the most advanced and stable of the supported hypervisors. Xen might be a possible 2nd choice. We use two custom kvm patches to fix nested virt on our G3 Opteron CPUs.

Overall we use 3 VLANs, one each for the admin, public/floating, and sdn/storage networks.
We increased the default /24 IP ranges because we needed more IPs in the fixed and public/floating networks.

For authentication, we use our internal R&D LDAP server, but since it does not have information about users' groups, I wrote a Perl script to pull that information from the Novell/innerweb LDAP server and export it as JSON for use by the hybrid_json assignment backend I wrote.

In addition, I wrote cloud-stats.sh to email weekly reports about utilization of the cloud, and another script to tell users which instances they still have but might have forgotten about.

On the cloud user side, we and other people use one or more of

  • salt-cloud
  • nova boot
  • salt-ssh
  • terraform
  • heat

to script instance setup and administration.

Overall we are now hosting 70 instance VMs on 5 compute nodes that together cost us less than 20000€.


Modern CUDA + CuDNN Theano/Keras AMI on AWS

Wow, what a jargon-filled post title. Basically, we currently do a lot of our deep learning on the AWS EC2 cloud – but using the GPU there with all the goodies (up to CuDNN, which modern Theano's batch normalization support requires) is a surprisingly arduous process which you basically need to do manually, with a lot of trial and error, googling and hacking. This is awful and mind-boggling, and I hate that everyone has to go through it. So, to fix this bad situation, I just released a community AMI that:

  • …is based on Ubuntu 16.04 LTS (as opposed to 14.04)
  • …comes with CUDA + CuDNN drivers and toolkit already set up to work on g2.2xlarge instances
  • …has Theano and Keras preinstalled and preconfigured so that you can run the Keras ResNet model on a GPU right away (or anything else you desire)

To get started, just spin up a GPU (g2.2xlarge) instance from community AMI ami-f0bde196 (1604-cuda80-cudnn5110-theano-keras), ssh in as the ubuntu@ user and get going! No hassles. But of course, EC2 charges apply.
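If you prefer the command line over the EC2 console, launching an instance looks roughly like this (a sketch; the keypair name is an assumption, and your default security group must allow ssh):

# launch a GPU instance from the AMI (assumes a configured aws CLI and an existing keypair "mykey")
aws ec2 run-instances --image-id ami-f0bde196 --instance-type g2.2xlarge --key-name mykey
# then ssh in as the ubuntu user:
ssh ubuntu@<instance-public-ip>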


Edit (errata): Actually, there's a bug – sorry about that! Out of the box, the nvidia kernel driver is not loaded properly on boot. I might update the AMI later; for now, here's how to fix it manually:

  1. Edit /etc/modprobe.d/blacklist.conf (using for example sudo nano) and append the line blacklist nouveau to the end of that file
  2. Run sudo update-initramfs -u
  3. Reboot. Now, everything should finally work.
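In shell form, the three steps above are:

echo 'blacklist nouveau' | sudo tee -a /etc/modprobe.d/blacklist.conf
sudo update-initramfs -u
sudo reboot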

This AMI was created like this:

  • The stock Ubuntu 16.04 LTS AMI
  • NVIDIA driver 367.57 (older drivers do not support CUDA 8.0, while this is the last driver version to support the K520 GRID GPU used in AWS)
  • To make the driver setup go through, the trick is to apt-get install linux-image-extra-`uname -r` first
  • CUDA 8.0 and CuDNN 5.1 set up from the official though unannounced NVIDIA Debian packages, by replaying the nvidia-docker recipes
  • bashrc modified to include cuda in the path
  • Theano and Keras from latest Git as of writing this blogpost (feel free to git pull and reinstall), and some auxiliary python-related etc. packages
  • Theano configured to use the GPU and Keras configured to use Theano (and the “th” image dim ordering rather than “tf” – this is currently non-default in Keras!); a sketch of both config files follows after this list
  • Example Keras deep learning models, even an elephant.jpg! Just run python resnet50.py
  • Exercise: Install TensorFlow on the system as well, release your own AMI and post its id in the comments!
  • Tip: Use nvidia-docker based containers to package your deep learning software; combine it with docker-machine to easily provision GPU instances in AWS and execute your models as needed. Using this for development is a hassle, though.
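For reference, the Theano/Keras configuration mentioned above boils down to two dotfiles, roughly like this (a sketch; the exact values on the AMI may differ):

# ~/.theanorc
[global]
device = gpu
floatX = float32

# ~/.keras/keras.json
{
    "backend": "theano",
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatX": "float32"
}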

Enjoy!


GNU Screen v.4.5.0

I'm proud to announce the release of GNU Screen v4.5.0. This time it's mostly a bugfix release. We added just one new feature: it is now possible to specify the logfile name with the -L parameter (the default name stays screenlog.0). I also spent some time making the source code a bit cleaner.
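For example (a sketch; in this release the logfile name is passed as an argument directly after -L):

screen -L mysession.log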

As you probably noticed, we were going to release 4.5 before Christmas. Unfortunately, we could not do it because of some internal GNU problems. I apologise for that.

As usual, we merged some community patches from our bug tracking system (small patches were also submitted on IRC), and I would like to thank everyone who contributes to Screen and helps us test the development git version!

For openSUSE users: I have already updated our devel package. It will soon be in Factory and, as usual, after the openQA routine the new package will be available in Tumbleweed.


Happy Birthday ownCloud

Seven years ago at Camp KDE in San Diego, Frank announced a project to help people protect their privacy, building an alternative to Dropbox: ownCloud.

I was there, sharing a room with Frank at the infamous Banana Bungalow. Epic times, I can tell you that - there was lots of rum, lots of rain and loads of good conversations and making new friends.





Since then, a lot has changed. But the people who started building a self-hosted, privacy protecting alternative in 2010 and 2011 are still on it! In 2011, a first meetup was held, and the 5 participants at that meetup recently got on stage at the Nextcloud conference to recall some good memories:



Of course, today we continue the work at Nextcloud, which just yesterday published its latest bugfix and security update. It is great to see so many people have stuck with us for all these years - just this month, the KDE sysadmins migrated their ownCloud instance to Nextcloud!

We'll keep up the good work and you're welcome to join, whether you're looking for a job or just want to code. In both cases I can promise you: working with such a motivated, dedicated, professional team is just plain amazing.

I also published a blog on our Nextcloud blog about this milestone.

EDIT: By the way - there's a meetup tonight in C-Base, Berlin, 19:00 - it would be fun to drink a beer on ownCloud's birthday and talk about the future! Join! It will go on until at least 10 or so, so if you can't be there before then - still come! ;-)


VMware Workstation 12.5.2 patch for Linux Kernel 4.9

I've rounded up the working patches from the public posts and created my own patch files. You can use my updated VMware module compile script to apply them as well; it also does a bit of cleanup. Grab the script and the patch files from here. Once downloaded, make sure they are all in the same directory and that you have made the script executable, then follow the rest of the steps below.

1) The directory should look like this:
# ls -al mkvm* *.patch
-rwxr-xr-x 1 cseader users 2965 Jan  4 21:11 mkvmwmods+patch.sh        
-rwxr-xr-x 1 cseader users 1457 Sep 26 15:47 mkvmwmods.sh
-rw-r--r-- 1 cseader users  650 Jan  4 19:16 vmmon-hostif.patch        
-rw-r--r-- 1 cseader users  650 Jan  4 21:21 vmnet-userif.patch
2) Execute with sudo or login as root

# ./mkvmwmods+patch.sh                                                
It will immediately start the cleanup and then extract the VMware source. If the patch files are in the same directory as shown above, it will patch the source for compiling against kernel 4.9.

3) Now start VMware Workstation.

Enjoy!


USB Communication with Python and PyUSB

Say we have a robot with a USB connection and command documentation. The only thing missing is knowing how to send a command over USB. Let's learn the basic concepts needed for that.

[Image: General Bunny catching Pokemon]

Installing the Library

We'll use the pyusb Python library. On openSUSE we install it from the main RPM repository:

sudo zypper install python-usb

On other systems we can use the pip tool:

pip install --user pyusb

Navigating USB Concepts

To send a command, we need an Endpoint. To get to the endpoint we need to descend down the hierarchy of

  1. Device
  2. Configuration
  3. Interface
  4. Alternate setting
  5. Endpoint

First we import the library.

#!/usr/bin/env python2

import usb.core

The device is identified with a vendor:product pair included in lsusb output.

Bus 002 Device 043: ID 0694:0005 Lego Group

VENDOR_LEGO = 0x0694
PRODUCT_EV3 = 5
device = usb.core.find(idVendor=VENDOR_LEGO, idProduct=PRODUCT_EV3)

A Device may have multiple Configurations, and only one can be active at a time. Most devices have only one. Supporting multiple Configurations is reportedly useful for offering more/less features when more/less power is available. EV3 has only one configuration.

configuration = device.get_active_configuration()

A physical Device may have multiple Interfaces active at a time. A typical example is a scanner-printer combo. An Interface may have multiple Alternate Settings. They are kind of like Configurations, but easier to switch. I don't quite understand this, but they say that if you need Isochronous Endpoints (read: audio or video), you must go to a non-primary Alternate Setting. Anyway, EV3 has only one Interface with one Setting.

INTERFACE_EV3 = 0
SETTING_EV3 = 0
interface = configuration[(INTERFACE_EV3, SETTING_EV3)]

An Interface will typically have multiple Endpoints. Endpoint 0 is reserved for control functions by the USB standard, so we need to use Endpoint 1 here.

The standard distinguishes between input and output endpoints, as well as four transfer types differing in latency and reliability. The nice thing is that the Python library allows us to abstract all that away (unlike cough Ruby cough) and we simply write to a non-control Endpoint.

ENDPOINT_EV3 = 1
endpoint = interface[ENDPOINT_EV3]

# make the robot beep
command = '\x0F\x00\x01\x00\x80\x00\x00\x94\x01\x81\x02\x82\xE8\x03\x82\xE8\x03'
endpoint.write(command)

Other than Robots?

Robots are great fun but unfortunately they do not come bundled with every computer. Do you know of a device that we could use for demonstration purposes? Everyone has a USB keyboard and mouse but I guess the OS will claim them for input and not let you play.


The Full Script
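Putting the snippets above together, the whole thing looks like this (the command bytes are the beep command shown earlier):

#!/usr/bin/env python2

import usb.core

VENDOR_LEGO = 0x0694
PRODUCT_EV3 = 5
INTERFACE_EV3 = 0
SETTING_EV3 = 0
ENDPOINT_EV3 = 1

# Device: identified by the vendor:product pair from lsusb
device = usb.core.find(idVendor=VENDOR_LEGO, idProduct=PRODUCT_EV3)

# Configuration: EV3 has only one, so take the active one
configuration = device.get_active_configuration()

# Interface + Alternate Setting: again only one on the EV3
interface = configuration[(INTERFACE_EV3, SETTING_EV3)]

# Endpoint: 0 is reserved for control, so use endpoint 1
endpoint = interface[ENDPOINT_EV3]

# make the robot beep
command = '\x0F\x00\x01\x00\x80\x00\x00\x94\x01\x81\x02\x82\xE8\x03\x82\xE8\x03'
endpoint.write(command)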


Running for the openSUSE Board

Hi! I'm Sarah Julia Kriesch, 29 years old, trained as a Computer Science Expert for System Integration, and currently studying Computer Science at TH Nürnberg.

 

Introduction and Biography

I am a student at TH Nürnberg, Student Officer for Computer Science (Fachschaft Informatik) and a working student (Admin/DevOps) at ownCloud. I switched from working life to student life this year. I have received the scholarship „Aufstiegsstipendium" (an advancement scholarship) for students with work experience from the BMBF.

I have 4 years of work experience as a Linux System Administrator in Core System Administration (Monitoring) at 1&1 Internet AG/United Internet, and as a (managing) Linux Systems Engineer for MRM systems (SaaS) at BrandMaker. MRM systems are systems for project management in marketing (Marketing Resource Management Systems).

I used SLES/openSUSE for the first time in 2009, during my German vocational training in information technology. In the company I learned to do installations with YaST. I wanted to know more, which was the reason for going to conferences and expos. I tried to educate myself (with community support and vocational school) until the end of my 2nd year. oSC11 was when I first met the openSUSE community. Marco Michna became my mentor in system administration and gave me private lessons until his death. I got a scholarship for further education (a free Linux training) from Heinlein. Both were a good basis for starting a job after the vocational training.

I wasn't allowed to contribute to openSUSE during my last year of training, because my training company didn't want to see that; they searched Google for all my contributions in forums and communities. That's the reason why I use the anonymous nickname „AdaLovelace" at openSUSE. I had to wait until my first job to rejoin openSUSE, where I worked together with contributors/members of Debian, FreeBSD and Fedora.

I started with German translations at openSUSE after half a year of work experience. Most of you know me from oSCs (since 2011): I was a member of the video team and the registration desk, and contributed as a speaker. Since 2013 I have been a wiki maintainer and admin in the German wiki. Since 2014 I have been an active advocate in Germany: I give yearly presentations, organize booths and take part in different open source events. As a GUUG (German Unix User Group) member I asked for a sponsorship for oSC16 and held my first English presentation there, about performance monitoring.

This year I joined the Heroes team and the release management team. I founded the Heroes team with my friends during oSC16 because of the spam in the wiki, and became the coordinator for this project. I am now also Translation Coordinator. I was responsible for the documentation of openSUSE Leap 42.2, so I wrote a lot in the English wiki this year. I was interviewed (as an advocate) by Hacker Public Radio at FOSDEM 2016.

Some of you know me from different mailing lists. That‘s the best way to reach me.

I love openSUSE and pick up tasks if I see something to do where I can help with my sysadmin/coordination/documentation/BPM skills. Free periods (Monday & Tuesday) are reserved for openSUSE contributions. If somebody asks me for technical help (whether programming, infrastructure or communication), I'll try to find a solution. I have learned to work agile (Scrumban in system administration), which I want to bring to my teams in open source projects.

Issues I can see

I want to improve the cooperation between openSUSE and universities/ TH Nürnberg as the founder of the Open Source AG there.

openSUSE should be one of the main distributions on AWS (main AMI).

The openSUSE infrastructure should be easier to access for openSUSE admins, so that we can react to escalations very fast.

Role of the Board

My goal is to have happy customers and developers. That‘s what I want to achieve as an Advocate and (perhaps) as a Board Member in the future.

We should live freedom in the community: everybody should do what they like. I don't like bossing people around, but I want to help the leadership with coordination and solutions where needed.

Why you should vote for me

  •  I am a geek(o).
  •  I like new technologies and learning.
  •  I know most of the important people in the community.
  •  I learned coordination in my first job, which I can use as a Board Member, too.
  •  I am educated by communities.
  •  I have got an education in information technology.
  •  I contribute to different parts of the project (technical and non-technical).
  •  I have got a big open source network (openSUSE, ownCloud, GUUG, …).
  •  I have got international work experience.
  •  I love openSUSE.

 

Aims/ Goals

We should improve openSUSE and hold the position of being one of the best Linux distributions.

I want to be open for cooperation with other Linux/ open source projects.



Silent night - or "how I accidentally disabled email delivery"

My private email domains are hosted on a Linux server where I have shell access (but not as root), which processes mail with procmail, stores it locally and finally forwards it all to a professionally hosted email server with IMAP access and all that blinky stuff.
The setup is slightly convoluted (aka "historically grown") but works well for me.

But the last few days have been quiet on the email front. Not even the notorious spammers spamming my message-ids (how intelligent!) have apparently been trying to contact me. Now that's suspicious, so I decided to look into it.

A quick test mail from my gmail account did not seem to come through. Next, the old test via telnet to port 25... I had to look up the SMTP protocol; it's been a long time since I had to resort to that. First try: greylisting... come back later. Second try:
250 Ok: queued as F117E148DE4
Checking the mails on the server: the test mail did not get through.

Now a few more words on the setup: as I wrote, all mail is forwarded to that professionally hosted IMAP server, where I usually read it with Thunderbird or, if things get bad, with the web frontend.
But since all emails are also stored on the server with shell access, I fetch them from there from time to time via IMAP-over-SSH, using fetchmail and the mailsync tool.

BTW, the fetchmail setup for such a thing is:
poll myacc via shellservername.tld with proto imap:
    plugin "ssh -C %h bin/imapd" auth ssh;
    user seife there is seife here options keep stripcr
    folders Mail/inbox Mail/s3e-spam Mail/thirdfolder
    mda "/usr/bin/procmail -f %F -d %T"
So while trying to check mail, I was regularly running:
fetchmail && mailsync myacc
(first fetchmail, since it passes the mails to procmail, which does the same folder sorting as was already done on the mail server and is much faster than mailsync; mailsync comes second to do the synchronization stuff: delete mails on the server that have been deleted locally, etc.)
All looks normal, apart from no new mails arriving.
Until suddenly I noticed that mailsync was synchronizing a folder named "spamassassin.lock". WTF?

Investigating... On the server, there really is an (empty) mailbox named "Mail/spamassassin.lock".
The next place to look is .procmailrc, and there it is, a rule like:

:0fw: spamassassin.lock
* < 1048576
| $HOME/perl/bin/spamassassin
And since everything in procmail is apparently relative to $MAILDIR by default, the lockfile was placed there. Probably a mailsync process came along exactly at the moment the lockfile existed and persisted it, and after that, no mail ever got past this point.

Solution was easy: remove the lockfile, make sure it does not get re-synchronized with next mailsync run and reconfigure procmail to use $HOME/spamassassin.lock instead. Now the silent times are over, spam is piling up again.