Welcome to Planet openSUSE

This is a feed aggregator that collects what openSUSE contributors are writing in their respective blogs.

To have your blog added to this aggregator, please read the instructions.


Saturday
18 August, 2018


face

While I don’t have a regular need to interact with the network, there are times when I very much need to do so. My first exposure to controlling the network from the terminal was using ifconfig, and I can’t seem to latch onto the “new” ip command in the same way I was able to with the previous one. My problem is that I can never seem to remember which resource it was that I liked best, so I’ve decided to make my own, very basic, resource. I have this need with openSUSE as well as Raspbian Linux. This is a gift to my future self for the next time I need to interact with the network.

Network Control from the Terminal
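The linked page has the longer write-up; as a quick reminder, the ip equivalents of the old ifconfig habits look roughly like this (eth0 is just a placeholder interface name):

# show interfaces and addresses (roughly: ifconfig -a)
ip addr show
# bring an interface up or down (roughly: ifconfig eth0 up / down)
sudo ip link set dev eth0 up
sudo ip link set dev eth0 down
# add or remove an address
sudo ip addr add 192.168.1.10/24 dev eth0
sudo ip addr del 192.168.1.10/24 dev eth0
# show the routing table (roughly: route -n)
ip route show
# show the neighbour table (roughly: arp -a)
ip neigh show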

You’re welcome, Future Self.


Friday
17 August, 2018


face

Dear Tumbleweed users and hackers,

Again I have to span two weeks worth of Tumbleweed snapshots. But I’m somewhat happy about that, as otherwise, I’d only have negative things to report this week. Again, we’re having issues with openQA. Since we published snapshot 0813 to QA, we could not get any reliable test results – and if I can’t get any results I can trust, I prefer not to send out ‘possibly working’ snapshots to you, the users of Tumbleweed. The QA team is busy getting this back under control, but so far we could not get an estimate as to when we can expect our openQA instance to be fully functional again. In order to keep a solid base for debugging, I also decided to stop checking in to Factory and pushing new snapshots to QA: this would only be another moving target, making it even more difficult for the team debugging it.

But, as I said, I’m spanning two weeks again, so there are at least some snapshots I can report on that have been published. Namely the 6 snapshots 0803, 0804, 0806, 0807, 0808 and 0812.

The noteworthy changes delivered in those snapshots are:

  • KDE Plasma 5.13.4
  • Linux kernel 4.17.12 & 4.17.13
  • Tracker 2.1.0
  • PHP 7.2.8
  • Mesa 18.1.5
  • firewalld 0.6.0

While check-ins are stopped, the backlog is obviously growing a bit, which means there are more things you can look forward to receiving ‘soon’:

  • RPM packaging changes: the SUSE specific configuration is being moved to a separate package
  • binutils 2.31
  • ncurses 6.1-20180707: breaks Xen (deprecations added, Xen builds with -Werror)
  • Usage of the new System Role selection instead of the legacy ‘desktop selection’
  • Python 3.7
  • KDE Applications 18.08.0 (release candidate currently in testing, will be delivered when final)
  • glibc 2.28; the split of libxcrypt causes some build failures
  • Linux kernel 4.17.14 & 4.18
  • util-linux 2.32.1 – breaks systemd build (header changes, fixed upstream)

 


face

Previously I wrote about how I became the maintainer of Saka, an open source browser extension that allows users to search through and load open tabs, browsing history and bookmarks. I talked about how I came up with a solution for unit testing the extension to give me confidence with code changes. I also mentioned that there were issues with integration testing that I ran into which made it difficult to test components that relied on browser APIs.

Today I am happy to report that I have found a way to perform integration testing on extensions and want to share it with you in this post. But before we go down that particular rabbit hole, let’s first discuss integration testing and why it is useful for validating software.

The Testing Trophy

https://twitter.com/kentcdodds/status/960723172591992832

Kent C. Dodds has written about something he calls the ‘Testing Trophy’. If you have heard of the testing pyramid before, this is a similar concept — it’s a visualization of how you should prioritize the different types of testing in applications. The title of Kent’s post says it all:

Write tests. Not too many. Mostly integration.

Why does he say this? Kent notes the problem with unit tests is that they only prove individual units work as expected— they do not prove that the units can work together as a whole. Integration testing on the other hand proves that all the components in our project can actually work together as expected.

The Need For Integration Testing

Let’s leave the world of software and look at a real-world example. Suppose we wanted to build a sink for a bathroom. There are 4 components to this sink: the faucet, the basin, the drainage system and the water line. Since the drain and water line come with the building, we only need to worry about adding the faucet and the basin.

We go to the store and pick a faucet and basin that we like. We bring them on site and assemble each individually. We confirm that the faucet and basin each work as expected and that they have no defects. Finally we assemble the full sink — hooking up the faucet to the water line and the basin to the drainage. After all our labor we are excited to see our sink in action so we turn on the faucet and what happens? Well…

Source: X Unit Tests, 0 Integration Tests

Oops! While we did check to see that the faucet and basin work on their own we forgot to check if the two were actually compatible. This is why integration testing is valuable — it proves that different components, modules and libraries work together as expected.

Kent C. Dodds — Write tests. Not too many. Mostly integration.

Ulrika Malmgren — X Unit Tests, 0 Integration Tests

Solution

Since writing my last post I have managed to get Jest working with Preact, the framework used to create Saka. Jest is a modern testing framework that can


Thursday
16 August, 2018


face

I watch a lot of BBC Gardeners’ World, which gives me a lot of inspiration for making changes to my own garden. I tried looking for a free and open source program for designing gardens in the openSUSE package search. The only application that I found was Rosegarden, a MIDI and Audio Sequencer and Notation Editor. Using Google, I found Edraw Max, an all-in-one diagramming application. This includes a floor planner, with templates for garden design. And there are download options for various Linux distributions, including openSUSE.

Installation

You can download a 14-day free trial from the Edraw Max website.

The next thing to do is to use Dolphin and browse to your Downloads folder. Find the zipped package and double click it. Ark will automatically load it. Then click on the Extract button.

Now you can press F4 in Dolphin to open the integrated terminal. If you type in the commands as listed on the Edraw website, the application will install without an issue.
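For those who prefer to do the extraction step in that terminal as well, the sequence looks roughly like the sketch below; the archive and installer names are hypothetical placeholders, and the authoritative commands are the ones listed on the Edraw download page:

cd ~/Downloads
# extract the downloaded archive (placeholder file name)
unzip edrawmax-linux.zip
cd edrawmax-linux
# run the installer script shipped in the archive (placeholder name; follow the Edraw instructions)
./install.sh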

Experience

From the application launcher (start menu), you can now type Edraw Max and launch the application. Go to New and then Floor Plan and click on Garden Design.

On the left side, there is a side pane with a lot of elements that you can use for drawing (see picture below). Start by measuring your garden; with the walls, you can ‘draw’ the borders of your garden. On the right side, there is a side pane where you can adjust the properties of these elements. For instance, you can edit the fill (color) and the border (color) of the element. I didn’t need the other parts of this right side pane (which included shadow, insert picture, layer, hyperlink, attachment and comments).

Now you can make various different garden designs! This is one of the 6 designs that I created for my own garden.

The last feature that I would like to mention is the export options. There are a lot of them, including Jpeg, Tiff, PDF, PS, EPS, Word, PowerPoint, Excel, HTML, SVG and Visio. In the unlicensed version, all exports work except for the Visio export. In the PDF you will see a watermark “Created by Unlicensed Version”.

Conclusion

As this is proprietary software, you will have to pay for it after 14 days. Unfortunately, the price is quite high. As a Linux user, you can only select the Lifetime license, which currently costs $245. It is a very complete package (280 different types of diagrams), but I find the pricing too high for my purposes. And there is no option to pay less. For professional users I can imagine that this price would not be a big issue, as the software will pay for itself when you get paid for making designs. For me personally, it was a very nice experience to use this limited trial and it helped me to think of different ways in which I can redesign my garden.



face

There were two openSUSE Tumbleweed snapshots this past week that mostly focused on language and network packages.

The Linux Kernel also received an update a couple days ago to version 4.17.13.

The packages in the 20180812 Tumbleweed snapshot brought fixes in NetworkManager-applet 1.8.16, which also modernized the package for GTK 3 use in preparation for GTK 4. The free remote desktop protocol client had its third release candidate for freerdp 2.0.0, which improved automatic reconnects, added Wave2 support and fixed automount issues. More network device card IDs for the Intel 9000 series were added in kernel 4.17.13. A jump from libstorage-ng 4.1.0 to version 4.1.10 brought several translations and added unit tests for probing Xen xvd devices. Two Common Vulnerabilities and Exposures fixes were made with the update to postgresql 10.5. Several rubygem packages were updated to version 5.2.1, including rubygem-rails 5.2.1, which makes the master.key file read-only for the owner upon generation on POSIX-compliant systems. Processing XML and HTML with python-lxml 4.2.4 should have fewer crashes thanks to a fix of sporadic crashes during garbage collection when parse-time schema validation is used and the parser participates in a reference cycle. Several YaST packages received updates, including a new ServiceWidget to manage the service status with yast2-ftp-server 4.1.3 as well as with the yast2-http-server, yast2-slp-server and yast2-squid 4.1.0 versions.

The snapshot from 20180808 brought the firewalld 0.6.0 version, which switched back to an ‘iptables’ backend as the default; “loads of new services” were added in the newer version, including firewall-config adding an ipv6-icmp entry to the protocol dropdown box. The Linux Filesystem in Userspace interface, fuse 2.9.8, provided a security update for systems where SELinux is active. The security update stops unprivileged users from specifying the allow_other option when it is forbidden in /etc/fuse.conf. The snapshot also updated yast2-network 4.1.5, which fixes the networking AutoYaST schema.

Snapshot 20180808 recorded a stable rating of 95 on the snapshot reviewer and 20180812 is trending at a 96 rating.


Wednesday
15 August, 2018


face

People of the Builds! Another Sprint is over and here is what the OBS frontend team has achieved in the last two weeks (2018-07-30 to 2018-08-09). :smile: Release of Azure Cloud Upload Feature The beginning of August was hot in Germany! And while talking about weather, the cloud upload feature comes to our mind. :sunny: The OBS team released the Azure Cloud Upload, yes! :cloud: Just have a look at the blog post where it...


face

After today’s deployment we faced a downtime of our reference server. We want to give you some insight into what happened. What Happened? At 10:10, we deployed a new version to our reference server build.opensuse.org. Right after the deployment, the application didn’t boot anymore and displayed our error page. We immediately recognized that a change in the configuration was causing the issues. After fixing the configuration, we were back online at 10:13. Why Did It...


Sunday
12 August, 2018


face

Friday, August 10. I got word from my boss that our company was invited to give a workshop presentation about Linux at one of the colleges in Bekasi, STMIK Bani Saleh Bekasi. STMIK Bani Saleh Bekasi is the campus where I am studying. Indeed, my campus has a relationship with our company. They are hosting their web … Continue reading Introducing openSUSE to Vocational High School Students

The post Introducing openSUSE to Vocational High School Students appeared first on dhenandi.com.


Saturday
11 August, 2018


face

Dual Head or Multi Monitor

If you have two monitors attached to your computer, the setup is called dual head; the generic term for any number of monitors is multi-monitor.

This setup is useful if a single monitor is not enough for you to see all needed windows at once. But in this setup both monitors can be used by only one person at a time.

Dual Seat or Multi Seat

If you have two monitors, what about attaching one more keyboard and mouse, “splitting” the computer, and having independent sessions for each user? That setup is called dual seat, or multi seat in general.

Linux has been a multi-user system from the very beginning, but normally these users either work remotely or they simply share one seat and need to coordinate who will use the computer when.

Hardware

For this multi seat solution you need a separate graphics adapter for each seat. Fortunately, to save some money, you can combine discrete graphics cards with an integrated one.

If you use an integrated card you might need to enable the multi graphics support in BIOS because usually when a discrete graphics card is found the integrated one is automatically disabled.
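A quick sanity check, before and after changing that setting, is to list the display adapters the kernel actually sees; a generic sketch:

# list all display adapters currently visible to the kernel
lspci | grep -iE 'vga|3d|display'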

BIOS settings

:information_source: This option is vendor dependent; check your mainboard manual.

Linux

I wanted to configure a multi seat in the past, but it was really complicated. I would have to tweak the X.org config manually and there were lots of hacks to make it work.

But actually it turned out that using a modern Linux distribution like openSUSE Leap 15.0 makes this very easy!

Using the loginctl Tool

As in almost all modern Linux distributions, in openSUSE Leap 15.0 the console is managed by the systemd login manager. To interact with it from the command line you can use a tool called loginctl.

It can handle sessions, seats and users. Let’s see which seats are defined by default:

# loginctl list-seats
SEAT            
seat0           

1 seats listed.

Now we can list all hardware devices assigned to the default seat:

# loginctl seat-status seat0 
seat0
	Sessions: *2
	 Devices:
		  ├─/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input5
		  │ input:input5 "Power Button"
		  ├─/sys/device…LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input4
		  │ input:input4 "Power Button"
		  ├─/sys/devices/pci0000:00/0000:00:01.3/0000:02:00.0/usb1
		  │ usb:usb1
		  ├─/sys/devices/pci0000:00/0000:00:01.3/0000:02:00.0/usb2
		  │ usb:usb2
		  ├─/sys/device…2:00.1/ata2/host1/target1:0:0/1:0:0:0/block/sr0
		  │ block:sr0
		  ├─/sys/device…ata2/host1/target1:0:0/1:0:0:0/scsi_generic/sg1
		  │ scsi_generic:sg1
		  ├─/sys/device…1.3/0000:02:00.2/0000:03:04.0/0000:05:00.0/usb4
		  │ usb:usb4
		  ├─/sys/devices/pci0000:00/0000:00:03.1/0000:09:00.0/drm/card0
		  │ [MASTER] drm:card0
		  │ ├─/sys/device…000:00:03.1/0000:09:00.0/drm/card0/card0-DP-1
		  │ │ [MASTER] drm:card0-DP-1
		  │ ├─/sys/device…:00:03.1/0000:09:00 

Thursday
09 August, 2018


face

Dear Community,

It has been more than a year since the openSUSE community started the Kubic Project, and it’s worth looking back over the last months, evaluating where we’ve succeeded and where we haven’t, and sharing with you all our plans for the future.

A stable base for the future

Much of our success has been in the area generally referred to as MicroOS, the part of the Kubic stack that provides a stable operating system that is atomically updated for running containers.

Not only is Kubic MicroOS now a fully integrated part of the openSUSE Tumbleweed release process, but our Transactional Update stack has also been ported to regular openSUSE Tumbleweed and Leap.

Based on the community’s feedback, the new System Role has been further refined and now includes fully automated updates out of the box.

This collaboration is continuing, with many minor changes to the regular openSUSE installation process coming soon based on lessons learned with tuning the installation process in Kubic.

Reviewing our initial premise

We haven’t just been busy on the basesystem. Our efforts with Rootless Containers continue, and you can now use the “Docker-alternative” Podman CRI-O in both Kubic and regular openSUSE. But when considering the Initial Premise of the Kubic project, it’s probably safe to say we’re not where we hoped to be by now.

When we started in May 2017, SUSE were already well underway developing their first version of SUSE CaaS Platform. Alongside the general goal of making Kubic the easiest-to-live-with community Kubernetes distribution, a big part of the initial premise was to establish itself as a ‘close upstream’ community for SUSE CaaS Platform. In order to set things up in a way similar to the relationship between openSUSE Tumbleweed and SUSE Linux Enterprise, the plan was to rebase the SUSE CaaS stack, including the Velum cluster bootstrapping tool, onto the shared Kubic/Tumbleweed codebase. After a year, this goal has still proven elusive.

This is for a number of reasons, including the wonderfully fast pace of change of Tumbleweed and aspects of the initial design of SUSE CaaS Platform which was conceptualized for the needs of SUSE’s commercial customers, with the needs of developing within Kubic being an afterthought.

Obviously, this status quo has been tricky for all of us involved in Kubic, with the collective feeling being a desire to simultaneously close the gap between the Kubic and SUSE CaaS codebases, while also keeping up with the ever evolving container landscape, especially surrounding Kubernetes.

The world has shifted

Mentioning Kubernetes, it’s worth considering just how much has changed with Kubernetes upstream in the 2 years since SUSE CaaS Platform development began. Back then there was no common tool for setting up and configuring a Kubernetes cluster. This was one of the primary motivators for creating Velum, a key part of the SUSE CaaS stack. However, these days there are multiple tools, including the increasingly pervasive kubeadm, which is used both standalone




Wednesday
08 August, 2018


face

openSUSE Leap minor versions have traditionally received updates for about 18 months, but the lifetime of the Leap 42.3 minor version is being extended.

The last minor version of the Leap 42 series was scheduled to be maintained until January 2019, but that has changed thanks to SUSE committing to additional months of maintenance and security updates. Leap 42.3 is based on SUSE Linux Enterprise Server 12 Service Pack (SP) 3  and SUSE has agreed to keep publishing updates for Leap 42.3 until June 2019.

This means the extended End of Life for Leap 42.3 will increase the total lifetime of the Leap 42 series to 44 months.

Users of the openSUSE Leap 42 series are encouraged to use the additional months to prepare the upgrade to Leap 15, which was released in May.

Those who can’t migrate production servers to the new major version in time may want to take a (commercial) SLE subscription into consideration, which provides an even longer lifecycle. The proximity of Leap 42’s base system to SLE 12 keeps the technical effort to migrate workflows from Leap to SLE low.

 


Tuesday
07 August, 2018


face

zypper-upgraderepo-plugin adds to zypper the ability to check the repository URLs either for the current version or for the next release, and to upgrade them all at once in order to upgrade the whole system from the command line.

This tool started as a personal project one day when I needed to upgrade my distro more quickly than with a traditional ISO image. Zypper was the right tool, but I got a little stuck when I had to handle repositories: some of them were not yet upgraded, others had slightly changed URL paths.

For anyone who knows Bash, the problem is not exactly a nightmare, and that’s how I handled it until I needed to go a step further.

The result is the zypper-upgraderepo Ruby gem, which can be integrated as a zypper plugin simply by installing the zypper-upgraderepo-plugin package.

Installing zypper-upgraderepo-plugin

Installing zypper-upgraderepo-plugin is as easy as:

  1. Add my repo:
    sudo zypper ar https://download.opensuse.org/repositories/home:/FabioMux/openSUSE_Leap_42.3/home:FabioMux.repo
  2. Install the package:
    sudo zypper in zypper-upgraderepo-plugin

How to use it

Sometimes we want to know the status of the current repositories. The command zypper ref does a similar job, but it is primarily intended to refresh the repository data, which slows the whole process down a bit.
Instead we can type:

$ zypper upgraderepo --check-current

To know whether or not all the available repositories are upgrade-ready:

$ zypper upgraderepo --check-next


As you can see from the example above, all the enabled repositories are ready to upgrade except for the OSS repo, which has a slightly different URL.

# The URL used in the openSUSE Leap 42.3
http://download.opensuse.org/distribution/leap/42.3/repo/oss/suse/
# The suggested one for openSUSE Leap 15.0
http://download.opensuse.org/distribution/leap/15.0/repo/oss/

Let’s try again, overriding the URL without making any real change:

$ zypper upgraderepo --check-next \
--override-url 8,http://download.opensuse.org/distribution/leap/15.0/repo/oss/

Once everything is OK, and after performing a backup that includes all the repositories, it’s time to upgrade all the repositories at once:

$ sudo zypper upgraderepo --upgrade \
--override-url 8,http://download.opensuse.org/distribution/leap/15.0/repo/oss/

Conclusions

That’s all for the basic commands. More information is available on the wiki page of the zypper-upgraderepo gem, where all the commands are described for standalone use of the gem; after installing the plugin they are also available as zypper subcommands, as shown above. A man page is also available via

$ zypper help upgraderepo


Saturday
04 August, 2018


face

Dear Tumbleweed users and hackers,

Week 31 was very slow when looking at the number of snapshots delivered. Last Friday, an openQA update was deployed which contained quite a bit of rework under the hood. Despite quite a bit of testing upfront, there were some corner cases that made snapshots turn red. So rather than releasing untested snapshots, we spent the time getting the openQA code fixed again and adjusting tests where needed, so that in the end we can at least say again, with confidence, that we trust the test results. That’s why only 0731 has been released during this week.

That snapshot brought you, amongst others, these updates:

  • GStreamer 1.14.2
  • fwupd 1.1.0
  • Linux kernel 4.17.11
  • LLVM 6.0.1

Staging projects were also slowed down during this week due to openQA. So a good bunch of what was in the makes last week is still there:

  • KDE Plasma 5.13.4
  • Linux kernel 4.17.12
  • RPM packaging changes: the SUSE specific configuration is being moved to a separate package, theoretically making maintenance of the rpm package easier.
  • binutils 2.31: breaks grub2 and qemu
  • ncurses 6.1-20180707: breaks Xen (deprecations added, Xen builds with -Werror)
  • Usage of the new System Role selection instead of the legacy ‘desktop selection’
  • Python 3.7
  • KDE Applications 18.08.0 (release candidate currently in testing, will be delivered when final)

face

Over in this issue we are discussing how to add debug logging for librsvg.

A popular way to add logging to Rust code is to use the log crate. This lets you sprinkle simple messages in your code:

error!("something bad happened: {}", foo);
debug!("a debug message");

However, the log crate is just a facade, and by default the messages do not get emitted anywhere. The calling code has to set up a logger. Crates like env_logger let one set up a logger, during program initialization, that gets configured through an environment variable.

And this is a problem for librsvg: we are not the program's initialization! Librsvg is a library; it doesn't have a main() function. And since most of the calling code is not Rust, we can't assume that they can call code that can initialize the logging framework.

Why not use glib's logging stuff?

Currently this is a bit clunky to use from Rust, since glib's structured logging functions are not bound yet in glib-rs. Maybe it would be good to bind them and get this over with.

What user experience do we want?

In the past, what has worked well for me to do logging from libraries is to allow the user to set an environment variable to control the logging, or to drop a log configuration file in their $HOME. The former works well when the user is in control of running the program that will print the logs; the latter is useful when the user is not directly in control, like for gnome-shell, which gets launched through a lot of magic during session startup.

For librsvg, it's probably enough to just use an environment variable. Set RSVG_LOG=parse_errors, run your program, and get useful output. Push button, receive bacon.
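If that design lands, enabling it from the shell would look something like the sketch below; rsvg-convert is just one example consumer of librsvg, and RSVG_LOG is the variable name proposed above rather than something librsvg supports today:

# hypothetical: enable parse-error logging for a single run
RSVG_LOG=parse_errors rsvg-convert drawing.svg -o drawing.png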

Other options in Rust?

There is a slog crate which looks promising. Instead of using context-less macros which depend on a single global logger, it provides logging macros to which you pass a logger object.

For librsvg, this means that the basic RsvgHandle could create its own logger, based on an environment variable or whatever, and pass it around to all its child functions for when they need to log something.

Slog supports structured logging, and seems to have some fancy output modes. We'll see.


Thursday
02 August, 2018


face

People of the Builds! Another Sprint is over and here is what the OBS frontend team has achieved in the last two weeks (2018-07-15 to 2018-07-26). :wink: Release of OBS 2.9.4 There were two ugly security bugs :beetle: in our official OBS release. We’ve fixed them now with the help of Markus Hüwe and released a patched version last week. If you host your own OBS instance, please check out the most recent OBS version and...


face

We are proud to announce that the new Azure cloud upload feature has just been released. You may be familiar with the previous EC2 Cloud Upload feature. This time, it is the turn of Azure images. The new feature we are presenting allows you to upload Azure images created in OBS to Microsoft Azure. Uploading an Azure Image In the package overview of an image (your own or another you find in OBS), you can...


Wednesday
01 August, 2018


face

If you search online for “Best Panorama Stitching software 2018”, chances are very high that you will find an article that mentions Hugin (1, 2, 3). The reason is that this is one of the best programs to stitch photos on Linux, MacOS and Windows. And it is free and open source! The criticism is that it’s aimed at professional users and that it can be a bit overwhelming for new users. This article will provide you with a rundown on how to use Hugin. I have used openSUSE Leap 15 as my operating system, but this tutorial will work across distributions and operating systems.
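On openSUSE, Hugin is available from the standard repositories, so installing it is a one-liner; other distributions ship an equivalently named package:

sudo zypper install hugin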

Tutorial

  1. Start the Hugin Panorama Creator. The Assistant tab is the first thing that you will see. Click on Load images.
  2. Select the images that you want to use to create a panorama.
  3. Next, click the Align button to auto-align the pictures.
  4. Now go to the Projection tab. I recommend that you try different ways of projection. I mostly use the options Rectilinear, Cylindrical or Architectural. This really can’t go wrong, because you can always go back to the default, which is Rectilinear.
  5. Then go to the Move/Drag tab. I would recommend only using this feature when the picture has an incorrect angle or is really off-center. It never hurts to try, but be aware that the buttons (Center / Fit / Straighten) might deform the picture in a way that you don’t like. In that case, it is better to start all over again (click on File –> New). There is no Undo button! In the example below, I use this feature to correct the angle of the panorama.
  6. Now it is time to perfect your panorama. Go to the Crop tab. I prefer to use the sliders inside the picture to determine the correct borders (up, down, left and right). It gives you a lot of control over the final picture. You can also use the Autocrop button, but where is the fun in that?
  7. After all necessary adjustments are made, go back to the Assistant tab. Now click on the Create panorama button.
  8. Now you will be asked how to adjust for the differences in exposure. I will always start with either the first option (Exposure corrected, low dynamic range) or the second option (Exposure fused from stacks). This depends on my visual inspection of the photos. If I see a lot of differences (some are very bright, some are darker), I tend to choose the second option. If the differences are not so pronounced, I usually stick with the first option.
  9. You will be asked to save your project and save the picture. The project will always be saved as a .pto file. The picture can be saved in different formats, but I leave it on Tiff. After the panorama is created, and if I am happy with the result, I use GIMP to convert the panorama to the Jpeg format.
  10. Hugin will now create the panorama. This can take

Monday
30 July, 2018


face

Turtle YouTube

I have noticed as of late how clunky YouTube has become. The “dynamically loading content”, which I don’t remember asking for, has these weird pulsating boxes and you have to wait longer to get what you want. In what world does slower loading text make sense?

A Solution Presents Itself

I was listening to one of my favorite podcasts, Linux Action News, and on episode 64, one of the items discussed was this YouTube Classic extension and how having it installed improves load times. I was interested in trying it. I installed the extension through the Add-on manager and boom, done, nothing else to do but enjoy the reduced wait time with YouTube. Visually, it takes YouTube back a few years to, what I consider, a much better YouTube experience.


Interestingly, there is a Chrome extension but Google has decided to remove it from the Google Chrome Web Store. One could draw the conclusion that Google prefers a less efficient YouTube experience. It is still possible to “side load” the extension. You can read more here on the GitHub page if you are interested. I have been drifting away from using Chrome so it is not really a priority to get it working.

Final Thoughts

Although I have increased my usage of the Falkon browser, I still prefer to use Firefox for YouTube because of the Plasma Integration plus KDE Connect, which allows me to start and stop YouTube from my phone or from my not-so-fancy Bluetooth headphones (no KDE Connect necessary for that).

If you are annoyed by pulsating boxes of dynamically loading content and want a more zippy, static feel, this is most certainly the extension to have in Firefox. It would be fantastic if I could fix all websites’ “dynamic content”, but that is not available… yet…

Further Reading

Linux Action News, episode 64

YouTube Classic Extension

YouTube Classic Extension GitHub page

Falkon Browser Home Page

 


Sunday
29 July, 2018


face

Timing and side-channels are not normally considered side-effects, meaning compilers and CPUs feel free to do whatever they want. And they do. Unfortunately, I consider leaking my passwords to remote attackers a pretty significant side-effect... Imagine a simple function.

void handle(char secret) {}
That's obviously safe, right? And now
void handle(char secret) { int i; for (i=0; i<secret*1000000; i++) ; }
That's obviously a bad idea, because now secret is exposed via timing. Timing used to be the only side-channel for a while, but then caches were invented. These days,
static char font[16*256]; void handle(char secret) { font[secret*16]; }
may be a bad idea. But C has not changed; it knows nothing about caches, and nothing about side-channels. Caches are old news. But today we have complex branch predictors and speculative execution. It is called Spectre. This is a bad idea:
static char font[16*256]; void handle(char secret) { if (0) font[secret*16]; }
as is this:
static char small[16], big[256]; void foo(int untrusted) { if (untrusted<16) big[small[untrusted]]; }
It's a CPU bug... unfortunately it is tricky to fix, and the bug only affects caches / timing, so it "does not exist" as far as C is concerned. The canonical fix is something like
static char small[16], big[256]; void foo(int untrusted) { if (untrusted<16) { asm volatile("lfence"); big[small[untrusted]]; }}
which is okay as long as the compiler compiles it the obvious way. But again, the compiler knows nothing about caches / side channels, so it may do something unexpected and re-introduce the bug. Unfortunately, it seems there's not even agreement on whose bug it is. Is it time C was extended to know about side-channels? What about
void handle(int please_do_not_leak_this secret) {}
? Do we need a new language to handle modern (speculative, multi-core, fast, side-channels all around) CPUs?
(Now, you may say that it is impossible to eliminate all the side-channels. I believe eliminating most of them is quite possible, if we are willing to live with ... a huge slowdown. You can store each variable twice, to at least detect rowhammer. Caches can still be disabled -- the row buffer in DRAM will be more problematic, and if you disable hyperthreading and make every second instruction an lfence, you can get rid of Spectre-like problems. You may get a 100x? 1000x? slowdown, but if that's applied only to software that needs it, it may be acceptable. You probably want your editors & mail readers protected. You probably don't need gcc to be protected. No, running a modern web browser does not look sustainable).

Nathan Wolf: FreeCAD First Timer

00:47 UTC

face

FreeCAD Logo

I have had a need for doing 3D CAD work in Linux, but I don’t have the budget to invest in any high-dollar software to hobby around with. FreeCAD fit the bill. It’s a free and open source 3D CAD modeler, but it does a bunch of other things too. Although I could probably run something through Wine or in a virtual machine, I don’t want to get further locked into proprietary software that doesn’t support Linux. The CAD package I am most familiar with is PTC Creo, formerly Pro/Engineer and Wildfire. It is a fine piece of software, and I am very adept at creating whatever I can dream up with it.

The problem with Creo is, even if I could afford to purchase a license, the software doesn’t run on Linux. PTC used to support Linux but does not any longer, which is very unfortunate.

After trying a few things, I have settled on FreeCAD as my open source software of choice. At the time of writing, I am running FreeCAD v0.17. FreeCAD is written in C++ and Python and is extensible, so it allows you to create functions in Python. This is a 3D CAD, BIM, FEM modeler. At this time, I only do Mechanical CAD (Computer Aided Drafting) work with it but am interested in the BIM (Building Information Modeling) and FEM (Finite Element Method) modules as well. More on those at another time.

Installing FreeCAD in openSUSE can be done using the 1-Click method or, my favorite method, through the terminal:

sudo zypper install FreeCAD

I wasn’t completely vanilla in using FreeCAD, as I had used it as a part viewer for years, but I hadn’t taken the time to create any parts in it. After watching a video from Sudo Sergeant on using FreeCAD, I was able to see how to use the different functions. I was rather inspired to try it myself.

Running FreeCAD for the first time you are welcomed with a pleasant start center. Since I am most focused on part design, that was the Workbench that I selected.


My only initial displeasure with the usage of FreeCAD was the color gradient of the background.


I am quite sure it is the most acceptable default for most but it is a bit too bright for me. I like everything to be dark. This is easy enough to fix by opening the preferences dialog box.

Edit > Preferences…


Select Display then the Colors Tab where you can change your gradient however you like.


The Combo View pane is really nice at guiding you through the process of designing parts the way FreeCAD wants you to do it. I find this to be just a bit better than how Creo does it. Unless you KNOW how to use Creo, you won’t know where to get started. FreeCAD, on the other hand, will begin guiding you: after you select Create body you can begin with your


Saturday
28 July, 2018


face

Part of this post is about openQA, openSUSE’s automated tool which tests a number of different scenarios, from installation to the behavior of the different desktop environments, plus testing the freshest code from KDE git. Recently, thanks to KDE team member Fabian Vogt, there has been important progress when testing KDE software.

Testing the Dolphin file manager

Those who use KDE software, either in Plasma or in other desktop environments, have at least heard of Dolphin, the powerful file manager that is part of KDE Applications (by the way, have you checked out the recent beta yet?). A couple of weeks ago, in the #opensuse-kde channel, Fabian mentioned the possibility of writing new openQA tests: although there are many tests already, the desktop ones are all but comprehensive given the many possible scenarios. After some discussion back and forth, the consensus was to test Dolphin, as it’s very commonly used in many everyday tasks.

The second part was thinking up what to test. Again, the idea was to focus on common tasks: navigating and creating folders, creating and removing files, and adding elements to Places (the quick-access bar on the left). The first idea also contemplated testing drag-and-drop, but unfortunately such actions aren’t yet supported by openQA, so they were put in the backlog. With these goals in mind, Fabian created a series of tests. This is what is being tested:

  • Launching dolphin and creating a folder
  • Navigating in the folder and creating a text file via context menu (“Create New…”)
  • Testing navigation via the breadcrumb
  • Adding the newly-added folder via Places and ensuring it shows up in other applications
  • Removing the folder and the added place

You can see the latest results of the tests for Krypton on openQA. Of course, this does not apply to only the latest unstable code: it is also working for the main distributions.

What does this mean in practice? openQA is flexible enough to allow even complex tests with multiple operations in succession, and these tests are very useful in catching bugs early, as has already happened in the past. That doesn’t mean you should be complacent: keep filing those bugs, either to the distro for packaging issues, or to KDE for issues related to the software.

New Krypton links

A small but welcome addition to the Krypton live images is that the latest build is now automatically symlinked to a fixed name (openSUSE_Krypton.$arch.iso, where $arch is either i686 or x86_64). This makes it possible to link to the images directly whenever required: before this change, any direct link would break due to the ISO changing name at every rebuild. Doing the same for Argon is on the horizon, but requires some more work before it will happen.

KDE Applications 18.08 beta enters KDE:Applications

As manual testing is still important, the openSUSE KDE:Applications repository now contains the KDE Applications 18.08 beta. If you’re willing to help test the beta, please download the package and file


Friday
27 July, 2018


face

Blood Moon of July 2018

Tonight, I spent some time on the balcony with my SLR, a glass of Shiraz and the most significant lunar eclipse of the century.


face

Happy SysAdmin day!

Introduction

This is a small write-up of our ongoing effort to move our infrastructure to modern technologies like Kubernetes. It all started a bit before the Hack Week 17 with the microservices and serverless for the openSUSE.org infrastructure project. As it mentions, the trigger behind it is that our infrastructure is getting bigger and more complicated, so the need to migrate to a better solution is also increasing. Docker containers and Kubernetes (for container orchestration) seemed like the proper ones, so after reading tutorials and docs, it was time to get our hands dirty!

Installing and experimenting with Kubernetes / CaaSP

We started by installing the SUSE CaaSP product on the internal Heroes VLAN. It provides an additional admin node which sets up the Kubernetes cluster. The cluster is not that big for now. It consists of the admin node, three kube-master nodes behind a load balancer and four kube-minions (workers). The product was in version 2 when we started, but version 3 became available which is the one we're using right now. It worked flawlessly, and we were even able to install the Kubernetes dashboard on top of it, which was our first Kubernetes-hosted webapp.

Since the containers inside Kubernetes are on their own internal network, we also needed a load balancer to expose the services to our VPN. Thus, we experimented with Ingress for load balancing of the applications deployed inside Kubernetes, also successfully. A lot of experiments around deployments, scaling and permissions took place afterwards, to get us more familiarized with the new concepts, which of course ended up in us destroying our cluster multiple times. We were surprised, though, to see the self-healing mechanisms taking over.
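For the curious, the kind of kubectl commands involved in that sort of experimentation look roughly like the sketch below; this is a generic illustration rather than the Heroes team's actual runbook, and my-app is a hypothetical deployment name:

# list the cluster nodes (masters and workers) and their status
kubectl get nodes -o wide
# see what is running across all namespaces
kubectl get pods --all-namespaces
# scale a test deployment and watch the scheduler and self-healing react
kubectl scale deployment my-app --replicas=3
kubectl get events --watch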

Although the experiments took place only with static pages so far, it still allowed us to learn a lot about Docker itself, e.g. how to create our own images and deploy them to our cluster. It's also worth mentioning the amazing kctl tool; just take a look at its README to realize how much more useful it is compared to the official kubectl.

Time to move to the next layer.

Installing and experimenting with Cloud Foundry / CAP

The next step was to install yet another SUSE product, this time the Cloud Application Platform, which offers a Platform as a Service solution based on the software named Cloud Foundry. The first blocker was met here, though. CAP requires a working Kubernetes storageclass, which means that we needed to have a persistent storage backend. A good solution would be to use a distributed filesystem like Ceph, but due to the time limitations of Hack Week, we decided to go with a simpler solution for now, and the simplest was an NFS server. The CAP installation was smooth from that point, and we managed to log in to our Cloud Foundry installation via the command line tool, as well as via the Stratos webUI. A wildcard domain *.cf.mydomain.tld was also needed here.
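Logging in with the command line tool and pushing a first test application looks roughly like this; a sketch only, with the API endpoint derived from the wildcard domain mentioned above and my-static-site as a hypothetical app name:

# target the Cloud Foundry API behind the wildcard domain
cf login -a https://api.cf.mydomain.tld
# push a small static site as a first test application
cf push my-static-site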

The idea was quite straightforward


Michal Čihař: Weblate 3.1

15:30 UTC

face

Weblate 3.1 has been released today. It contains mostly bug fixes, but there are some new features as well, for example support for Amazon Translate.

Full list of changes:

  • Upgrades from older version than 3.0.1 are not supported.
  • Allow to override default commit messages from settings.
  • Improve webhooks compatibility with self hosted environments.
  • Added support for Amazon Translate.
  • Compatibility with Django 2.1.
  • Django system checks are now used to diagnose problems with installation.
  • Removed support for the soon-to-be-shut-down Libravatar service.
  • New addon to mark unchanged translations as needing edit.
  • Add support for jumping to specific location while translating.
  • Downloaded translations can now be customized.
  • Improved calculation of string similarity in translation memory matches.
  • Added support for signing Git commits with GnuPG.

Update:

Weblate 3.1.1 was released as well fixing test suite failure on some setups:

  • Fix testsuite failure on some setup.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate


face

Dear Tumbleweed users and hackers,

Again I’m spanning two weeks, mostly because most of the snapshots were canceled by openQA (some due to real issues found, some because OBS was too fast and testing did not complete before the next snapshot was ready). So, I just group two weeks together, which gives a total of 8 snapshots: 0712, 0714, 0719, 0720, 0721, 0722, 0723 and 0726.

The snapshots included those changes:

  • File 5.33 – beware: some scripts might misbehave if they use file to check whether a binary is PIE-enabled and the script queries for ‘PIE shared object’. Since file 5.33, executables are reported differently from libraries.
  • Mutter gained pipewire support
  • KDE Plasma 5.13.3
  • KDE Applications 18.04.3
  • KDE Frameworks 5.48.0
  • Linux kernel 4.17.5, 4.17.6, 4.17.7 & 4.17.9
  • LibreOffice 6.1.0 final
  • Mesa 18.1.4
  • PulseAudio 12.2
  • X.Org 1.20.0
  • SDDM 0.18.0: note: please ensure to grab the version in the Update channel until snapshot >= 0727 is out. Otherwise, you might experience issues on logout/login
  • Poppler 0.66
  • GCC 8.2RC1

The staging projects are currently busy with these prepared changes:

  • A new Linux kernel (of course): 4.17.10
  • RPM packaging changes: the SUSE specific configuration is being moved to a separate package, theoretically making maintenance of the rpm package easier.
  • binutils 2.31: breaks grub2 and qemu
  • ncurses 6.1-20180707: breaks Xen (deprecations added, Xen builds with -Werror)
  • Usage of the new System Role selection instead of the legacy ‘desktop selection’
  • Python 3.7

Some of those things might take a moment until they reach Tumbleweed (e.g. python 3.7).


Thursday
26 July, 2018


Michael Meeks: 2018-07-26 Thursday.

21:00 UTC

face
  • Mail chew, strained away at JQuery with George - eventually finding that we need a tabindex to make something focus-able; interesting - progress at least.
  • Partner call, sales call, assembled a screen stand with George - hopefully gives one better posture; let's see. Still far too hot.

face

Several packages were updated in openSUSE Tumbleweed snapshots this week and developers will notice the snapshots are reported to be extremely stable.

Wireshark, sysdig, GNOME’s evolution, KDE’s Frameworks and Applications, Ceph, vim and python-setuptools were just a few of the many packages that arrived in Tumbleweed this week.

Wireshark 2.6.2 received several Common Vulnerabilities and Exposures (CVE) updates in snapshot 20180723, which included a fix for an HTTP2 dissector crash. The sysdig tool for deep system visibility with native support for containers had a minor update to 0.22.0 and added support for additional custom container types alongside Docker. The configurable text editor vim was updated to version 8.1.0200, and poppler 0.66.0 fixed compilations with some strict compilers when rendering PDFs. Google’s RE2 package, which is a fast, safe, thread-friendly alternative to backtracking regular expression engines like those used in PCRE, Perl, and Python, simplified the spec file and fixed a Deterministic Finite Automaton (DFA) out-of-memory error. Cups-filters 1.20.4 made some ipp and ipps changes and also removed support for hardware-implemented reversing of page order in PostScript printers for some rare printers.

Snapshot 20180722 brought a new feature for bubblewrap 0.3.0: bwrap now supports being invoked recursively when user namespaces are enabled and the outer container manager allows it. The xf86-input-mouse 1.9.3 and xf86-video-r128 6.11.0 packages fixed compatibility with and builds against xorg-server 1.20 respectively; xorg-server 1.20 was also updated in the snapshot and provides quite a few new features, including input grab and tablet support in Xwayland, as well as an update to RANDR 1.6 that enables leasing RANDR resources to a client for its exclusive use. Both libva and libva-gl had a version bump to 2.2.0, which added support for High Efficiency Video Coding range extension decoding.

KDE Frameworks 5.48.0 arrived in snapshot 20180721. The KDE updates added several new features including introducing an ActionToolbar in Kirigami for mobile developers and implementing support for the voice and call interfaces with the ModemManagerQt package. The BluezQt package in frameworks also updates D-Bus xml files to use “Out*” for signal type Qt annotations. Other packages in the snapshot that were not KDE related were efivar, which moved back from version 36 to 35, python-cryptography 2.3, python-pycurl 7.43.0.2 and the cross-platform desktop calculator qalculate 2.6.1.

Beginning the week, several project packages were updated in snapshot 20180719. KDE Applications 18.04.3 had about 20 recorded bug fixes with improvements to Kontact, Ark, Cantor, Dolphin, Gwenview, and more. Mozilla Thunderbird 52.9.1 changed a prompt for compacting IMAP folders and fixed a few issues, such as deleting or detaching attachments corrupting messages under certain circumstances, which was only found in the 52.9.0 version.

The ceph update in the snapshot increased the memory constraint for build workers after builds started failing on workers with exactly 8G of RAM. The 3D


Wednesday
25 July, 2018


Michael Meeks: 2018-07-25 Wednesday.

21:00 UTC

face
  • Mail, admin. Talked through researching data and writing logs with George; poked at some JQuery-ness. Bruce & Anne, S,C,A&J over - out to Prezzo for a lovely birthday lunch.
  • Back, more hacking, code review; ESC call, sync with Dennis, dinner. Very hot ...

face

A little bit less than a month and I will be at Akademy again, KDE's annual conference. This is the place where you can meet one of the most amazing open source communities. To me it's kind of my home community. This is where I have learned a lot about open source, where I contributed tons of code and other work, where I met a lot of awesome friends. I have been to most Akademy events, including the first KDE conference "Kastle" in 2003. But I missed the one last year. I'm more than happy to be back this year in Vienna on August 11.


Akademy will start with the conference on the weekend, August 11-12. I was in the program committee this year and I think we have put together an exciting program. You will see what's going on in KDE, what the community is doing on their goals of privacy, community onboarding, and productivity, hear about the activities of KDE e.V., get to know some of the students who work as part of one of the mentoring programs such as the Google Summer of Code, and much more.

It's a special honor to me to present the Akademy Awards this year together with my fellow award winners from last year. It was hard to choose because there are so many people who do great stuff in KDE. But we have identified a set of people who definitely deserve this prize. Join us at the award ceremony to find out who they are.

Being at Akademy is always special. It's such an amazing group of people held together by a common idea, culture, and passion. You could never hire such a fantastic group. So I feel lucky that I got and took the opportunity to work with many of these people over the years.

It's also very rewarding to see new people join the community. Akademy always has this special mix of KDE dinosaurs, the young fresh people who just joined, and everything in between. The mentoring KDE does with great care and enthusiasm pays off, with interest.

Vienna is calling. I'll quickly answer the call. See you there.
