Fri, Feb 2nd, 2024
Where's my Python code?
Python is an interpreted language, so Python code is just text files with the .py extension. For simple scripts it's really easy to keep track of your files, but when you start to use dependencies and different projects with different requirements, things get more complex.
PYTHONPATH
The Python interpreter uses a list of paths to locate Python modules; for example, this is what you get by default in a modern GNU/Linux distribution:
Python 3.11.7 (main, Dec 15 2023, 10:49:17) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['',
'/usr/lib64/python311.zip',
'/usr/lib64/python3.11',
'/usr/lib64/python3.11/lib-dynload',
'/usr/lib64/python3.11/site-packages',
'/usr/lib64/python3.11/_import_failed',
'/usr/lib/python3.11/site-packages']
These are the default paths where Python modules are installed. If you install a Python module using your Linux packaging tool, the Python code will be placed inside the site-packages folder.
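As a quick sketch of how that path list is built, the PYTHONPATH environment variable prepends extra entries to sys.path; the /tmp/my-extra-modules directory below is just a made-up example:

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH set and print the first
# few sys.path entries: directories from PYTHONPATH show up right
# after the initial '' entry, before the system paths.
env = dict(os.environ, PYTHONPATH="/tmp/my-extra-modules")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path[:3])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout)
```

This is why the sections below insist that you normally shouldn't have to touch PYTHONPATH by hand: virtualenvs manage these entries for you.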
So system-installed Python modules can be located in:
- /usr/lib/python3.11/site-packages for modules that are architecture independent (pure Python, all .py files)
- /usr/lib64/python3.11/site-packages for modules that depend on the architecture, that is, something that uses low level libraries and needs to be built, so there are some .so files
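If you want to query these two locations programmatically instead of hard-coding them, the standard sysconfig module can report them (a small sketch; the exact paths vary per distribution and Python version):

```python
import sysconfig

# "purelib" is where architecture-independent modules go,
# "platlib" is where architecture-dependent (.so) modules go.
pure = sysconfig.get_path("purelib")
plat = sysconfig.get_path("platlib")
print("pure python:", pure)
print("platform specific:", plat)
```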
pip
When you need a new Python dependency you can try to install it from your GNU/Linux distribution using the default package manager like zypper, dnf or apt, and those Python files will be placed in the system paths that you can see above.
But distributions don't package all the Python modules, and even if they do, you may require a specific version that's different from the one packaged in your favourite distribution, so in Python it's common to install dependencies from the Python Package Index (PyPI).
Python has a tool, pip, to install and manage Python packages; it looks for the desired modules in PyPI. You can install new dependencies with pip just like:
$ pip install django
That command looks for the django module in PyPI, downloads it and installs it: in your user $HOME/.local/lib/python3.11/site-packages folder if you use --user, or in a global system path like /usr/local/lib or /usr/lib if you run pip as root.
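To see where a pip install --user would place modules on your own system, you can ask the standard site module (a quick sketch; the exact folder depends on your platform and Python version):

```python
import site

# The per-user site-packages folder used by `pip install --user`,
# e.g. ~/.local/lib/python3.11/site-packages on Linux.
user_site = site.getusersitepackages()
print(user_site)
```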
But using pip directly on the system is not recommended today, and it's even disabled in some distributions, like openSUSE Tumbleweed.
[danigm@localhost ~] $ pip install django
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try
zypper install python311-xyz, where xyz is the package
you are trying to install.
If you wish to install a non-rpm packaged Python package,
create a virtual environment using python3.11 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip.
If you wish to install a non-rpm packaged Python application,
it may be easiest to use `pipx install xyz`, which will manage a
virtual environment for you. Install pipx via `zypper install python311-pipx` .
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
virtualenvs
Following the current recommendation, the correct way of installing third-party Python modules is to use virtualenvs. Virtualenvs are just specific folders where you install your Python modules, together with some scripts that make it easy to use them in combination with your system libraries, so you don't need to modify PYTHONPATH manually.
So if you have a custom project and want to install Python modules, you can create your own virtualenv and use pip to install dependencies there:
[danigm@localhost tmp] $ python3 -m venv myenv
[danigm@localhost tmp] $ . ./myenv/bin/activate
(myenv) [danigm@localhost tmp] $ pip install django
Collecting django
...
Successfully installed asgiref-3.7.2 django-5.0.1 sqlparse-0.4.4
All dependencies are installed in my new virtualenv folder, and if I use the Python from the virtualenv it uses those paths, so all the modules installed there are usable inside that virtualenv:
(myenv) [danigm@localhost tmp] $ ls myenv/lib/python3.11/site-packages/django/
apps contrib db forms __init__.py middleware shortcuts.py templatetags urls views
conf core dispatch http __main__.py __pycache__ template test utils
(myenv) [danigm@localhost tmp] $ python3 -c "import django; print(django.__version__)"
5.0.1
(myenv) [danigm@localhost tmp] $ deactivate
With virtualenvs you can have multiple Python projects, with different dependencies, isolated from each other, so you use different sets of dependencies depending on which virtualenv you activate:
- activate: $ . ./myenv/bin/activate
- deactivate: $ deactivate
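Inside a script you can also check whether you are currently running from a virtualenv, since a venv changes sys.prefix while sys.base_prefix keeps pointing at the system installation (a minimal sketch):

```python
import sys

def in_virtualenv() -> bool:
    # In a venv, sys.prefix points at the venv folder while
    # sys.base_prefix is the installation the venv was created from;
    # outside a venv the two are equal.
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```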
High level tools to handle virtualenvs
The venv module is part of the Python standard library and, as you can see above, it's really simple to use, but there are tools that provide extra tooling around it to make things easier, so usually you don't need to use venv directly.
pipx
For final Python tools, that you are not going to use as dependencies in your own Python code, the recommended tool is pipx. It creates the virtualenv automatically and links the binaries, so you don't need to worry about anything: just use it to install third-party Python applications and to update/uninstall them. pipx won't mess with your system libraries, and each installation uses a different virtualenv, so even tools with incompatible dependencies will work nicely together on the same system.
Libraries, for Python developers
In the case of Python developers, when you need to manage dependencies for your project there are a lot of nice high-level tools. These tools provide different ways of managing dependencies, but all of them rely on venv, creating the virtualenv in different locations and providing commands to enable/disable and manage dependencies inside those virtualenvs.
For example, poetry creates virtualenvs by default inside the .cache folder; in my case I can find all poetry-created virtualenvs in:
/home/danigm/.cache/pypoetry/virtualenvs/
Most of these tools add other utilities on top of dependency management. Just for installing Python modules easily you can always use the default venv and pip modules, but for more complex projects it's worth investigating the high-level tools, because they make it easier to manage your project dependencies and virtualenvs.
Conclusion
There is a lot of Python code inside any modern Linux distribution, and if you're a Python developer you probably have a lot more. Make sure you know the source of your modules and do not mix different environments, to avoid future headaches.
As a final trick, if you don't know where the actual code of some Python module in your running script lives, you can always ask:
>>> import django
>>> django.__file__
'/tmp/myenv/lib64/python3.11/site-packages/django/__init__.py'
This can get even more complicated if you start to use containers and different Python versions, so keep your dependencies clean and up to date, and make sure that you know where your Python code is.
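If you want to locate a module without importing it (imports can run arbitrary code), the standard importlib machinery can do the same lookup for you; a small sketch using the stdlib json module as an example:

```python
import importlib.util

# find_spec resolves the module through sys.path without executing
# any of its module-level code.
spec = importlib.util.find_spec("json")
if spec is not None:
    print(spec.origin)  # path to json/__init__.py
```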
Wed, Jan 31st, 2024
SUSE BuildOPS Team
Tue, Jan 30th, 2024
Using OpenTelemetry between syslog-ng instances
Do you have to forward large amounts of logs between two syslog-ng instances? OTLP (OpenTelemetry protocol) support in syslog-ng was contributed by Axoflow, and it can solve this problem. Just like the ewmm() destination, syslog-ng-otlp() forwards most name-value pairs, however, unlike a tcp() connection, it scales well with multiple CPU cores.
Support for OpenTelemetry was added to syslog-ng a couple of releases ago. OpenTelemetry is an observability framework, mainly used in Linux / Cloud / Kubernetes environments. However, I already had users asking to make this feature available on FreeBSD. (It already worked once, but now it fails to compile again.)
Version 4.6.0 added many new OTLP-related enhancements. Batching and multiple workers make OTLP connections significantly faster, while compression can save you bandwidth at the expense of some more CPU usage. This changes the syslog-ng-otlp() destination from an interesting experiment into something really useful. It enables you to send a lot more log messages between two syslog-ng instances than with a tcp() connection, while using less bandwidth.
Read more at https://www.syslog-ng.com/community/b/blog/posts/using-opentelemetry-between-syslog-ng-instances
openSUSE Conference Travel Info
The openSUSE Conference in Nuremberg, Germany, at the end of June may feel like a long time away, but if you are planning to attend oSC24, there are topics that need action now, before timelines become too short.
Getting a visa may take some time. There are certain requirements necessary to receive a visa for those who are not citizens of a Schengen country. You may need a formal invitation letter that fully explains the nature of your visit.
An overview of visa requirements/exemptions for entry into the Federal Republic of Germany can be found at the Federal Foreign Office website. You must apply for a Schengen visa through the German embassy or consulate in your country.
If you plan to attend the conference and are coming from a country where you will need a formal invitation letter, email ddemaio(at)opensuse.org with the email subject “oSC24 Visa”.
Also important to consider is the Travel Support Program. People who want to use the TSP for this year’s openSUSE Conference must read the How to Apply section and follow the appropriate steps. The TSP is now being done by the Geeko Foundation, so be aware that some new procedures and certain limitations apply. Read the info on the wiki and use tsp.opensuse.org to apply. Requests will be accepted until May 10. The TSP is a helpful way to get people in the community to attend the conference. Funding is limited, but helps to support the community’s attendance.
Here are a few reminders regarding the TSP:
- Please read the TSP policy page carefully before you apply.
- The TSP can reimburse up to 80% of travel and/or lodging costs. That includes hotel, hostel, plane, train and bus.
- Important: Food and all local expenses are on you!
- We want to sponsor as many people as possible so please check the best deal.
- No receipts = no money. It is the rule! (Original receipts are required from German residents.)
The call for papers is open until April 15. Submit a talk today.
Presentations can be submitted for the following lengths of time:
- Lightning Talk (15 mins)
- Virtual Lightning Talk (15 mins)
- Short Talk (30 mins)
- Virtual Talk (30 mins)
- Long Talk (45 mins)
- Workshop (1 hour)
- Open 4 Business (15 mins)
The following tracks are listed for the conference:
- Cloud and Containers
- Community
- Embedded Systems and Edge Computing
- New Technologies
- Open Source
- openSUSE
- Open 4 Business
Fri, Jan 26th, 2024
openSUSE Tumbleweed Monthly Update - January
Welcome to the monthly update for openSUSE Tumbleweed for January 2024. This will be the new format going forward, as recommended by those contributing to the marketing efforts of openSUSE. These updates will highlight major changes, improvements and key issues addressed in openSUSE Tumbleweed snapshots throughout the month. The aim is to keep the community and users informed about the developments and updates of the distribution. Should readers desire more frequent information about openSUSE Tumbleweed snapshots, they are advised to subscribe to the openSUSE Factory mailing list.
New Features and Enhancements
- Linux Kernel: Updates to versions 6.6.7, 6.6.9, 6.6.10, 6.6.11 and 6.7.1.
  - Fixes have been applied for memory management and security vulnerabilities, enhancing overall system safety.
  - Support for new hardware models.
  - PCI: Adds ACS quirks for more Zhaoxin Root Ports, enhancing compatibility and performance for Zhaoxin’s CPUs and motherboards.
  - ALSA (Advanced Linux Sound Architecture): Added driver properties for cs35l41 for the Lenovo Legion Slim 7 Gen 8 series, and introduced support for additional Dell models without _DSD, along with fixes for the HP Envy X360 13-ay0xxx’s mute and mic-mute LEDs, indicating broader compatibility for sound hardware in laptops.
  - LEDs: The ledtrig-tty module receives updates to free the allocated ttyname buffer on deactivation, impacting how LED triggers are handled for terminal activities.
- Mozilla Firefox: Updates to versions 121.0 and 121.0.1.
  - The update resolves a bug that caused Firefox to hang when loading sites with column-based layouts, enhancing stability and performance.
  - Fixes applied to ensure rounded corners for videos and proper closure of Firefox that prevents USB security key conflicts with other applications.
- KDE Frameworks: Update to version 5.114.0.
  - Significant updates include fixes in Extra CMake Modules, introduction of holidays in Kenya observed by KHolidays, and quality settings adjustments for AVIF in KImageFormats.
  - Key improvements in KIO for handling malformed Exec entries, accessibility enhancements in Kirigami, and stability fixes in KJobWidgets to prevent potential use-after-free errors.
- Mesa: Updates to version 23.3.3.
  - Focus on Python 3.6 build fixes and enhancements in driver support.
  - The release introduces NVK, a new Vulkan driver for NVIDIA hardware, which marks a step forward in support for NVIDIA GPUs, yet it remains in the experimental phase.
  - Improved graphics performance and compatibility for Asahi and RADV, and enhancements of OpenGL ES and Vulkan capabilities.
  - Introduces critical updates like the requirement of libvulkan1 for Mesa-dri to support zink/swrast driver fallbacks, which further improves the overall user experience with graphics applications and games.
- systemd: Updates to version 254.8.
  - Reverts patches related to udev device node updates and workarounds for issues, taking a cautious approach to fixing reported bugs and ensuring stability in device management systems.
  - Adjustments to udev ensure the proper existence and ownership of %_modulesloaddir, facilitating smoother module installation by other packages, thereby improving system configuration and module management.
- PHP: Updated from version 8.2.14 to 8.2.15.
  - Fix for a false positive SSA integrity verification failure and a resolution for Autoconf warnings during cross-compilation.
  - The CLI built-in web server now correctly handles timeouts when using router scripts in conjunction with max_input_time.
  - Fixes a crash when using stream_wrapper_register with FFI\CData, and interaction issues between FFI::new and observers.
  - IntlDateFormatter now correctly accepts ‘C’ as a valid locale.
  - A hanging issue in the Hash extension for large strings with sha512 is resolved.
- GStreamer: Updates to version 1.22.8.
  - Addresses vulnerabilities within the AV1 video codec parser.
  - Fixes include resolving a potential deadlock in the avdec video decoder with FFmpeg 6.1.
  - Improvements in reverse playback and seeking in qtdemux for files with raw audio streams.
  - Enhancements to the GstPlay and GstPlayer libraries.
  - Updates to the Cerbero build tool to address Python 3.12 string escape warnings.
- Samba: Updates to version 4.19.4.
  - Addresses issues like the inability of net changesecretpw to set the machine account password with an empty secrets.tdb.
  - Improves documentation generation with respect to the XML_CATALOG_FILES environment variable.
  - Resolves issues where smbd did not detect ctdb public IPv6 addresses for multichannel exclusion, and where the force user = localunixuser setting was ineffective when allow trusted domains = no.
  - Addresses critical vulnerabilities and bugs, such as Deleted Object tombstones in AD LDAP being visible to normal users (CVE-2018-14628), and various smbget authentication and functionality fixes, enhancing security and user experience.
Security Updates
This month’s updates include critical security patches across various packages. Notable security improvements were integrated into the Firefox, systemd, Samba and PHP updates and more.
Bug Fixes
- xorg-x11-server 21.1.11 and xwayland 23.2.4: These updates addressed multiple CVEs, improving security and stability in the display server protocols. A list of these CVEs can be found in the security advisory.
- gnutls 3.8.3: CVE-2024-0553 was a vulnerability that allows timing attacks in RSA-PSK, risking data leaks; a fix was also made for CVE-2024-0567, a flaw in cockpit’s certificate validation that enables remote denial of service attacks.
- java-11-openjdk 11.0.22.0: Multiple CVEs. CVE-2024-20919, CVE-2024-20926 , CVE-2024-20921, CVE-2024-20918, CVE-2024-20945, CVE-2024-20952
- samba 4.19.4: CVE-2018-14628 an authenticated but unprivileged attacker could have discovered the names and preserved attributes of deleted objects in the LDAP store.
- python-Jinja2 3.1.3: CVE-2024-22195 was a flaw where the xmlattr filter improperly allows space-containing keys, enabling attackers to inject harmful attributes through user inputs.
- rdma-core 49.1: Although specific CVEs addressed in the update were not mentioned, the update is part of regular maintenance to ensure stability and security.
Contributing to openSUSE Tumbleweed
Your contributions and feedback make openSUSE Tumbleweed better with every update. Whether reporting bugs, suggesting features, or participating in community discussions, your involvement is highly valued.
Conclusion
We will continue to refine and enhance this format. We look forward to another exciting year of development and community engagement with openSUSE Tumbleweed. See you all at FOSDEM next week. Happy computing!
Thu, Jan 25th, 2024
Testing kernels with sporadic issues until heisenbug shows in openQA
This is a follow up to my previous post about How to test things with openQA without running your own instance, so you might want to read that first.
Now, while hunting for bsc#1219073, which is quite sporadic and took quite some time to show up often enough to become noticeable and traceable, once the stars aligned and I managed to find a way to get a higher failure rate, I wanted a way for me and for the developer to test the kernel with the different patches, to help with the bisecting and ease the process of finding the culprit and a solution for it.
I came up with a fairly simple solution, using the --repeat parameter of the openqa-cli tool, and a simple shell script to run it:
```bash
$ cat ~/Downloads/trigger-kernel-openqa-mdadm.sh
# the kernel repo must be the one without https; tests don't have the kernel CA installed
KERNEL="KOTD_REPO=http://download.opensuse.org/repositories/Kernel:/linux-next/standard/"
REPEAT="--repeat 100" # using 100 by default
JOBS="https://openqa.your.instan.ce/tests/13311283 https://openqa.your.instan.ce/tests/13311263 https://openqa.your.instan.ce/tests/13311276 https://openqa.your.instan.ce/tests/13311278"
BUILD="bsc1219073"
for JOB in $JOBS; do
openqa-clone-job --within-instance $JOB CASEDIR=https://github.com/foursixnine/os-autoinst-distri-opensuse.git#tellmewhy ${REPEAT} \
_GROUP=DEVELOPERS ${KERNEL} BUILD=${BUILD} FORCE_SERIAL_TERMINAL=1\
TEST="${BUILD}_checkmdadm" YAML_SCHEDULE=schedule/qam/QR/15-SP5/textmode/textmode-skip-registration-extra.yaml INSTALLONLY=0 DESKTOP=textmode\
|& tee jobs-launched.list;
done;
```
There are a few things to note here:
- the kernel repo must be the one without https; tests don’t have the CA installed by default.
- the --repeat parameter is set to 100 by default, but can be changed to whatever number is desired.
- the JOBS variable contains the list of jobs to clone and run; having all supported architectures is recommended (at least for this case).
- the BUILD variable can be anything, but it’s recommended to use the bug number or something that makes sense.
- the TEST variable is used to set the name of the test as it will show in the test overview page. You can use TEST+=foo if you want to append text instead of overriding it; the --repeat parameter will append a number incrementally to your test, see os-autoinst/openQA#5331 for more details.
- the YAML_SCHEDULE variable is used to set the yaml schedule to use; there are other ways to modify the schedule, but in this case I want to perform a full installation.
Running the script
- Ensure you can run at least the openQA client; if you need API keys, see the post linked at the beginning of this post.
- Replace the kernel repo with your branch in line 5.
- Run the script with $ bash trigger-kernel-openqa-mdadm.sh and you should get the following output, times the --repeat value if you modified it:
1 job has been created:
- sle-15-SP5-Full-QR-x86_64-Build134.5-skip_registration+workaround_modules@64bit -> https://openqa.your.instan.ce/tests/13345270
Each URL will be a job triggered in openQA; depending on the load and number of jobs, you might need to wait quite a bit (some users can help moving the priority of these jobs up so they execute faster).
The review stuff:
Looking at the results
- Go to https://openqa.your.instan.ce/tests/overview?distri=sle&build=bsc1219073&version=15-SP5, or from any job in the list above click on the Job groups menu at the top and select Build bsc1219073.
- Click on “Filter”.
- Type the name of the test module to filter in the Module name field, e.g. mdadm, and select the desired result of such test module, e.g. failed (you can also type, and select multiple result types).
- Click Apply.
- The overall summary of the build overview page will provide you with enough information to calculate the pass/fail rate.
A rule of thumb: anything above 5% is bad, but you also need to understand your sample size and the setup you’re using; YMMV.
Ain’t nobody got time to wait
The script will generate a file called jobs-launched.list; in case you absolutely need to change the priority of the jobs, set it to 45, so they run at a higher priority than the default, which is 50:
cat jobs-launched.list | grep https | sed -E 's/^.*->\s.*tests\///' | xargs -r -I {} bash -c "openqa-cli api --osd -X POST jobs/{}/prio prio=45; sleep 1"
The magic
The actual magic is in the schedule: right after booting the system and setting it up, before running the mdadm test, I inserted the update_kernel module, which adds the kernel repo specified by KOTD_REPO, installs the kernel from there, reboots the system, and leaves it ready for the actual test. However, I had to make some very small changes:
---
tests/kernel/update_kernel.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tests/kernel/update_kernel.pm b/tests/kernel/update_kernel.pm
index 1d6312bee0dc..048da593f68f 100644
--- a/tests/kernel/update_kernel.pm
+++ b/tests/kernel/update_kernel.pm
@@ -398,7 +398,7 @@ sub boot_to_console {
sub run {
my $self = shift;
- if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional) {
+ if ((is_ipmi && get_var('LTP_BAREMETAL')) || is_transactional || get_var('FORCE_SERIAL_TERMINAL')) {
# System is already booted after installation, just switch terminal
select_serial_terminal;
} else {
@@ -476,7 +476,7 @@ sub run {
reboot_on_changes;
} elsif (!get_var('KGRAFT')) {
power_action('reboot', textmode => 1);
- $self->wait_boot if get_var('LTP_BAREMETAL');
+ $self->wait_boot if (get_var('FORCE_SERIAL_TERMINAL') || get_var('LTP_BAREMETAL'));
}
}
Likely I’ll make a new pull request to have this in the test distribution, but for now this is good enough to help kernel developers to do some self-service and trigger their own openQA tests, that have many more tests (hopefully in parallel) and faster than if there was a person doing all of this manually.
Special thanks to the QE Kernel team, who do the amazing job of thinking of scenarios like this; they save a lot of time.
Revamping the Request Build Status Page and Introducing the Dark Mode
Tue, Jan 23rd, 2024
Native MacOS source in syslog-ng
You know that support for MacOS is important when every third visitor at the syslog-ng booth of Red Hat Summit asks if syslog-ng works on MacOS. With the upcoming syslog-ng version 4.6.0, syslog-ng not only compiles on MacOS, but it also collects local log messages natively. From this blog you can learn how to compile syslog-ng yourself, options of the MacOS source, and also a bit of history.
https://www.syslog-ng.com/community/b/blog/posts/native-macos-source-in-syslog-ng
The syslog-ng Insider 2024-01: HTTP; Cloudflare; systemd-journal; Humio / Logscale;
The January syslog-ng newsletter is now on-line:
- Why use a http()-based destination in syslog-ng?
- An overview of Cloudflare’s logging pipeline
- Working with multiple systemd-journal namespaces in syslog-ng
- Logging to Humio / Logscale simplified in syslog-ng
It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2024-01-http-cloudflare-systemd-journal-humio-logscale
Call for Hosts Begins for openSUSE Conference
The openSUSE Project is asking interested people to submit a call for hosts for the openSUSE Conference 2025.
This event is a cornerstone of the openSUSE community and aims to bring together a diverse group of users, developers and enthusiasts to share, learn and collaborate. The aim is not only to strengthen the existing community but also to welcome new members into the fold.
Having the conference in different locations allows the project to be more accessible to people and can help with increasing awareness about the openSUSE Project and open source. While the project is unable to commit to a host being selected based on the project’s available funds, participants in the community encourage people/groups to provide a submission. This will let members of the community discover areas where the project can have a conference if funding allows it.
What We Are Looking For
Hosting the openSUSE Conference is an opportunity to showcase your community and city. The project seeks passionate teams who can address the following key criteria in their submissions:
- New Attendees: Strategies to attract and engage new participants.
- Accessibility: Ensuring the venue and events are accessible to all.
- Community Involvement: Hosts must attend community meetings and provide regular updates if selected.
- Cost Efficiency: Detailed budget plan showing the total cost to run the event.
- Community Growth Goal: A plan to make the conference as successful as previous ones in gaining new members.
- Engaging Educational Institutions: Technical universities and educational bodies willing to host an openSUSE Conference are an ideal demographic for a submission.
- Bidding Process: Similar to the Debian Conference bidding process, proposals will be submitted via openSUSE Wiki pages. There is a potential that it can be followed by a community voting process.
- Submission Period: Please refer to the openSUSE Conference wiki for detailed timeline and submission deadline.
- Season Flexibility: The event can be planned for any season, allowing flexibility for the hosts.
How to Submit Your Proposal
- Prepare Your Proposal: Detailing how you’ll address the above criteria.
- Submit on openSUSE Wiki: Create a dedicated page for your bid on the openSUSE Wiki and use the template provided at en.opensuse.org/openSUSE:Call_for_hosts to assist with developing the proposal.
- Community Voting: The final selection will involve a voting process by the openSUSE community.
Why Host the openSUSE Conference?
Hosting the openSUSE Conference can spotlight your local community. It’s an opportunity to contribute to the growth of the project.
More Information and Submission Guidelines
Visit the conference checklist for more information on guidelines to help host an openSUSE Conference.