
Welcome to English Planet openSUSE

This is a feed aggregator that collects what the contributors to the openSUSE Project are writing on their respective blogs.
To have your blog added to this aggregator, please read the instructions.


My new toy: Open WebUI first steps

Once I got hardware-accelerated AI working under Linux on my AI mini workstation from HP, my next goal was to make it easier to use. In this blog, you can read about my initial experiments with Open WebUI on Fedora Linux.

Open WebUI talking about central log collection :-)

Everything in containers

As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers. I found a Docker compose setup which was tested on Fedora Linux 43 according to its documentation: https://github.com/jesuswasrasta/ollama-rocm-webui-docker. As I (also) use Fedora 43, it sounded like a good choice.

It worked; however, I quickly realized that hardware acceleration for AI was not working. Instead, most CPU cores were running close to 100%. It was a good test for cooling: I could hear the miniature box from the next room through closed doors :-)

ollama eating CPU :-)

As it turned out, the content of the HSA_OVERRIDE_GFX_VERSION environment variable was incorrect. Setting it according to the docs did not enable hardware acceleration either. After removing the environment variable, ollama found the hardware, but it never answered a prompt again.
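For reference, ROCm usually derives this value from the GPU's gfx name with a major.minor.stepping convention, so gfx1151 should map to 11.5.1. Treat the exact value as an assumption and check it against rocminfo on your own hardware. A minimal sketch:

```shell
# Hypothetical value: gfx1151 -> 11.5.1 by ROCm's major.minor.stepping naming;
# verify the gfx name with `rocminfo | grep gfx` before relying on this.
export HSA_OVERRIDE_GFX_VERSION=11.5.1
echo "HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```

In a Docker compose setup, the same variable goes into the `environment:` section of the ollama service instead.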

Ollama from the system

My next experiment was to keep using Open WebUI from the container, but install ollama directly on the system from the Fedora package repository. The good news? Some smaller models ran really fast, using hardware acceleration. The bad news: most models failed to load with an error message saying that the model format was unknown.
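For anyone wanting to reproduce the split setup, the host-side part looks roughly like this. Package and model names are my assumptions, so check what your Fedora release actually ships:

```shell
# Install ollama from the Fedora repositories (package name assumed)
sudo dnf install ollama

# Run it as a system service; by default it listens on localhost:11434,
# which is where the containerized Open WebUI needs to connect
sudo systemctl enable --now ollama

# Try a small model from the command line first (model tag is an example)
ollama run llama3.2:1b "Say hello"
```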

Update to Fedora 44 beta

I suspected that the ollama package in Fedora 43 was too old. Solution? Update the whole system to Fedora 44 beta. It seems to have helped: a lot more models work now, including the largest freely available Granite models from IBM.

Why Granite?

First of all: I’m an IBM Champion, so using IBM technologies comes naturally. But I also learned some background stories from a friend working at IBM on LSF, which makes it a personal choice as well.

What I’ve been showing here is AI inferencing on my HP AI system. But before a model can be used (for inferencing), it needs to be trained. These models are trained on large, GPU-rich compute clusters. To get an idea of the scale of such clusters, you can learn more in this research paper (https://arxiv.org/abs/2407.05467). It discusses the IBM Blue Vela system, which supports IBM’s GenAI mission. What’s interesting is that Blue Vela uses a more traditional HPC software stack, including IBM LSF for workload management and Storage Scale (GPFS) for rapid access to large data sets.

AI in a miniature box :-)

This blog is part of a longer series about my adventures with my new machine and AI. To discuss this blog, you can reach me via one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.


My new toy: first steps with AI on Linux

Ever since I bought my AI mini workstation from HP, my goal has been to run hardware-accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora!

I have been using various AI tools for a while now: generating pictures of impossible situations, like a dinosaur climbing the Hungarian parliament building; finding information where a simple web search is useless; or having syslog-ng code explained to me. All of these are nice, sometimes even useful; however, I prefer to know what is behind the magic. Well, at least part of it :-) I want to get a bottom-up view of the various components and processes, and to get my hands dirty. Hopefully this miniature but powerful box will help me get to know AI better.

AI in a miniature box :-)

Testing AI on Ubuntu

As mentioned in my installing Ubuntu blog, the 24.04 LTS installer did not work on this machine. I found a nice tutorial about AI on the Ryzen AI Max+ 395 which mentioned using 25.10, so I installed that version instead of the LTS. It installed without any trouble, and 3D graphics worked out of the box.

However, AI is a different story. ROCm, AMD’s stack for hardware-accelerated AI workloads, is only packaged for Ubuntu LTS releases. The workaround described in the tutorial was to use distrobox. Unfortunately, the steps described in the tutorial did not work for me: containerization brought in various problems with permissions, software availability, and so on. An experienced distrobox user could most likely resolve these. In my case, after reading the distrobox documentation for hours, I just gave up.

Getting started with hardware accelerated AI on Fedora

Next, I turned to Fedora Linux 43. The wiki page of the Fedora Heterogeneous Computing Special Interest Group proved to be a good starting point. Fedora has ROCm packaged as part of the distro, and the wiki page gives clear instructions on how to get started.
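As a sketch of the setup steps (group names and package set are taken from my reading of the SIG wiki; treat them as assumptions and follow the wiki itself for the authoritative list):

```shell
# Grant the current user access to the GPU device files
sudo usermod -aG video,render "$USER"

# Install the ROCm diagnostic tools used below
sudo dnf install rocminfo rocm-clinfo

# Log out and back in so the new group membership takes effect, then verify:
rocminfo | grep -i gfx
```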

Once I set up user rights and installed the necessary packages, I was able to get some info about my hardware. You can see the output of rocminfo and rocm-clinfo at the bottom of this blog. I did not want to shorten those, but given the many lines of output, I was not sure if anyone would read the rest of my blog :-)

Testing with llama

Of course, seeing info about the hardware is nice, but it’s even better to see it in action. The Ubuntu ROCm tutorial mentioned llama.cpp, so I started with that one. Luckily, Fedora includes it as a ready-to-install package, so I did not have to compile it from source. I installed huggingface-hub as well, also from a package:

dnf install python3-huggingface-hub llama-cpp

This allowed me to download the model mentioned in the tutorial and ask the downloaded LLM a few questions. For now I just used the sample command line, but based on the output, llama.cpp found the hardware and used it. Next up: learning more about the available models.
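The download itself can be done with the huggingface-hub command line tool. A sketch, assuming the quantized model lives in TheBloke’s GGUF repository (the repository and file names are my assumptions, not from the tutorial):

```shell
mkdir -p ~/models
# Download a quantized LLaMA 2 7B in GGUF format (repo/file names assumed)
huggingface-cli download TheBloke/Llama-2-7B-GGUF llama-2-7b.Q4_K_M.gguf \
    --local-dir ~/models
```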

You can find the output of the following command at the end of this blog:

llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256

Testing with pytorch

When I mentioned to a friend that hardware-accelerated AI seemed to work on my Linux box, he suggested that I try it with PyTorch. Luckily, this was also available as a ready-to-install package for Fedora:

dnf install python3-torch

I was quite surprised, as the above command installed 8 GB worth of RPM packages (texlive accounting for a good part of it). I do not know much about PyTorch, but I did a quick test anyway. Here is the really complex Python code I built based on the documentation:

import torch

# Create a random tensor to confirm that PyTorch works at all
x = torch.rand(5, 3)
print(x)

# ROCm builds of PyTorch report AMD GPUs through the CUDA API
print('Is hw AI accel available')
print(torch.cuda.is_available())

And here is the output from the above code:

tensor([[0.1034, 0.0183, 0.1233],
        [0.1787, 0.0097, 0.8426],
        [0.2872, 0.6351, 0.8468],
        [0.8226, 0.2991, 0.8539],
        [0.2061, 0.6422, 0.8146]])
Is hw AI accel available
True

It’s simple, but looks promising :-)
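To go one step beyond the availability check, here is a small sketch that runs an actual computation on the accelerator when one is present and falls back to the CPU otherwise (as noted above, ROCm devices are exposed through PyTorch’s cuda API):

```python
import torch

# ROCm builds of PyTorch report AMD GPUs via the CUDA device API
device = "cuda" if torch.cuda.is_available() else "cpu"

# A matrix multiplication is enough to exercise the compute units
x = torch.rand(1024, 1024, device=device)
y = x @ x

print(f"device: {device}, result shape: {tuple(y.shape)}")
```

On the CPU fallback path this still runs, just without acceleration, so the same script works on any box with PyTorch installed.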

Outputs

Output of rocminfo and rocm-clinfo

czanik@fedora:~$ rocminfo 
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.7
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
XNACK enabled:           NO
DMAbuf Support:          YES
VMM Support:             YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD RYZEN AI MAX+ PRO 395 w/ Radeon 8060S
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      49152(0xc000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   5187                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            32                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Memory Properties:       
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 4                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1151                            
  Uuid:                    GPU-XX                             
  Marketing Name:          Radeon 8060S Graphics              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      32(0x20) KB                        
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 5510(0x1586)                       
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          128(0x80)                          
  Max Clock Freq. (MHz):   2900                               
  BDFID:                   50432                              
  Internal Node ID:        1                                  
  Compute Unit:            40                                 
  SIMDs per CU:            2                                  
  Shader Engines:          2                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Memory Properties:       APU
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 34                                 
  SDMA engine uCode::      18                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    65568416(0x3e87ea0) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:2048KB                             
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1151         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
    ISA 2                    
      Name:                    amdgcn-amd-amdhsa--gfx11-generic   
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    aie2                               
  Uuid:                    AIE-XX                             
  Marketing Name:          AIE-ML                             
  Vendor Name:             AMD                                
  Feature:                 AGENT_DISPATCH                     
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        1(0x1)                             
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          64(0x40)                           
  Queue Type:              SINGLE                             
  Node:                    0                                  
  Device Type:             DSP                                
  Cache Info:              
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          0(0x0)                             
  Max Clock Freq. (MHz):   0                                  
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            0                                  
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:0                                  
  Memory Properties:       
  Features:                AGENT_DISPATCH
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: KERNARG, COARSE GRAINED
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    65536(0x10000) KB                  
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:0KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    131136832(0x7d0fd40) KB            
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Recommended Granule:4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*** Done ***             

and

czanik@fedora:~$ rocm-clinfo 
Number of platforms:				 1
  Platform Profile:				 FULL_PROFILE
  Platform Version:				 OpenCL 2.1 AMD-APP (3649.0)
  Platform Name:				 AMD Accelerated Parallel Processing
  Platform Vendor:				 Advanced Micro Devices, Inc.
  Platform Extensions:				 cl_khr_icd cl_amd_event_callback 


  Platform Name:				 AMD Accelerated Parallel Processing
Number of devices:				 1
  Device Type:					 CL_DEVICE_TYPE_GPU
  Vendor ID:					 1002h
  Board name:					 Radeon 8060S Graphics
  Device Topology:				 PCI[ B#197, D#0, F#0 ]
  Max compute units:				 20
  Max work items dimensions:			 3
    Max work items[0]:				 1024
    Max work items[1]:				 1024
    Max work items[2]:				 1024
  Max work group size:				 256
  Preferred vector width char:			 4
  Preferred vector width short:			 2
  Preferred vector width int:			 1
  Preferred vector width long:			 1
  Preferred vector width float:			 1
  Preferred vector width double:		 1
  Native vector width char:			 4
  Native vector width short:			 2
  Native vector width int:			 1
  Native vector width long:			 1
  Native vector width float:			 1
  Native vector width double:			 1
  Max clock frequency:				 2900Mhz
  Address bits:					 64
  Max memory allocation:			 57070749280
  Image support:				 Yes
  Max number of images read arguments:		 128
  Max number of images write arguments:		 8
  Max image 2D width:				 16384
  Max image 2D height:				 16384
  Max image 3D width:				 16384
  Max image 3D height:				 16384
  Max image 3D depth:				 8192
  Max samplers within kernel:			 16
  Max size of kernel argument:			 1024
  Alignment (bits) of base address:		 2048
  Minimum alignment (bytes) for any datatype:	 128
  Single precision floating point capability
    Denorms:					 Yes
    Quiet NaNs:					 Yes
    Round to nearest even:			 Yes
    Round to zero:				 Yes
    Round to +ve and infinity:			 Yes
    IEEE754-2008 fused multiply-add:		 Yes
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 32768
  Global memory size:				 67142057984
  Constant buffer size:				 57070749280
  Max number of constant args:			 8
  Local memory type:				 Local
  Local memory size:				 65536
  Max pipe arguments:				 16
  Max pipe active reservations:			 16
  Max pipe packet size:				 1236174432
  Max global variable size:			 57070749280
  Max global variable preferred total size:	 67142057984
  Max read/write image args:			 64
  Max on device events:				 1024
  Queue on device max size:			 8388608
  Max on device queues:				 1
  Queue on device preferred size:		 262144
  SVM capabilities:				 
    Coarse grain buffer:			 Yes
    Fine grain buffer:				 Yes
    Fine grain system:				 No
    Atomics:					 No
  Preferred platform atomic alignment:		 0
  Preferred global atomic alignment:		 0
  Preferred local atomic alignment:		 0
  Kernel Preferred work group size multiple:	 32
  Error correction support:			 0
  Unified memory for Host and Device:		 1
  Profiling timer resolution:			 1
  Device endianess:				 Little
  Available:					 Yes
  Compiler available:				 Yes
  Execution capabilities:				 
    Execute OpenCL kernels:			 Yes
    Execute native function:			 No
  Queue on Host properties:				 
    Out-of-Order:				 No
    Profiling :					 Yes
  Queue on Device properties:				 
    Out-of-Order:				 Yes
    Profiling :					 Yes
  Platform ID:					 0x7ffb97d11d80
  Name:						 gfx1151
  Vendor:					 Advanced Micro Devices, Inc.
  Device OpenCL C version:			 OpenCL C 2.0 
  Driver version:				 3649.0 (HSA1.1,LC)
  Profile:					 FULL_PROFILE
  Version:					 OpenCL 2.0 
  Extensions:					 cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

Output from llama

root@fedora:~# llama-cli   -m ~/models/llama-2-7b.Q4_K_M.gguf   --no-mmap   -ngl 99   -p "Explain quantum computing in simple terms:"   -n 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
  Device 0: Radeon 8060S Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
build: 0 (unknown) with HIP version: 6.4.43484-9999 for x86_64-redhat-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (Radeon 8060S Graphics) - 64031 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /root/models/llama-2-7b.Q4_K_M.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 15
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
print_info: file format = GGUF V2
print_info: file type   = Q4_K - Medium
print_info: file size   = 3.80 GiB (4.84 BPW) 
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 3
load: token to piece cache size = 0.1684 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 4096
print_info: n_embd           = 4096
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: is_swa_any       = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 4096
print_info: n_embd_v_gqa     = 4096
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: f_attn_scale     = 0.0e+00
print_info: n_ff             = 11008
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 4096
print_info: rope_finetuned   = unknown
print_info: model type       = 7B
print_info: model params     = 6.74 B
print_info: general.name     = LLaMA v2
print_info: vocab type       = SPM
print_info: n_vocab          = 32000
print_info: n_merges         = 0
print_info: BOS token        = 1 '<s>'
print_info: EOS token        = 2 '</s>'
print_info: UNK token        = 0 '<unk>'
print_info: LF token         = 13 '<0x0A>'
print_info: EOG token        = 2 '</s>'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        ROCm0 model buffer size =  3820.94 MiB
load_tensors:          CPU model buffer size =    70.31 MiB
..................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max     = 1
llama_context: n_ctx         = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch       = 2048
llama_context: n_ubatch      = 512
llama_context: causal_attn   = 1
llama_context: flash_attn    = 0
llama_context: freq_base     = 10000.0
llama_context: freq_scale    = 1
llama_context:  ROCm_Host  output buffer size =     0.12 MiB
llama_kv_cache_unified:      ROCm0 KV buffer size =  2048.00 MiB
llama_kv_cache_unified: size = 2048.00 MiB (  4096 cells,  32 layers,  1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_kv_cache_unified: LLAMA_SET_ROWS=0, using old ggml_cpy() method for backwards compatibility
llama_context:      ROCm0 compute buffer size =   288.00 MiB
llama_context:  ROCm_Host compute buffer size =    16.01 MiB
llama_context: graph nodes  = 1158
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 16

system_info: n_threads = 16 (n_threads_batch = 16) / 32 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : LLAMAFILE = 1 | REPACK = 1 | 

sampler seed: 2232334333
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 2048, n_predict = 256, n_keep = 1

 Explain quantum computing in simple terms: what is it, how does it work, and what are its potential benefits?
This is a difficult question to answer because quantum computing is not yet a well-defined field of study, and many of the potential applications are still being researched. However, we can say that quantum computing is a type of computation that relies on the principles of quantum mechanics (the branch of physics that describes the behaviour of particles such as electrons and photons).
These particles obey a set of rules that are different from those obeyed by classical computers, which rely on the principles of classical mechanics. Quantum computing uses a particle’s quantum state (such as its spin) to store information. This means that quantum computers can perform computations that are not possible on classical computers.
In the simplest terms, quantum computing is a type of computation that takes advantage of the unique properties of quantum mechanics. These properties include superposition, entanglement, and non-locality. Superposition is the ability of a quantum system to exist in multiple states simultaneously.
This means that a quantum system can be in two different places at the same time, or have two different properties at the same time. Entanglement is the ability of two quantum systems to be inter

llama_perf_sampler_print:    sampling time =       4.27 ms /   265 runs   (    0.02 ms per token, 62075.43 tokens per second)
llama_perf_context_print:        load time =     631.46 ms
llama_perf_context_print: prompt eval time =      63.57 ms /     9 tokens (    7.06 ms per token,   141.57 tokens per second)
llama_perf_context_print:        eval time =    7110.09 ms /   255 runs   (   27.88 ms per token,    35.86 tokens per second)
llama_perf_context_print:       total time =    7184.25 ms /   264 tokens
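The figures in a log like this can be cross-checked by hand. As a small sketch (using only the numbers quoted above): the KV cache stores K and V tensors in f16 (2 bytes each) for 4096 cells across 32 layers at an embedding width of 4096, and the eval throughput is simply generated tokens divided by elapsed time:

```python
# Sanity-check two figures from the llama.cpp log above.

# KV cache: 4096 cells x 32 layers x 4096 embedding dims x 2 bytes (f16),
# computed separately for K and V.
k_bytes = 4096 * 32 * 4096 * 2
assert k_bytes == 1024 * 1024 ** 2   # 1024.00 MiB for K (and the same for V)

# Eval throughput: 255 generated tokens in 7110.09 ms.
tokens_per_second = 255 / (7110.09 / 1000)
print(round(tokens_per_second, 2))   # ~35.86, matching the log
```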

Closing words

These are just my first steps. Most of the time I was not even fully aware of what I was doing; I just reused sample command lines and code. Still, these experiments were good enough to show that AI works on Linux as well, not just on Windows.

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

the avatar of Just Another Tech Blog

Himmelblau Workshop – Hands-On Integration on April 21 in Germany

On April 22, 2026—one day after the 25th sambaXP—the first official Himmelblau Workshop will take place in Göttingen, Germany.

At last year’s sambaXP, I presented “Azure Entra ID Authentication in Samba Using the Himmelblaud Daemon”.
Since then, the project has evolved rapidly, moving from a technical introduction to practical deployment.

My workshop this year builds on that foundation and is aimed at:

  • Linux system administrators
  • Identity and Entra ID engineers
  • Intune and device management teams
  • IT professionals managing hybrid Linux environments

Participants will work hands-on with Linux clients, both with and without a GUI, and will configure:

  • Entra ID authentication
  • Multi-factor authentication
  • Policy enforcement
  • License management
  • Intune-based device management

The session uses the current stable Himmelblau release. Entra ID accounts will be provided, so no personal tenant or prior setup is required.

If you are responsible for integrating Linux systems with Entra ID and want to move from protocol discussion to real-world implementation, this workshop provides a structured, practical environment.

Registration for the workshop is required and available at sambaxp.org.

Also, take the opportunity to explore this year’s sambaXP: my talk, “Linux Meets Intune: From Enrollment to Enforcement in Himmelblau,” is scheduled for Day 1 of the conference—I’d be glad to have you join!

the avatar of Nathan Wolf

Linux Saloon 192 | Storm OS Distribution Exploration

The Linux Saloon discussed Storm OS, an Arch-based distribution created by Ben and contributors. Feedback highlighted the need for productivity apps to attract intermediate users. Participants shared their experiences in tech, including testing openSUSE Tumbleweed. Suggestions for improvement focused on appealing to a broader audience of potential users.


Tumbleweed – Review of the week 2026/12

Dear Tumbleweed users and hackers,

Tumbleweed is rolling full steam ahead with 7 snapshots in 7 days (0312 through 0318). No major issues have shown up in openQA – everything was detected and fixed in the staging areas.

Without further ado, let’s look at what those snapshots brought you this week:

  • libzypp 17.38.4 / zypper 1.14.95 / libsolv 0.7.36
  • sdbootutil 20260311 & 20260313
  • Mesa 26.0.2
  • cURL 8.19.0
  • Linux kernel 6.19.7 & 6.19.8
  • php 8.4.19
  • systemd 259.5
  • KDE Frameworks 6.24.0
  • gimp 3.2.0
  • kbd 2.9.0
  • pipewire 1.6.2
  • Ruby 4.0.2
  • elfutils 0.194
  • gpg 2.5.18

Let’s see if we can keep that pace next week, and if so, what changes you can expect:

  • GCC 16: build fix for s390x
  • Linux kernel 6.19.9
  • Switch the default bootloader on UEFI systems to systemd-boot (aligning Tumbleweed with MicroOS)
  • cmake 4.3.0
  • LLVM 22
  • GCC 16 as the default compiler
  • Autoconf 2.73.0: currently the beta is staged to identify issues
  • GNOME 50: the final release is staged for QA; some security reviews and third-party package fixes are pending
  • glibc 2.43: metabug: https://bugzilla.opensuse.org/show_bug.cgi?id=1257250

the avatar of openSUSE News

Planet News Roundup

This is a roundup of articles from the openSUSE community listed on planet.opensuse.org.

The community blog feed aggregator lists the featured highlights below from March 13 to March 19.

Blogs this week highlight OBS request workflow improvements with better comment visibility and RPMLint integration, and a new cockpit-client-launcher package simplifying Cockpit setup on Tumbleweed and Leap. Blogs also cover KDE Plasma 6.6’s third bugfix update, Marknote 1.5’s new raw Markdown editing mode, Kontainer as a KDE-native Distrobox manager, openSUSE’s new Cavil-Qwen3.5-4B legal classification model, a LogAI tool for querying system logs in plain English, Victorhck handing off his unofficial openSUSE guide, the GNOME 50 wallpaper design story, and more.

Here is a summary and links for each post:

My New Toy: FreeBSD on the HP Z2 Mini Revisited

Peter Czánik’s Blog continues the series on his HP Z2 Mini AI workstation. This time he revisits FreeBSD after resolving a graphics driver issue by switching from the AMD to the ATI kernel module. The GNOME desktop is now stable and functional. A switch to KDE’s Plasma resolves a remaining screen-locking issue.

Debate: The State vs. Social Networks – New Event by GNU/Linux València

The KDE Blog announces an upcoming event organized by the nonprofit association GNU/Linux València on March 27 in Valencia, Spain. The evening begins with a Linux install party at 17:00, followed at 18:30 by an open debate on new social media regulations, their implications for privacy, and how federated alternatives like the Fediverse compare. Admission is free.

Latest Improvements to the Request Page

The Open Build Service Blog highlights improvements to the OBS request workflow. Changes include visual highlighting of comments, an enhanced Accept dropdown with improved accessibility, and contextual descriptions for RPMLint results integrated directly into the UI.

Friday Sketches (Part 2)

Jakub Steiner’s Blog shares a large collection of app icon sketches produced during GNOME Design Team Friday sessions over the past two years. Scroll through all the sketches.

New Launcher Aims to Simplify Cockpit Installations

The openSUSE News team introduces cockpit-client-launcher, which is a new package that gives openSUSE users a straightforward desktop entry point for the Cockpit web-based system administration interface. The launcher, which features a YaST-inspired icon, automates systemd service activation and firewall configuration on first launch. It is available as an official package on both Tumbleweed and Leap.

Central Log Collection – More Than Just Compliance

Peter Czánik’s Blog makes the case that centralized log collection benefits more than just regulatory compliance: it improves operational ease, log availability during outages, and security against log tampering. The post walks through practical scenarios at different scales, from a handful of machines to networks of hundreds.

Displaying System Info with Native KDE Plasma 6 Plasmoids

Victorhck walks through how to use the built-in System Monitor Sensor plasmoid in KDE Plasma 6 to display system information such as uptime directly on the desktop. The guide covers configuring the widget’s appearance, selecting sensors, and customizing displayed labels.

Third Update of KDE Plasma 6.6

The KDE Blog announces the third bugfix update for Plasma 6.6. The update is part of KDE’s regular maintenance cadence and follows the full Plasma 6.6 feature release. The update is strongly recommended for all users.

Brazilian Digital Children’s Law and Linux: Debunking the Panic

Alessandro’s Blog takes a detailed look at Brazil’s Lei 15.211/2025 (the “Digital Statute for Children and Adolescents”), which sparked widespread but unfounded claims that Linux would be banned in Brazil. The post argues that the episode was driven more by misinformation and social media panic than by the actual legal text.

GNOME 50 Wallpapers

Jakub Steiner’s Blog celebrates the GNOME 50 release by walking through the design history behind the new default wallpaper. The post also covers updates to the Symbolics and glass chip wallpapers, and previews the new Tubes design aimed at dark-theme users.

My New Toy: AI First Steps with the HP Z2 Mini

Peter Czánik’s Blog recounts first experiments with AI features on Windows using the AMD Ryzen 395’s NPU on the HP Z2 Mini workstation. The Windows Recall feature could not be tested since it requires Secure Boot, which was disabled for Linux dual-boot compatibility.

My Unofficial openSUSE Guide Changes Hands

Victorhck announces that the Spanish-language unofficial openSUSE guide, which he has maintained since 2016, is being handed off to community member Diablo Rojo for continued maintenance and modernization. The guide, aimed at newcomers to openSUSE Leap, has been updated to reflect a decade of changes in the project including the rise of Tumbleweed, the transition away from YaST, and the arrival of Myrlyn.

Code Mode in Marknote, S3 Support in Dolphin, and Glaxnimate Release – This Month in KDE Apps

The KDE Blog summarizes a month’s worth of KDE application progress. Marknote gained a plain-text code mode, a note-linking dialog, search-and-replace, and animated UI transitions; Dolphin added S3 support for custom endpoints and is no longer limited to AWS-compatible services.

Kontainer – Distrobox Container Manager Built for KDE Plasma

The CubicleNate Blog reviews Kontainer, a KDE-native graphical interface for managing Distrobox containers. The app integrates well with the Plasma desktop and simplifies installing and running software from other Linux distributions inside containers.

Cavil-Qwen3.5-4B Legal Classification Model

The openSUSE News team announces Cavil-Qwen3.5-4B, a new fine-tuned language model published on the project’s HuggingFace page. It is designed to automate detection of license declarations and copyright notices in code repositories. GGUF quantized versions contributed by a community member are also available for local use with tools like llama.cpp.

Marknote 1.5 Arrives in KDE

Victorhck covers the Marknote 1.5 release. The post highlights the new source mode that lets users edit raw Markdown without the WYSIWYG renderer. Other additions include wiki-style internal note linking with cross-notebook search, drag-and-drop note management, a KRunner plugin for instant note access, and more.

This Month in KDE Linux – February Progress

The KDE Blog summarizes Nate Graham’s February update on KDE. Highlights include more accurate download size reporting in Discover, the introduction of Kapsule as a new container-based software installer, improved Flatpak localization, and better AMD GPU crash protection.

LogAI - Asking The System Logs in Plain English

Zoltán Balogh’s Blog introduces LogAI, a locally run RAG (Retrieval-Augmented Generation) system for querying the Linux journalctl system log. RAG is a technique that enables large language models to retrieve and incorporate new information from external data sources. The post describes the motivation: replacing grep-heavy log triage with natural-language questions like “What went wrong last night?”
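To illustrate the retrieval half of that idea (this is not LogAI’s actual code, and the scoring is deliberately naive), a minimal sketch might rank log lines by keyword overlap with the question and hand only the best matches to the language model:

```python
def retrieve(question: str, log_lines: list[str], k: int = 2) -> list[str]:
    """Return the k log lines sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        log_lines,
        key=lambda line: len(q_words & set(line.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical journal excerpts, purely for illustration.
logs = [
    "Mar 18 02:13:01 host sshd[811]: Failed password for root",
    "Mar 18 02:14:22 host kernel: usb 1-2: new device found",
    "Mar 18 02:15:40 host sshd[812]: Failed password for admin",
]
print(retrieve("what failed password attempts happened", logs))
```

A real system would use embeddings rather than word overlap, but the shape is the same: retrieve first, then generate.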

Press and Hold for Alternative Characters – This Week in Plasma

The KDE Blog covers the latest Plasma development highlights like a new press-and-hold feature for the plasma-keyboard virtual keyboard that surfaces alternative and diacritic characters. Other changes include custom sound theme installation from downloaded files, a Global Menu widget fix for multi-monitor setups, and more.

Linux Saloon 192 – Open Mic Night

The CubicleNate Blog recaps episode 192 of the Linux Saloon podcast. Topics ranged from protecting personal data while browsing the internet to early streaming memories with RealPlayer and the history of IRC.

Personal Digital Sovereignty

Cornelius Schumacher’s Blog reflects on what personal digital sovereignty means in practice. The post emphasizes the importance of having control over one’s digital life, along with the ability to leave a service whenever one desires. He uses his own stack, built around Linux, KDE, self-hosted Nextcloud, and GitJournal, as an example.

24th Update of KDE Frameworks 6

The KDE Blog highlights KDE Frameworks 6.24.0, the 24th monthly maintenance update for version 6. The post starts with Attica and then covers several Qt-based projects.

openSUSE Tumbleweed Weekly Review – Week 11 of 2026

Victorhck and dimstar report on a full week of snapshots delivered in week 11. A total of seven snapshots were submitted and six were released. Snapshot 0309 was held back due to a SELinux policy sync issue with systemd 259.3 that was resolved in the next snapshot. Delivered updates include Linux kernel 6.19.6 (and longterm kernel 6.18.16), KDE Gear 25.12.3, systemd 259.3, Pipewire 1.6.1, and more.

Glaxnimate 0.6 Released, the 2D Vector Graphics Editor for Animation Creation

The KDE Blog announces the release of the 2D vector graphics editor Glaxnimate 0.6.0. The release brings improved cross-platform support (including the Microsoft Store and macOS), KDE theming support, and an increase in translated languages from 8 to 26. New features include better SVG import/export, undo/redo for layer visibility, and more.

View more blogs or learn to publish your own on planet.opensuse.org.


Agama 19 - A New Beginning

In our previous post from November 2025 we already told you to expect a temporary slowdown in this blog’s activity. And here we are, more than four months later, finally breaking that hiatus by announcing a new Agama version. But why did it take so long to go from Agama 18 to Agama 19?

The key is that Agama 19 is not just another incremental change. This new version of Agama actually represents a new starting point in several aspects, from the architectural design to the organization of the web user interface, including some rewritten components and much more.

Architectural revamp

We always wanted Agama to follow the schema displayed below, in which the core of the installer can be controlled through a consistent and simple programming interface (an API, in developers’ jargon). In that schema, the web-based user interface, the command-line tools and the unattended installation are built on top of that generic API.

Agama general architecture

But previous versions of Agama were full of quirks that didn't allow us to define an API that would match our quality standards as a solid foundation to build a simple but comprehensive installer. Agama 19 represents a quite significant architectural overhaul, needed to leave all those quirks behind and to define mechanisms that can be the cornerstone for any future development (see #2951 and #2998).

Of course, such a drastic change opens the door for potential bugs. Your testing, feedback and kind bug reports will help us to consolidate the new mechanisms in upcoming Agama versions.

Note that, despite the redesign of the programming interface, the JSON-based configuration format remains fully backwards compatible. Any JSON or Jsonnet profile that worked in previous versions of Agama will keep working in Agama 19 and beyond.
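As a minimal illustration of that format (a sketch based on published Agama profile examples; all field values here are hypothetical and not taken from this post):

```json
{
  "product": { "id": "Tumbleweed" },
  "user": {
    "fullName": "Jane Doe",
    "userName": "jane",
    "password": "changeme"
  }
}
```

Such a profile can be fed to the Agama command-line tools for unattended installation, and the same format is what the web interface can now export.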

In a similar way, we also expect to declare the Agama API as stable soon. So anyone could then write their own tools to directly interact with the Agama core, without depending on the web user interface or the Agama command-line tools.

Reorganization of the web user interface

Having a better API enabled us to adjust the web user interface to be closer to our original vision. We still have a long way to go on our road to a fully usable interface, but the new navigation experience, based on a better overview page and a more useful confirmation dialog, sets the direction to follow (see #2988, #3009 and #3025).

Agama overview page

Although most configuration sections remain similar to previous versions of Agama, we plan to revamp some of them. The process has already started for the sections to configure iSCSI, DASD, zFCP and network.

Regarding network, there are two important changes. On the one hand, the user interface now dynamically reacts to changes in the underlying system, for instance when a network cable is plugged in or a new WiFi adapter is connected (see #3285). On the other hand, it is now possible to define new Ethernet connections. That is very relevant in installation scenarios with several network adapters that need to be configured in different ways, for example where one network is used to access storage devices and another one is used to reach the installation repositories.

New network section

The web user interface also got a new option to download the current installer configuration in the JSON format used by the Agama command line tools and for unattended installation. That is the first step to turn the web interface into a useful learning and prototyping tool for more advanced scenarios, although this new functionality could benefit from several usability improvements. Stay tuned.

Downloading Agama configuration

All the mentioned changes in the user interface will require several updates to the screenshots and guides available at the project home page. That will not happen overnight, so please bear with us during that gradual process. Of course, the page is (just like Agama itself) maintained in a public repository, so feel free to contribute to speed the process up.

Rewritten internal components

As you may know, YaST still lives at the core of Agama. Many tasks, like managing storage devices or configuring the boot loader, are done under the hood by the corresponding YaST modules (i.e., yast2-storage-ng or yast2-bootloader). But lately the usage of some particular YaST modules became more of a limiting factor than an advantage.

That is the case for yast2-users and yast2-software. Both are very complex, due to historical reasons and to their ability both to install a new system and to administer an already installed one, something that is out of Agama’s scope.

Thus, we decided to use the architectural revamp as an opportunity to replace those YaST parts with simpler implementations that will allow us to evolve faster in the future. Agama 19 includes its own management of users and, even more important and ambitious, its own management of software including the registration of SUSE Linux Enterprise and associated products and extensions (see #2915, #2978 and #2982).

Installation modes

But Agama 19 does not bring only restructuring and rewrites; it also comes with a bunch of new functionality, like the new ability to install some distributions in different so-called installation modes.

When installing the experimental pre-releases of SLES 16.1 or the corresponding version of SLES for SAP Applications, now it is possible to select between the Standard and the Immutable modes. See the following screenshot for details.

Installation modes for SLES 16.1

Agama support for installation modes is not limited to the use case illustrated above. Other distributions (“products” in Agama jargon) like openSUSE Leap or Tumbleweed may make use of modes in the future to redefine their software and storage configurations, offering different variants of the same operating system.

More configuration options

Although modes are the most visible of the new features, we also added other new capabilities to Agama that are, at least for now, only accessible using the JSON configuration. That makes those new features available for users of the command-line interface and of unattended installations.

Probably the most awaited of those new features is the ability to install into an existing LVM volume group (see #3210). When doing so, it is possible to create new logical volumes within the pre-existing volume group, and to reuse, delete or resize the existing ones. Agama 19 even allows adding new physical volumes to an existing volume group as part of the process. Most of those capabilities will soon be added to the web user interface.

We also extended the boot loader configuration with a new setting, updateNvram (see #3185), which, when disabled, prevents the boot loader from updating the persistent NVRAM. That is an expert feature requested by several users to handle broken firmware or network setups.
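In the JSON configuration this might look as follows (a sketch: the setting name comes from the post, but its placement under a bootloader section is an assumption on our part):

```json
{
  "bootloader": { "updateNvram": false }
}
```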

Last but not least, now it is possible to specify several SSH public keys to authenticate the root user and also to use SSH keys as authentication mechanism for the non-root user created by Agama.

Many changes in the installation media

As you can see, Agama 19 is quite a significant release. But four months leave room for many things, even for work beyond Agama itself. During this time we also incorporated several changes into the live ISO that most of you use to run Agama.

Those changes include several improvements in the boot menu (like better support for serial console or adapted timeouts), dropping the "Boot from Disk" option in most architectures, unifying the location of kernel and initramfs between the different architectures and a new boot argument live.net_config_tui=1 (see #2923) to trigger nmtui (an interactive network configuration tool) before Agama starts.

Back to regular speed

It is clear that we consider Agama 19 to be a crucial milestone in the (still short) Agama history, but it is by no means the end of the path. Quite the opposite, we expect to recover our usual development pace and deliver new versions almost every month, as you can see in the updated roadmap.

But with great software rewrites comes great opportunity for new bugs, so we depend on your bug reports, your feedback and your contributions to keep improving. Do not hesitate to reach out to us at the Agama project on GitHub or in the #yast channel on Libera.Chat.

Have a lot of fun!


My new toy: FreeBSD on the HP Z2 mini revisited

Last week, I wrote about my initial FreeBSD experiences on my new toy, an AI workstation from HP. FreeBSD runs lightning fast on it, but the desktop was somewhat problematic. Well, I made lots of improvements this week!

A bit of debugging

While there are still some rough edges, there have been tons of improvements since last week. I do not have plans to use FreeBSD on the desktop in the long term, but still, I just could not believe that the FreeBSD GUI is this problematic on this device. I did some experimentation though and it helped a lot… :-)

The initial problem I realized while browsing the output of dmesg was that desktop-installer enabled the wrong kernel modules repository for me. The line leading there was this:

KLD amdgpu.ko: depends on kernel - not available or version mismatch

The next problem occurred when I fixed this problem: there was a kernel panic on boot, when amdgpu.ko was loaded.

I did a fresh FreeBSD install and instead of using the latest packages, I decided to go with the quarterly packages. This way, the desktop installer configured the right kmod repo – however, loading amdgpu.ko still caused a kernel panic. Another experiment I made was using the ATI driver instead of AMD. The installer says that AMD is for modern cards, and ATI is for older ones. Well, as it turned out, even if the chip is barely half a year old, it counts as “old”… :-)

I am still not convinced that proper hardware-based acceleration works: both the X.org logs and the GNOME “About” page showed software rendering. However, I had no problem with graphics performance: TuxRacer worked perfectly well… :-) And the GNOME desktop also worked nicely and was stable, including video playback. The only pain point when using GNOME was that screen locking still did not work.

KDE to the rescue

Even if it’s just software rendering, the graphics problem seems to be resolved. However, the screen locking problem still bothered me, as I’m an IT security guy with a healthy dose of paranoia (which means that I lock my screen even when I’m home alone… :-)).

So even though I haven’t tried KDE for the past 5+ years, I gave it a try now. After so many years on XFCE and GNOME, the interface looks a bit weird. However, everything I tried on it seemed to work just fine, including screen locking.

KDE on FreeBSD

This blog is part of a longer series about my adventures with my new machine and AI. You can reach me to discuss this blog on one of the contacts listed in the upper right corner. You can read the rest of the blogs under the toy tag.

the avatar of Open Build Service

Latest Improvements to the Request Page

The improvement of the Open Build Service (OBS) Request Page continues! This update introduces several new features and bug fixes, focusing on smarter action menus and more accessible metadata. Here’s a breakdown of what’s new in this iteration: Highlighting of Commented Lines Reviewing code is now easier. When a line in a diff gets commented, it is clearly highlighted to help you focus on the discussion. Enhanced Visibility in the Accept Dropdown The “Accept” menu...