
Feed aggregator

Bilal Elmoussaoui: Testing a Rust library - Code Coverage

Planet GNOME - Mon, 13/10/2025 - 2:00 am

It has been a couple of years since I started working on a Rust library called oo7, a Secret Service client implementation. The library ended up also supporting per-app keyrings for sandboxed applications using the Secret portal, with a seamless API that makes usage from the application side straightforward.

The project, with time, grew support for various components:

  • oo7-cli: A secret-tool replacement, but much better, as it allows interacting not only with the Secret service on the DBus session bus but also with any keyring. For example, oo7-cli --app-id com.belmoussaoui.Authenticator list lets you open the keyring of the sandboxed app with app ID com.belmoussaoui.Authenticator and list its contents, something that is not possible with secret-tool.
  • oo7-portal: A server-side implementation of the Secret portal mentioned above. Straightforward, thanks to my other library ASHPD.
  • cargo-credential-oo7: A cargo credential provider built using oo7 instead of libsecret.
  • oo7-daemon: A server-side implementation of the Secret service.

The last component was kickstarted by Dhanuka Warusadura, as we already had the foundation for that in the client library, especially the file backend reimplementation of gnome-keyring. The project is slowly progressing, but it is almost there!

The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure the very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.

That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.

Coverage Report

One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.

In order to get the coverage of a Rust project, you can use a tool like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,native_crypto" \
    --ignore-panics \
    --out Html \
    --output-dir coverage

Except that in our use case it is slightly more complicated: the client library supports switching between native Rust cryptographic primitive crates and OpenSSL, and we must ensure that both are tested.

For that, we can export our report in LCOV for native crypto and do the same for OpenSSL, then combine the results using a tool like grcov.

mkdir -p coverage-raw

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,native_crypto" \
    --ignore-panics \
    --out Lcov \
    --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info

cargo tarpaulin \
    --package oo7 \
    --lib \
    --no-default-features \
    --features "tracing,tokio,openssl_crypto" \
    --ignore-panics \
    --out Lcov \
    --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info

and then combine the results with

cat coverage-raw/*.info > coverage-raw/combined.info

grcov coverage-raw/combined.info \
    --binary-path target/debug/ \
    --source-dir . \
    --output-type html \
    --output-path coverage \
    --branch \
    --ignore-not-existing \
    --ignore "**/portal/*" \
    --ignore "**/cli/*" \
    --ignore "**/tests/*" \
    --ignore "**/examples/*" \
    --ignore "**/target/*"

To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.
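
The script itself lives in the repository; a minimal sketch of the idea, mirroring the commands above (the daemon part is omitted here), could look like this:

#!/usr/bin/env bash
# Sketch: run tarpaulin once per crypto backend, collect the LCOV files,
# then merge everything into a single HTML report with grcov.
set -euo pipefail

mkdir -p coverage-raw

for backend in native_crypto openssl_crypto; do
    cargo tarpaulin \
        --package oo7 \
        --lib \
        --no-default-features \
        --features "tracing,tokio,${backend}" \
        --ignore-panics \
        --out Lcov \
        --output-dir coverage-raw
    mv coverage-raw/lcov.info "coverage-raw/${backend}.info"
done

cat coverage-raw/*.info > coverage-raw/combined.info
grcov coverage-raw/combined.info \
    --binary-path target/debug/ \
    --source-dir . \
    --output-type html \
    --output-path coverage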

With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.

Testing

For the client side, most of the tests are straightforward to write; you just need to have a secret service implementation running on the DBus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism used in the spec to define a way for the user to be prompted for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skipped those tests.

For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:

let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();

let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();

Thanks to the design of the client library, we keep the low-level APIs under oo7::dbus::api, which allowed me to straightforwardly write a bunch of server-side tests already.

There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.

Don't overdo it

The coverage report is not meant to be targeted at 100%. It’s not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.

Till then, may your coverage never reach 100%.

Hubert Figuière: Dev Log September 2025

Planet GNOME - Sat, 11/10/2025 - 2:00 am

Not as much was done in September as I wanted.

libopenraw

Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.

Added the Nikon ZR and Canon EOS C50.

ExifTool

Submitted some metadata updates to ExifTool, because it is nice to have, and also because libopenraw autogenerates code from some of these: I have a Perl script to generate Rust code from it (it used to generate C++).

Niepce

Finally merged the develop branch with all the import dialog work, after having requested that it be removed from Damned Lies so as not to strain the translators, as there is a long way to go before we can freeze the strings.

Supporting cast

Among the many packages I maintain / update on Flathub, LightZone is a digital photo editing application written in Java1. Updating to the latest runtime 25.08 caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command line tool gdk-scale to output the value. See gdk-scale on GitLab. And another patch in the wrapper script.

HiDPI support remains a mess across the board. FLTK just recently gained support for it (it's used by a few audio plugins).

1

Don't try this at home.

Sebastian Wick: SO_PEERPIDFD Gets More Useful

Planet GNOME - Fri, 10/10/2025 - 7:04 pm

A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was - which sandbox it ran in, what application it represented, or what permissions it should have.

The situation has improved considerably since then.

cgroup xattrs

Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.

We can change flatpak (or snap, or any other container engine) to create a cgroup for application instances it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information the compositor or D-Bus service might need.

Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.
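
As a rough sketch of the launcher side (the cgroup path and attribute names below are purely illustrative, not an agreed-upon convention):

# Container engine side: tag the app instance's cgroup with user xattrs.
# The cgroup path and attribute names are made up for illustration.
CG=/sys/fs/cgroup/user.slice/app-com.example.App-1234.scope
setfattr -n user.engine      -v flatpak         "$CG"
setfattr -n user.app_id      -v com.example.App "$CG"
setfattr -n user.instance_id -v 1234            "$CG"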

Standardized Authentication

Remember the complexity from the original post? Services had to implement different lookup mechanisms for different sandbox technologies:

  • For flatpak: look in /proc/$PID/root/.flatpak-info
  • For snap: shell out to snap routine portal-info
  • For firejail: no solution

All of this goes away. Now there’s a single path:

  1. Accept a connection on a socket
  2. Use SO_PEERPIDFD to get a pidfd for the client
  3. Query the client’s cgroup using the pidfd
  4. Read the cgroup’s user xattrs to get the sandbox metadata

This works the same way regardless of which sandbox engine launched the application.
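
Steps 1 and 2 need the socket and pidfd APIs, but the cgroup part can be sketched from a shell, given the client's PID (using the same illustrative attribute names as above; a real service would resolve the cgroup through the pidfd itself to stay race-free):

# Step 3: find the client's cgroup (cgroup v2 path after "0::")
CGROUP_PATH=$(cut -d: -f3- /proc/"$CLIENT_PID"/cgroup | head -n1)

# Step 4: read the sandbox metadata the container engine attached
getfattr -d -m 'user\.' /sys/fs/cgroup"$CGROUP_PATH"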

A Kernel Feature, Not a systemd One

It’s worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.

To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.

Replacing Socket-Per-App

The old approach - creating dedicated Wayland, D-Bus, etc. sockets for each app instance and attaching metadata that gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it’s more flexible: metadata can be updated after process creation if needed.

For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.

Jiri Eischmann: Fedora & CentOS at LinuxDays 2025

Planet GNOME - Tue, 07/10/2025 - 6:23 pm

Another edition of LinuxDays took place in Prague last weekend – the country’s largest Linux event, drawing more than 1200 attendees – and as every year, we had a Fedora booth there, this time also representing CentOS.

I was really glad that Tomáš Hrčka helped me staff the booth. I’m focused on the desktop part of Fedora and don’t follow the rest of the project in such detail. As a member of FESCo and the Fedora infra team, he has a great overview of what is going on in the project, and our areas of knowledge complemented each other very well when answering visitors’ questions. I’d also like to thank Adellaide Mikova, who helped us tremendously despite not being a technical person.

This year I took our heavy 4K HDR display and showcased HDR support in Fedora Linux, whose implementation was a multi-year effort for our team. I played HDR videos in two different video players (one that supports HDR and one that doesn’t), so that people could see the difference, and explained what needed to be implemented to make it work.

Another highlight of our booth were the laptops that run Fedora exceptionally well: Slimbook and especially Framework Laptop. Visitors were checking them out and we spoke about how the Fedora community works with the vendors to make sure Fedora Linux runs flawlessly on their laptops.

We also got a lot of questions about CentOS. We met quite a few people who were surprised that CentOS still exists. We explained to them that it lives on in the form of CentOS Stream and tried to dispel some of the common misconceptions surrounding it.

Exhausting as it is, I really enjoy going to LinuxDays; it’s also a great opportunity to explain things and get direct feedback from the community.

Dimitri John Ledkov: Achieving actually full disk encryption of UEFI ESP at rest with TCG OPAL, FIPS, LUKS

Planet Ubuntu - Mon, 28/07/2025 - 1:13 pm
Achieving full disk encryption using FIPS, TCG OPAL and LUKS to encrypt UEFI ESP on bare-metal and in VMs

Many security standards such as CIS and STIG require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage and TPMs to enhance the protection of information confidentiality and/or integrity.

Traditionally, to satisfy such controls on portable devices such as laptops, one would use software-based full disk encryption - Mac OS X FileVault, Windows BitLocker, Linux cryptsetup LUKS2. In cases where FIPS cryptography is required, an additional burden would be placed onto these systems to operate their kernels in FIPS mode.

The Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve the safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help to protect secret keys and attest systems. For example, most software full disk encryption solutions can utilise a TCG TPM to store full disk encryption keys, providing passwordless, biometric or PIN-based ways to unlock the drives, as well as attesting that the system has not been modified or compromised whilst offline.
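
On Linux, for example, a LUKS2 volume's unlock key can be sealed to the TPM with systemd-cryptenroll (a minimal sketch; the device path and PCR choice are illustrative):

# Enroll a TPM2-sealed key slot on an existing LUKS2 volume,
# binding it to PCR 7 (Secure Boot state); the device path is an example.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3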

The TCG Storage Security Subsystem Class: Opal Specification is a set of specifications for features of data storage devices. The authors of and contributors to OPAL are leading and well-trusted storage manufacturers such as Samsung, Western Digital, Seagate Technologies, Dell, Google, Lenovo, IBM and Kioxia, among others. One of the features the Opal Specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect the data, one can enter the UEFI firmware setup (BIOS) and set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key. If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions already package SEDutil.

Once a password is set, pre-boot authentication will ask for it on startup - prior to booting any operating system. It means that the full disk is actually encrypted, including the UEFI ESP and all operating systems that are installed in the case of dual or multi-boot installations. This also prevents tampering with the ESP, UEFI bootloaders and kernels, which with traditional software-based encryption often remain unencrypted and accessible. It also means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.

What about FIPS compliance? Well, the good news is that the majority of OPAL-compliant hard drives and/or security sub-chips do have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for the module name terms "OPAL" or "NVMe", or the name of a hardware vendor, to locate FIPS certificates.

Are such drives widely available? Yes. For example, a common ThinkPad X1 gen 11 has OPAL NVMe drives as standard, and they have FIPS certification too. Thus, it is likely these are already widely available in your hardware fleet. Use sedutil to check whether the MediaEncrypt and LockingSupported features are available.
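
For example, something along these lines (the device path is an example; the exact output fields vary between sedutil versions):

# List Opal-capable drives, then dump one drive's capabilities and look
# for the LockingSupported and MediaEncrypt fields in the output.
sudo sedutil-cli --scan
sudo sedutil-cli --query /dev/nvme0n1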

Well, this is great for laptops and physical servers, but you may ask - what about the public or private cloud? Actually, more or less the same is already in place in both. On the CMVP website all major clouds have their disk encryption hardware certified, and all of them always encrypt all virtual machines with FIPS-certified cryptography, without an ability to opt out. One is, however, in full control of how the encryption keys are managed: cloud-provider-managed or self-managed (either with a cloud HSM or KMS, or bring your own / external). See the relevant encryption options and key management docs for GCP, Azure, AWS. But the key takeaway is that, without doing anything, VMs in the public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.

What about the private cloud? Most Linux-based private clouds ultimately use qemu, typically with qcow2 virtual disk images. Qemu supports user-space encryption of qcow2 disks, see this manpage. Such encryption covers the full virtual machine disk, including the bootloader and ESP, and it is handled entirely outside of the VM on the host - meaning the VM never has access to the disk encryption keys. Qemu implements this encryption entirely in userspace using gnutls, nettle or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace without a Linux kernel in FIPS mode. Higher-level APIs built on top of qemu also support qcow2 disk encryption, as in projects such as libvirt and OpenStack Cinder.
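
A minimal sketch of creating such a LUKS-encrypted qcow2 image with qemu-img (the secret value and size are illustrative; in practice the key would come from a file or a secrets store rather than the command line):

# Create a LUKS-encrypted qcow2 image; the passphrase is passed as a
# qemu "secret" object and is only ever handled on the host side.
qemu-img create -f qcow2 \
    --object secret,id=sec0,data=hunter2 \
    -o encrypt.format=luks,encrypt.key-secret=sec0 \
    encrypted-disk.qcow2 20G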

If you read the docs carefully, you may notice that agent support is sometimes explicitly called out as not supported, or is not mentioned at all. Quite often agents running inside the OS do not have enough observability to assess whether there is external encryption. It does mean that monitoring the above encryption options requires different approaches - for example, monitor your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop / endpoint security agents, I do wish they would start gaining the capability to report OPAL SED availability and whether it is active or not.

What about using software encryption nonetheless, on top of the above solutions? It is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need. If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the OPAL encryption of the ESP. For more targeted per-file / per-folder encryption, one can look into using gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (an amazing tool, but it has fallen behind in development and can lead to data loss).

All of the above mostly talks about cryptographic encryption, which only provides confidentiality but not data integrity. To protect integrity, one needs to choose how to maintain it. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using filesystems without built-in integrity support, such as XFS or Ext4, one can retrofit an integrity layer to them by using dm-integrity (either standalone, or via the dm-crypt/cryptsetup --integrity option).
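
As a minimal sketch (the device path is an example and both operations are destructive), an integrity layer can be added either together with encryption or standalone:

# LUKS2 with authenticated encryption (dm-crypt + dm-integrity underneath)
sudo cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX

# Or a standalone dm-integrity device without encryption
sudo integritysetup format /dev/sdX
sudo integritysetup open /dev/sdX integr0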

If one has a large estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular one is likely the solution from Thales Group marketed as the CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM / vendor / hardware / cloud specific or agnostic solutions.

I hope this crash-course guide piques your interest in learning about and discovering modern confidentiality and integrity solutions, and in re-affirming or changing your existing controls w.r.t. data protection at rest.

Full disk encryption, including the UEFI ESP /boot/efi, is now widely achievable by default both on bare-metal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.

Jonathan Carter: DebConf25

Planet Ubuntu - Sat, 19/07/2025 - 7:12 pm

The last two weeks I attended DebConf and DebCamp in Brest, France.

Usually, I like to do a more detailed write-up of DebConf, but I was already quite burnt out when I got here, so I’ll circle back to a few things that were important to me in later posts.

In the meantime, thanks to everyone who made this DebConf possible, whether you volunteered for one task or were part of the organisation team. Also a special thanks to the wonderful sponsors who made this entire event possible!

See you next year in Argentina!

Jellyfish, taken during the day trip to the aquarium.

