Feed aggregator

Christian Schaller: Fedora Workstation 40 – what are we working on

Planet GNOME - Thu, 28/03/2024 - 7:56pm
So Fedora Workstation 40 Beta has just come out, so I thought I'd share a bit about some of the things we are working on for Fedora Workstation currently, and also major changes coming in from the community.

Flatpak

Flatpaks have been a key part of our strategy for desktop applications for a while now, and we are working on a multitude of things to make Flatpak an even stronger technology going forward. Christian Hergert is working on figuring out how applications that require system daemons will work with Flatpaks, using his own Sysprof project as the proof-of-concept application. The general idea here is to rely on the work that has happened in systemd around sysext/confext/portablectl, trying to figure out how we can get a system service installed from a Flatpak and the necessary bits wired up properly. The other part of this work, figuring out how to give applications permissions that today are handled with udev rules, is being worked on by Hubert Figuière, based on earlier work by Georges Stavracas on behalf of the GNOME Foundation, thanks to sponsorship from the Sovereign Tech Fund. So hopefully we will get both of these important issues resolved soon. Kalev Lember is working on polishing up the Flatpak support in Foreman (and Satellite) to ensure there are good tools for managing Flatpaks when you have a fleet of systems to manage, building on the work of Stephan Bergman. Finally, Jan Horak and Jan Grulich are working hard on polishing up the experience of using Firefox from a fully sandboxed Flatpak. This work is mainly about working with the upstream community to get some needed portals over the finish line and polishing up some UI issues in Firefox, like this one.
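
As a rough illustration of the sysext side of that idea (not the actual Sysprof integration; the image and unit names below are made up for the example), merging a system extension and starting a service shipped by it looks roughly like this:

sudo mkdir -p /var/lib/extensions
sudo cp sysprofd.raw /var/lib/extensions/   # hypothetical extension image
sudo systemd-sysext merge                   # overlay the extension onto /usr
systemd-sysext list                         # confirm the extension is active
sudo systemctl start sysprofd.service       # hypothetical service from the extension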

Toolbx

Toolbx, our project for handling developer containers, is picking up pace, with Debarshi Ray currently working on getting full NVIDIA binary driver support for the containers. One of our main goals for Toolbx at the moment is making it a great tool for AI development, and thus getting the NVIDIA & CUDA support squared away is critical. Debarshi has also spent quite a lot of time cleaning up the Toolbx website, providing easier access to and updating the documentation there. We are also moving to use the new Ptyxis (formerly Prompt) terminal application created by Christian Hergert in Fedora Workstation 40. This gives us a great GTK4 terminal, and we also believe we will be able to further integrate Toolbx and Ptyxis going forward, creating an even better user experience.

Nova

So as you probably know, we have been the core maintainers of the Nouveau project for years, keeping this open source upstream NVIDIA GPU driver alive. We plan to keep doing that, but the opportunities offered by the availability of the new GSP firmware for NVIDIA hardware mean we should now be able to offer a full-featured and performant driver. But co-hosting both the old and the new way of doing things in the same upstream kernel driver has turned out to be counterproductive, so we are now looking to split the driver in two. For older pre-GSP NVIDIA hardware we will keep the old Nouveau driver around as is. For GSP-based hardware we are launching a new driver called Nova. It is important to note here that Nova is thus not a competitor to Nouveau, but a continuation of it. The idea is that the new driver will be primarily written in Rust, based on work already done in the community. We are also evaluating whether some of the existing Nouveau code should be copied into the new driver, since we already spent quite a bit of time trying to integrate GSP there. Worst case, if we can't reuse code, we use the lessons learned from Nouveau with GSP to implement the support in Nova more quickly. Contributing to this effort from our team at Red Hat are Danilo Krummrich, Dave Airlie, Lyude Paul, Abdiel Janulgue and Phillip Stanner.

Explicit Sync and VRR

Another exciting development that has been a priority for us is explicit sync, which is especially critical for the NVIDIA driver, but which might also provide performance improvements for other GPU architectures going forward. So a big thank you to Michel Dänzer, Olivier Fourdan, Carlos Garnacho, the NVIDIA folks, Simon Ser and the rest of the community for working on this. This work has just finished upstream, so we will look at backporting it into Fedora Workstation 40. Another major Fedora Workstation 40 feature is experimental support for Variable Refresh Rate, or VRR, in GNOME Shell. The feature was mostly developed by community member Dor Askayo, but Jonas Ådahl, Michel Dänzer, Carlos Garnacho and Sebastian Wick have all contributed with code reviews and fixes. In Fedora Workstation 40 you need to enable it using the command

gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"
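
To double-check that the setting took effect, or to revert to the default later, you can read the key back and reset it:

gsettings get org.gnome.mutter experimental-features
gsettings reset org.gnome.mutter experimental-features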

PipeWire

I already covered PipeWire in my post a week ago, but to quickly summarize here too: using PipeWire for video handling is now finally getting to the stage where it is actually happening. Both Firefox and OBS Studio now come with PipeWire support, and hopefully we can also get Chromium and Chrome to start taking a serious look at merging the patches for this soon. What's more, Wim spent time fixing FireWire FFADO bugs, so hopefully for our pro-audio community users this makes their FireWire equipment fully usable and performant with PipeWire. Wim did point out when I spoke to him, though, that the FFADO drivers had obviously never had any other consumer than JACK, so when he tried to allow for more functionality the drivers quickly broke down; Wim has therefore limited the feature set of the PipeWire FFADO module to be an exact match of how these drivers were being used by JACK. If the upstream kernel maintainer is able to fix the issues found by Wim, then we could look at providing a fuller feature set. In Fedora Workstation 40 the de-duplication support for v4l vs libcamera devices should work as soon as we update WirePlumber to the new 0.5 release.
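
If you want to see which video nodes PipeWire exposes on your own machine (and whether a camera currently shows up twice via both v4l and libcamera), WirePlumber's status output is a quick check; treat this as a generic example rather than a Fedora-specific recipe:

wpctl status    # the Video section lists the camera source nodes PipeWire knows about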

To hear more about PipeWire and the latest developments be sure to check out this interview with Wim Taymans by the good folks over at Destination Linux.

Remote Desktop

Another major feature landing in Fedora Workstation 40 that Jonas Ådahl and Ray Strode have spent a lot of effort on is finalizing the remote desktop support for GNOME on Wayland. There has been support for remote connections to already logged-in sessions for a while, but with these updates you can do the login remotely too, so the session does not need to be started already on the remote machine. This work will also enable 3rd party solutions to do remote logins on Wayland systems, so while I am not at liberty to mention names, be on the lookout for more 3rd party Wayland remoting software becoming available this year.

This work is also important to help Anaconda with its Wayland transition as remote graphical install is an important feature there. So what you should see there is Anaconda using GNOME Kiosk mode and the GNOME remote support to handle this going forward and thus enabling Wayland native Anaconda.

HDR

Another feature we have been working on for a long time is HDR, or High Dynamic Range. We wanted to do it properly and also needed to work with a wide range of partners in the industry to make this happen. So over the last year we have been contributing to improve various standards around color handling and acceleration to prepare the ground, and have worked on and contributed to key libraries needed to, for instance, gather the needed information from GPUs and screens. Things are coming together now, and Jonas Ådahl and Sebastian Wick are now going to focus on getting Mutter HDR capable. Once that work is done we are by no means finished, but it should put us close to at least being able to start running some simple use cases (like some fullscreen applications) while we work out the finer points to get great support for, for instance, running SDR and HDR applications side by side.

PyTorch

We want to make Fedora Workstation a great place to do AI development and testing. The first step in that effort is packaging up PyTorch and making sure it can have working hardware acceleration out of the box. Tom Rix has been leading that effort on our end, and you will see the first fruits of that labor in Fedora Workstation 40, where PyTorch should work with GPU acceleration on AMD hardware (ROCm) out of the box. We hope and expect to be able to provide the same for NVIDIA and Intel graphics eventually too, but this is definitely a step-by-step effort.

Justin W. Flory: Win-win for all: How to run a non-engineering Outreachy internship

Planet GNOME - Thu, 28/03/2024 - 9:00am

The post Win-win for all: How to run a non-engineering Outreachy internship appeared first on /home/jwf/.

/home/jwf/ - Free & Open Source, technology, travel, and life reflections

This year, I am mentoring again with the Outreachy internship program. It is my third time mentoring for Outreachy and my second time with the Fedora Project. However, it is my first time mentoring as a Red Hat associate. What also makes this time different from before is that I am mentoring a non-engineering project with Outreachy. Or in other words, my project does not require an applicant to write any code. Evidently, the internship description was a hook. We received an extremely large wave of applicants literally overnight. Between 40 and 50 new contributors arrived at the Fedora Marketing Team in the first week. Planning tasks and contributions for beginners already took effort. Scaling that planning work overnight for up to 50 people simultaneously is extraordinarily difficult.

During this round, my co-mentor Joseph Gayoso and I experimented with new approaches to handling the tsunami wave. There are two competing forces at play. One, you need to provide engagement to top performers so they remain motivated to continue. Two, you need to provide new opportunities for emerging contributors to distinguish themselves. It is easier to do one of these but hard to do both simultaneously. However, Joseph and I agreed on something important. We agreed that all applicants should end the contribution phase with something practically useful. As mentors, we asked ourselves how to prepare applicants to be successful open source contributors beyond this one month.

In this article, you will get some practical takeaways for mentoring with Outreachy. First, I will share our practical approach for structuring and planning an open source project during the Outreachy contribution phase. Second, I will detail the guiding philosophy Joseph and I follow for how we planned the contribution phase.

About Outreachy

This article assumes you already know a thing or two about the Outreachy internship program. If not, Outreachy provides internships in open source and open science. Outreachy provides internships to people subject to systemic bias and impacted by underrepresentation in the technical industry where they live. You can read more on the Outreachy website.

What makes Outreachy unique is that the internships are remote and often open without geographic or nationality constraints. Applicants from nearly every continent of the world have participated in Outreachy. Also, Outreachy is distinguished by the contribution phase. For a one-month period, approved Outreachy applicants are encouraged to participate in the project community as a contributor. Applicants spend the month learning about the project, the community, the mentors, and the work involved for the internship. This provides applicants an opportunity to grow their open source identity. It also gives mentors an opportunity to assess applicants on their skills and communication abilities.

However, this contribution phase can be intimidating as a mentor, especially if you are new to mentoring with Outreachy. A wave of people eager to contribute could suddenly appear overnight at your project's doorstep. If you are not prepared, you will have to adapt quickly!

Pre-Requisite Tasks: Raising the Outreachy bar

My co-mentor and I knew that a wave of applicants was coming. However, we didn’t expect the wave to be as big as it was. After the first week of the contribution phase, we knew we needed a better way to scale ourselves. We were limited in our person-power. The approach we took to addressing the mental overload was defining pre-requisite tasks.

We defined pre-requisite tasks as tasks that any applicant MUST complete in order to be considered eligible for our internship. Without completing these tasks, we explained that final applications would not be accepted by mentors. The defining characteristics of these pre-requisite tasks were that they were personalized, repeatable, and measurable. We came up with five pre-requisite tasks that all applicants were required to complete beyond the initial qualification for Outreachy:

  1. Set up your Fedora Account System (FAS) account
  2. Set up a personal blog
  3. Write a blog post that introduces the Fedora community to your audience
  4. Promote your intro blog post on social media
  5. Write an onboarding guide for Outreachy 2025 applicants
How were initial contributions personalized?

Each of these tasks was personalized to each applicant. They each have a unique account profile, with their picture, time zone, and chat system username. The personal blog is a personal space on the Internet for each applicant to start writing new posts. The blog post prompts encouraged applicants to start filling up their blogs with Fedora content. The social media post helped applicants promote themselves as budding open source enthusiasts in their existing web spaces.

This approach had two benefits. First, it provided clear guidance to all newcomers and early-stage applicants on how to get started with contributing to Fedora for the Outreachy internship. This took a burden off of mentors answering the same questions about getting started. It also gave new applicants something to start on right away. Joseph and I were able to put more time into reviewing incoming contributions and brainstorming new tasks.

Portfolio-driven submissions for Outreachy

Toward the third week, many applicants had completed the pre-requisite tasks and were ready for more advanced tasks. Many had already taken on advanced projects beyond the pre-requisite tasks. Although the pre-requisite tasks did reduce the applicant pool, there were still between 20 and 30 people who completed them all. Again, the approach had to adapt as our ability to keep up with new contributions slowed down.

From here, we encouraged applicants to build personal portfolio pages that described their contributions with Fedora. This encouraged applicants to use the blog they built in the previous tasks, although they are not required to use their blog to host their portfolio. The only requirement we added was that it should be publicly visible on the Internet without a paywall. So, no Google Docs. Most applicants have ended up using their blog for this purpose though.

How did a portfolio help?

Building a portfolio solved multiple challenges for our Outreachy project at once. First, the portfolios will simplify how the project mentors review final applications after the deadline on April 2nd, 2024. It will be streamlined because we will have a single place we can refer to that describes the applicant’s achievements. It gives us a quick, easily shareable place to review and share with other stakeholders.

Second, it ends up being something useful to the applicant as well. The portfolio page captures a month’s worth of contributions to open source. For many applicants, this is their first time ever interacting with an open source community online. So, it is a big deal to block out a month of time to volunteer on a project in a competitive environment for a paid, remote internship opportunity. Writing a portfolio page gives applicants the confidence to represent their contributions to Fedora, regardless of whether they are selected for the Fedora internship. It becomes a milestone marker for themselves and for their professional careers.

Our philosophy: You win, we win.

This idea of applicants building something that is useful for themselves underpins the approach that Joseph and I took to structuring our non-engineering Outreachy internship. If I had to summarize the philosophy in one sentence, it might be like this:

Everyone who participates as an Outreachy applicant to Fedora should finish the contribution phase with more than they had at the start of the contribution phase.

myself

Our philosophy can be applied to engineering and non-engineering internships. However, applying the philosophy to our non-engineering project required improvisation as we went. There are examples of design-centered Outreachy internships, but I have not seen a marketing or community manager internship before. This was a challenge because there were not great models to follow. But it also left us room to innovate and try ideas that we have never tried before.

Adopting this philosophy served as helpful guidance on planning what we directed applicants to do during the contribution phase. It allowed us to think through ways that applicants could make real, recognizable contributions to Fedora. It also enables applicants to achieve a few important outcomes:

  1. Get real experience in a real project.
  2. Build their own brand as open source contributors.
  3. Gain confidence at collaborating in a community.

The contribution phase is not yet over. So, we will continue to follow this philosophy and see where it guides us into the end of this phase!

Share your Outreachy mentoring experience!

Have you experienced or seen a marketing or community manager internship in Outreachy before? Know a project or a person who has done this? Or is this totally new to you? Drop a comment below with your thoughts. Don’t forget to share with someone else if you found this advice useful.

Jordan Petridis: Thoughts on employing PGO and BOLT on the GNOME stack

Planet GNOME - Tue, 26/03/2024 - 4:42pm

Christian was looking at PGO and BOLT recently, so I figured I'd write down my notes from the discussions we had on how we'd go about making things faster on our stack, since I don't have the time or the resources to pursue those plans myself atm.

First off, let's start with the basics: PGO (profile-guided optimization) and BOLT (Binary Optimization and Layout Tool) work in similar ways. You capture one or more "profiles" of a workload that's representative of a usecase of your code, and then the tools do their magic to make the common hot paths more efficient/cache-friendly/etc. Afterwards they produce a new binary that is hopefully faster than the old one and functionally identical, so you can just replace it.
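
As a rough sketch of what that workflow looks like with the LLVM toolchain (generic clang/BOLT usage, not a GNOME-specific recipe; app.c and the --benchmark invocation are placeholders):

# 1. Build instrumented, run a representative workload, merge the profiles
clang -O2 -fprofile-generate=./profiles -o app app.c
./app --benchmark
llvm-profdata merge -o app.profdata ./profiles/*.profraw

# 2. Rebuild with the profile; keep relocations so BOLT can rewrite the binary
clang -O2 -fprofile-use=app.profdata -Wl,--emit-relocs -o app app.c

# 3. Sample the optimized binary (needs LBR-capable hardware) and let BOLT improve the layout
perf record -e cycles:u -j any,u -o perf.data -- ./app --benchmark
perf2bolt -p perf.data -o app.fdata ./app
llvm-bolt ./app -o app.bolt -data=app.fdata -reorder-blocks=ext-tsp -reorder-functions=hfsort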

Now, we already have two issues that arise here:

First of all we don’t really have any benchmarks in our stack, let alone, ones that are rounded enough to account for the majority of usecases. Additionally we need better instrumentation to capture stats like frames, frame-times, and export them both for sysprof and so we can make the benchmark runners more useful.

Once we have the benchmarks we can use them to create the profiles for optimizations and to verify that any changes have the desired effect. We will need multiple profiles of all the different hardware/software configurations.

For example, for GTK we'd ideally want to have a matrix of profiles for the different render backends (NGL/Vulkan) along with the Mesa drivers they'd use depending on the hardware (AMD/Intel), and then also different architectures, so additional profiles for the Raspberry Pi 5 and Asahi stacks. We might also want to add a profile captured under qemu+virtio while we are at it.

Maintaining the benchmarks and profiles would be a lot of work and very tailored to each project so they would all have to live in their upstream repositories.

On the other hand, the optimization itself has to be done during the Tree/userland/OS composition, and we'd have to aggregate all the profiles from all the projects to apply them. This is easily done when you are in control of the whole deployment, as we can do for the GNOME Flatpak Runtime. It's also easy to do if you are targeting an embedded deployment where most of the time you have custom images you are in full control of and know exactly the workload you will be running.

If we want distros to also apply these optimizations and for this to be done at scale, we’d have to make the whole process automatic and part of the usual compilation process so there would be no room for error during integration. The downside of this would be that we’d have a lot less opportunities for aggregating different usecases/profiles as projects would either have to own optimizations of the stack beneath them (ex: GTK being the one relinking pango) or only relink their own libraries.

To conclude, post-link-time optimization would be a great avenue to explore, as it seems to be one of the lower-hanging fruits when it comes to optimizing the whole stack. But it would also be quite the effort and require a decent amount of work to be committed to it. It would be worth it in the long run.

Andy Wingo: hacking v8 with guix, bis

Planet GNOME - Tue, 26/03/2024 - 12:51pm

Good day, hackers. Today, a pragmatic note, on hacking on V8 from a Guix system.

I’m going to skip a lot of the background because, as it turns out, I wrote about this already almost a decade ago. But following that piece, I mostly gave up on doing V8 hacking from a Guix machine—it was more important to just go with the flow of the ever-evolving upstream toolchain. In fact, I ended up installing Ubuntu LTS on my main workstations for precisely this reason, which has worked fine; I still get Guix in user-space, which is better than nothing.

Since then, though, Guix has grown to the point that it’s easier to create an environment that can run a complicated upstream source management project like V8’s. This is mainly guix shell in the --container --emulate-fhs mode. This article is a step-by-step for how to get started with V8 hacking using Guix.

get the code

You would think this would be the easy part: just git clone the V8 source. But no, the build wants a number of other Google-hosted dependencies to be vendored into the source tree. To perform the initial fetch for those dependencies and to keep them up to date, you use helpers from the depot_tools project. You also use depot_tools to submit patches to code review.

When you live in the Guix world, you might be tempted to look into what depot_tools actually does, and to replicate its functionality in a more minimal, Guix-like way. Which, sure, perhaps this is a good approach for packaging V8 or Chromium or something, but when you want to work on V8, you need to learn some humility and just go with the flow. (It’s hard for the kind of person that uses Guix. But it’s what you do.)

You can make some small adaptations, though. depot_tools is mostly written in Python, and it actually bundles its own virtualenv support for using a specific python version. This isn’t strictly needed, so we can set the funny environment variable VPYTHON_BYPASS="manually managed python not supported by chrome operations" to just use python from the environment.

Sometimes depot_tools will want to run some prebuilt binaries. Usually on Guix this is anathema—we always build from source—but there's only so much time in the day and the build system is not our circus, not our monkeys. So we get Guix to set up the environment using a container in --emulate-fhs mode; this lets us run third-party pre-built binaries. Note, these binaries are indeed free software! We can run them just fine if we trust Google, which you have to when working on V8.

no, really, get the code

Enough with the introduction. The first thing to do is to check out depot_tools.

mkdir src
cd src
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

I’m assuming you have git in your Guix environment already.

Then you need to initialize depot_tools. For that you run a python script, which needs to run other binaries – so we need to make a specific environment in which it can run. This starts with a manifest of packages, conventionally placed in a file named manifest.scm in the project's working directory, though you don't have one yet, so you can just write it into v8.scm or something anywhere:

(use-modules (guix packages)
             (gnu packages gcc))

(concatenate-manifests
 (list (specifications->manifest
        '("bash" "binutils" "clang-toolchain" "coreutils" "diffutils"
          "findutils" "git" "glib" "glibc" "glibc-locales" "grep" "less"
          "ld-gold-wrapper" "make" "nss-certs" "nss-mdns" "openssh" "patch"
          "pkg-config" "procps" "python" "python-google-api-client"
          "python-httplib2" "python-pyparsing" "python-requests"
          "python-tzdata" "sed" "tar" "wget" "which" "xz"))
       (packages->manifest
        `((,gcc "lib")))))

Then, you guix shell -m v8.scm. But you actually do more than that, because we need to set up a container so that we can expose a standard /lib, /bin, and so on:

guix shell --container --network \
  --share=$XDG_RUNTIME_DIR --share=$HOME \
  --preserve=TERM --preserve=SSH_AUTH_SOCK \
  --emulate-fhs \
  --manifest=v8.scm

Let’s go through these options one by one.

  • --container: This is what lets us run pre-built binaries, because it uses Linux namespaces to remap the composed packages to /bin, /lib, and so on.

  • --network: Depot tools are going to want to download things, so we give them net access.

  • --share: By default, the container shares the current working directory with the “host”. But we need not only the checkout for V8 but also the sibling checkout for depot tools (more on this in a minute); let’s just share the whole home directory. Also, we share the /run/user/1000 directory, which is $XDG_RUNTIME_DIR, which lets us access the SSH agent, so we can check out over SSH.

  • --preserve: By default, the container gets a pruned environment. This lets us pass some environment variables through.

  • --emulate-fhs: The crucial piece that lets us bridge the gap between Guix and the world.

  • --manifest: Here we specify the list of packages to use when composing the environment.

We can use short arguments to make this a bit less verbose:

guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
  -ETERM -ESSH_AUTH_SOCK -m manifest.scm

I would like it if all of these arguments could somehow be optional, that I could get a bare guix shell invocation to just apply them, when run in this directory. Perhaps some day.

Running guix shell like this drops you into a terminal. So let’s initialize depot tools:

cd $HOME/src
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export PATH=$HOME/src/depot_tools:$PATH
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
gclient

This should download a bunch of things, I don’t know what. But at this point we’re ready to go:

fetch v8

This checks out V8, which is about 1.3 GB, and then probably about as much again in dependencies.

build v8

You can build V8 directly:

# note caveat below!
cd v8
tools/dev/gm.py x64.release

This will build fine... and then fail to link. The precise reason is obscure to me: it would seem that by default, V8 uses a whole Debian sysroot for Some Noble Purpose, and ends up linking against it. But it compiles against system glibc, which seems to have replaced fcntl64 with a versioned symbol, or some such nonsense. It smells like V8 built against a too-new glibc and then failed trying to link to an old glibc.

To fix this, you need to go into the args.gn that was generated in out/x64.release and then add use_sysroot = false, so that it links to system glibc instead of the downloaded one.

echo 'use_sysroot = false' >> out/x64.release/args.gn
tools/dev/gm.py x64.release

You probably want to put the commands needed to set up your environment into some shell scripts. For Guix you could make guix-env:

#!/bin/sh
guix shell -CNF --share=$XDG_RUNTIME_DIR --share=$HOME \
  -ETERM -ESSH_AUTH_SOCK -m manifest.scm -- "$@"

Then inside the container you need to set the PATH and such, so we could put this into the V8 checkout as env:

#!/bin/sh
# Look for depot_tools in sibling directory.
depot_tools=`cd $(dirname $0)/../depot_tools && pwd`
export PATH=$depot_tools:$PATH
export VPYTHON_BYPASS="manually managed python not supported by chrome operations"
export SSL_CERT_DIR=/etc/ssl/certs/
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
exec "$@"

This way you can run ./guix-env ./env tools/dev/gm.py x64.release and not have to “enter” the container so much.

notes

This all works fine enough, but I do have some meta-reflections.

I would prefer it if I didn’t have to use containers, for two main reasons. One is that the resulting build artifacts have to be run in the container, because they are dynamically linked to e.g. /lib, at least for the ELF loader. It would be better if I could run them on the host (with the host debugger, for example). Using Guix to make the container is better than e.g. docker, though, because I can ensure that the same tools are available in the guest as I use on the host. But also, I don’t like adding “modes” to my terminals: are you in or out of this or that environment. Being in a container is not like being in a vanilla guix shell, and that’s annoying.

The build process uses many downloaded tools and artifacts, including clang itself. This is a feature, in that I am using the same compiler that colleagues at Google use, which is important. But it’s also annoying and it would be nice if I could choose. (Having the same clang-format though is an absolute requirement.)

There are two tests failing, in this configuration. It is somehow related to time zones. I have no idea why, but I just ignore them.

If the build system were any weirder, I would think harder about maybe using Docker or something like that. Colleagues point to distrobox as being a useful wrapper. It is annoying though, because such a docker image becomes like a little stateful thing to do sysadmin work on, and I would like to avoid that if I can.

Welp, that’s all for today. Hopefully if you are contemplating installing Guix as your operating system (rather than just in user-space), this can give you a bit more information as to what it might mean when working on third-party projects. Happy hacking and until next time!

Jan Lukas Gernert: Newsflash 3.2

Planet GNOME - Mon, 25/03/2024 - 11:34pm

Another small feature update just in time for GNOME 46.

Subscribe via CLI

Let's start with something that already went into version 3.1.4: you can subscribe to feeds via the CLI now. The idea is that this is a building block for seamlessly subscribing to websites from within a browser or something similar. Let's see how this develops further.

Scrape all new Articles of a Feed

If GitLab upvotes are a valid metric, this feature was the most requested one so far. Feed settings gained a new toggle to scrape the content of new articles. The sync will complete normally, and in a second operation Newsflash tries to download the full content of all new articles in the background.

This is especially useful when there is no permanent internet connection. Now you can let Newsflash sync & download content while on WiFi and read the complete articles later even without an internet connection.

Update Feed URL

The local RSS backend gained the ability to update the URL where the feed is located (see the screenshot above). Sadly none of the other services support this via their APIs as far as I know.

Clean Database

The preferences dialog gained the ability to drop all old articles and "vacuum" the database right away. Depending on the size of the database file this can take a few seconds, which is why it is not done in the background during normal operations yet.

(btw: I’m not sure if I should keep the button as “destructive-action”)

Internal Refactoring

Just a heads up that a lot of code managing the loading of the article list and keeping track of the displayed article and its state was refactored. If there are any regressions, please let me know.

Profiling

Christian Hergert's constant stream of profiling blog posts finally got to me. So I fired up sysprof, fully expecting not to be knowledgeable enough to draw any meaningful conclusions from the data. After all, the app is pretty snappy on my machine ™, so any improvements must be hard to find and even harder to solve. But much to my surprise, about 30 minutes later two absolutely noticeable, low-hanging-fruit performance problems were discovered and fixed.

So I encourage everyone to just try profiling your code. You may be surprised what you find.

Adwaita Dialogs & Removing Configurable Shortcuts

Of course this release makes use of the new Adwaita Dialogs. For all the dialogs but one:

Configuring custom keybindings still spawns a new modal window. Multiple overlapping dialogs aren't the greatest thing in the world. This and another annoying issue made me think about removing the feature from Newsflash completely.

The problem is that all shortcuts need to be disabled whenever the user is about to enter text. Otherwise the keybindings with a single letter cannot be entered as text.

All major feed readers (Feedly, Inoreader, etc.) have a fixed set of cohesive keyboard shortcuts. I've been thinking about either having 2-3 shortcut configurations to choose from or just hard-coding keybindings altogether.

I'd like to hear your thoughts. Do you use custom shortcuts? Would you be fine with a well-thought-out but hard-coded set of shortcuts? Would you prefer to choose from a few pre-defined shortcut configurations? Let me know, and help me find the best keybindings for all the actions that can be triggered via keyboard.

Dave Patrick Caberto: Kooha 2.3 Released!

Planet GNOME - Sun, 24/03/2024 - 5:29am

Kooha is a simple screen recorder for Linux with a minimal interface. You can simply click the record button without having to configure a bunch of settings.

While we strive to keep Kooha simple, we also want to make it better. This release, composed of over 300 commits, is focused on quality-of-life improvements and bug fixes.

This release includes a refined interface, improved area selection, more informative notifications, and other changes. Read on to learn more about the new features and improvements.

New Features and Improvements

Refined Interface

The main screen now has a more polished look. It now shows the selected format and FPS. This makes it easier to see the current settings at a glance, without having to open the settings window.

Other than that, progress is now shown when flushing the recording. This gives a better indication when encoding or saving is taking longer than expected.

Furthermore, the preferences window is also improved. It is now more descriptive and selecting FPS is now easier with a dropdown menu.

Improved Area Selection

The area selection window is now resizable. You can now resize the window to fit your screen better. Additionally, the previously selected area is now remembered across sessions. This means that if you close Kooha and open it again, the area you selected will be remembered. Other improvements include improved focus handling, sizing fixes, better performance, and a new style.

More Informative Notifications

Record-done notifications now show the duration and size of the recorded video. This is inspired by GNOME Shell screencast notifications.

Moreover, the notification actions now work even when the application is closed.

Other Changes

Besides the mentioned features, this release also includes:

  • Logout and idle are now inhibited while recording.
  • The audio no longer stutters and gets corrupted when recording for a long time.
  • The audio is now recorded in stereo instead of mono when possible.
  • The recordings are no longer deleted when flushing is canceled.
  • Incorrect output video orientation on certain compositors is now fixed.
  • Performance and stability are improved.
Getting Kooha 2.3

Kooha is available on Flathub. You can install it from there, and since all of our code is open-source and can be freely modified and distributed according to the license, you can also download and build it from source.

Closing Words

Thanks to everyone who has supported Kooha, be it through donations, bug reports, translations, or just using it. Your support is what keeps this project going. Enjoy the new release!

Tobias Bernard: Mini GUADEC 2024: We have a Venue!

Planet GNOME - Fri, 22/03/2024 - 7:51pm

We've had a lot of questions from people planning to attend this year's edition of the Berlin Mini GUADEC from outside Berlin about where it's going to happen, so they can book accommodation nearby. We have two pieces of good news on that front: First, we have secured (pending a few last organizational details) a very cool venue, and second: the venue has a hostel next to it, so there's the possibility to stay very close by for cheap :)

Come join us at Regenbogenfabrik

The event will happen at Regenbogenfabrik in Kreuzberg (Lausitzerstraße 21a). The venue is a self-organized cultural center with a fascinating history, and consists of, in addition to the event space, a hostel, bike repair and woodworking workshops, and a kindergarten (luckily for us, closed during the GUADEC days).

The courtyard at Regenbogenfabrik

Some of the perks of this venue:

  • Centrally located (a few blocks from Kottbusser Tor)
  • We can stay as late as we want (no being kicked out at 6pm!)
  • Plenty of space for hacking
  • Lots of restaurants, bars, and cafes nearby
  • Right next to the Landwehrkanal and close to Görlitzer Park
  • There’s a ping pong table!

Regenbogenfabrik on Openstreetmap

Stay at the venue

If you're coming to Berlin from outside and would like to stay close to the venue, there's no better option than staying directly at the venue: we've talked to the Regenbogenfabrik Hostel, and there are still somewhere around a dozen spots available during the GUADEC days (in rooms for 2, 3, or 8 people).

Prices range between 20 and 75 Euro per person per night, depending on the size of the room. You can book using the form here (German, but Firefox Translate works well these days :) ).

As the organizing team we don't have the capacity to get directly involved in booking the accommodations, but we're in touch with the hostel people and can help with coordination.

Note: If you’re interested in staying at the hostel act fast, because spots are limited. To be sure to get one of the open spots, please book by next Tuesday (March 26th) and mention the codeword “GNOME” so they know to put you in rooms with other GUADEC attendees.

Also, if you’re coming don’t forget to add your name to the attendee list on Hedgedoc, so we know roughly how many people are coming :)

If you have any other questions feel free to join our Matrix room.

See you in Berlin!

Sam Thursfield: Status update, 20/03/2024 – TinySPARQL and Tracker Miners

Planet GNOME - Wed, 20/03/2024 - 5:59pm

GNOME 46 just released, and with it comes TinySPARQL 3.7 (aka Tracker SPARQL) and Tracker Miners 3.7. Here’s what I’ve been involved with this month in those projects.

Google Summer of Code

It wasn’t my intention to prepare another internship before the last one was even finished. It seems that in GNOME we have fewer projects and mentors than ever – only eight ideas this year, compared to fourteen confirmed projects back in 2020. So I proposed an idea for TinySPARQL, and here we are.

The idea, in brief: I’ve been working a bit with GraphQL recently, which doesn’t live up to the hype, but does have nice query frontends such as GraphQL Playground and graphiql that let you develop and test queries in realtime. This is a screenshot of graphiql:

In TinySPARQL, we have a commandline tool, tracker3 sparql, which can run queries and print the results. This is handy for developing and testing queries independently of the app logic, but it's only useful if you're already something of a SPARQL expert.
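
For example, listing a few indexed files might look something like this (the option names and the nfo:/nie: ontology terms here are from memory, so double-check against tracker3 sparql --help before relying on them):

tracker3 sparql --dbus-service org.freedesktop.Tracker3.Miner.Files \
  --query 'SELECT ?url WHERE { ?f a nfo:FileDataObject ; nie:url ?url } LIMIT 5'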

What if TinySPARQL had a web interface similar to the GraphQL Playground?

Besides running queries and showing the output, this could have example queries, resource browsing, as-you-type error checks, integrated documentation, and more fun things listed in this issue. My hope is this would encourage more folks to play around with the data by running interesting queries, and would help to visualize what you can do with a detailed metadata index for your local content. I think a lot of people see Tracker Miner FS as a black box that does basic string matching, and not the flexible database that it actually is.

Lots of schools teach HTML and JavaScript so this project seems like a great opportunity for an intern to take ownership of and show their skills. Applications are open until 2nd April, and we’ll be running a couple of online meetups later this week (Thursday 21st and/or Friday 22nd March) to help you create a good application. Join the #tracker:gnome.org Matrix room if you’re interested.

By the way, it’s only recently been possible to separate your queries from the rest of your app’s code. I wrote about this here: Standalone SPARQL Queries. The TrackerSparqlStatement class is flexible and fun and you can read your SPARQL statements straight from a GResource file. If you used libtracker-sparql around 1.x you’ll remember a horrible thing named TrackerSparqlBuilder – the query developer experience has come a long way since then.

New security features

There are some new features this cycle thanks to hard work by Carlos. I'll let him write up the fun parts. One part that's not much fun is the increased security protection for tracker-extract. The background here is that tracker-extract uses many different media parsing libraries, and if any one of those libraries shipped by your distro contains a vulnerability, that could potentially be exploited by getting you to download a malicious file which would then be processed by tracker-extract.

We have no evidence that anyone’s ever actually done this. But there was a writeup on how it could happen recently using a vulnerability in a library named libcue which nobody is maintaining, including a clever bypass of the existing SECCOMP protection. Carlos did a writeup of this on his blog: On CVE-2023-43641.

With Tracker Miners 3.7, Carlos extended the existing SECCOMP sandbox to cover the entire extractor process rather than just the processing thread, which prevents that theoretical line of attack. And, he added an additional layer of sandboxing using a new kernel API called Landlock, which lets a process block itself from accessing any files except those it specifically needs.

From my perspective it’s rather draining to help maintain the sandboxing. When it works, nobody notices. When the sandboxing causes issues, we hear about it straight away. And there are plenty of issues! Even the build-time configuration for Landlock seems to need hours of debate.

SECCOMP works by denying access to any kernel APIs except those legitimately needed by the extractor process and the libraries it uses. Linux has 450+ syscalls and counting, and we maintain an explicit allowlist. Any change to GLibc, GIO, GStreamer or any media parsing library may then change what syscall gets used. If an unexpected syscall is called the tracker-extract process is killed with SIGSYS, which gets reported as a crash in just the same way as segfaults caused by programming errors.

It’s draining to support something that can break randomly by things that are out of our control. What else can we do though?

What’s next?

It might seem like openQA testing and desktop search are unrelated, but there is a clear connection.

Making reproducible integration tests for a search engine is a very hard problem. Back last decade I worked on the project's GitLab CI setup and "functional tests". These tests live in the tracker-miners.git source tree and run the real crawler and extractor, testing that we can create a file named hello.txt, wait for it to be indexed, and search for its contents. Quite a step forward from the unreproducible "works on my machine" testing that came before, but not representative of real use cases.

Real GNOME users do not have a single file in their home dir named hello.txt. Rather they have GBs or TBs of content to be indexed, and they have expectations about what constitutes the “best match” for a given search term.

I’m not interested in working to solve this kind of thing until we can build regression tests so that things don’t just work, but keep working in the long term. Hence, the work-in-progress gnome_search test for openQA, and the example-desktop-content repo. This is at the “working prototype” stage, and is now ready for some deeper thinking about what specific scenarios we want to test.

Some other things that may or may not happen next cycle in desktop search, depending on whether people care to help push them forwards:

  • beginning the rename: this won’t happen all at once, but we want to start calling the database TinySPARQL, and the indexer something else, still to be decided. (Ideas welcome!)
  • a ‘limiter’ to detect when a directory contains so much content that the indexer would burn significant CPU and IO resource trying to index everything up front (which requires corresponding UI changes so that there’s a way to “opt in” to indexing such locations on demand)
  • indexing the whole $HOME directory (which I personally don’t want to land without the ‘limiter’ in place, but let’s see)

One thing is certain: next month things are going to slow down for me… I'm on holiday for two full weeks over Easter, spring is coming and I plan to spend most of my time relaxing in a hammock. Hopefully we've sown a lot of seeds this month which will soon turn into flowers.

Ondřej Holý: What’s new in GVfs for GNOME 46?

Planet GNOME - Wed, 20/03/2024 - 8:05am

It has been 3 years since my last post with release news for GVfs. This is mainly because previous releases were more or less just bug fixes. In contrast, GVfs 1.54 comes with two new backends. Let’s take a look at them.

OneDrive

One of the backends adds OneDrive support thanks to Jan-Michael Brummer. This requires setting up a Microsoft 365 account through the Online Accounts panel in the Settings application. Then the OneDrive share can be accessed from the sidebar of the Files application.

However, creating the account is a bit tricky now. You need to register on the Microsoft Entra portal to get a client ID. The specific steps can be found in the gnome-online-accounts#308 issue. Efforts are underway to register a client ID for GNOME, so this step will soon be unnecessary.

WS-Discovery

The other backend brings WS-Discovery support. It automatically discovers the shared SMB folders of the Windows devices available on your network. You can find them in the Other Locations view of the Files application. This has not worked since the NT1 protocol was deprecated. For more information on this topic, see my previous post.

You won't find the Windows Network folder in the Other Locations view; all the discovered shares are now listed directly in the Networks section.
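
If you prefer the command line, the same GVfs machinery that backs the Files view can mount a discovered share directly; the host and share names below are placeholders:

gio mount smb://desktop-pc.local/shared
gio list smb://desktop-pc.local/shared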

Finally, I would like to thank all the GVfs contributors. Let me know in the comments if you like the new backends. I hope the next releases will also bring some great news.

Arun Raghavan: Asymptotic: A 2023 Review

Planet GNOME - Tue, 19/03/2024 - 3:54pm

It's been a busy several months, but now that we have some breathing room, I wanted to take stock of what we have done over the last year or so.

This is a good thing for most people and companies to do of course, but being a scrappy, (questionably) young organisation, it’s doubly important for us to introspect. This allows us to both recognise our achievements and ensure that we are accomplishing what we have set out to do.

One thing that is clear to me is that we have been lagging in writing about some of the interesting things that we have had the opportunity to work on, so you can expect to see some more posts expanding on what you find below, as well as some of the newer work that we have begun.

(note: I write about our open source contributions below, but needless to say, none of it is possible without the collaboration, input, and reviews of members of the community)

WHIP/WHEP client and server for GStreamer

If you’re in the WebRTC world, you likely have not missed the excitement around standardisation of HTTP-based signalling protocols, culminating in the WHIP and WHEP specifications.

Tarun has been driving our client and server implementations for both these protocols, and in the process has been refactoring some of the webrtcsink and webrtcsrc code to make it easier to add more signaller implementations. You can find out more about this work in his talk at GstConf 2023 and we’ll be writing more about the ongoing effort here as well.

Low-latency embedded audio with PipeWire

Some of our work involves implementing a framework for very low-latency audio processing on an embedded device. PipeWire is a good fit for this sort of application, but we have had to implement a couple of features to make it work.

It turns out that doing timer-based scheduling can be more CPU intensive than ALSA period interrupts at low latencies, so we implemented an IRQ-based scheduling mode for PipeWire. This is now used by default when a pro-audio profile is selected for an ALSA device.
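
To try this out, the card has to be switched to the pro-audio profile; one way is through the PulseAudio-compatible CLI (the card name below is a placeholder, use whatever pactl reports on your system):

pactl list short cards                                  # find your card's name
pactl set-card-profile alsa_card.usb-Example_Device-00 pro-audio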

In addition to this, we also implemented rate adaptation for USB gadget devices using the USB Audio Class “feedback control” mechanism. This allows USB gadget devices to adapt their playback/capture rates to the graph’s rate without having to perform resampling on the device, saving valuable CPU and latency.

There is likely still some room to optimise things, so expect to hear more on this front soon.

Compress offload in PipeWire

Sanchayan has written about the work we did to add support in PipeWire for offloading compressed audio. This is something we explored in PulseAudio (there’s even an implementation out there), but it’s a testament to the PipeWire design that we were able to get this done without any protocol changes.

This should be useful in various embedded devices that have both the hardware and firmware to make use of this power-saving feature.

GStreamer LC3 encoder and decoder

Tarun wrote a GStreamer plugin implementing the LC3 codec using the liblc3 library. This is the primary codec for next-generation wireless audio devices implementing the Bluetooth LE Audio specification. The plugin is upstream and can be used to encode and decode LC3 data already, but will likely be more useful when the existing Bluetooth plugins to talk to Bluetooth devices get LE audio support.

QUIC plugins for GStreamer

Sanchayan implemented a QUIC source and sink plugin in Rust, allowing us to start experimenting with the next generation of network transports. For the curious, the plugins sit on top of the Quinn implementation of the QUIC protocol.

There is a merge request open that should land soon, and we’re already seeing folks using these plugins.

AWS S3 plugins

We’ve been fleshing out the AWS S3 plugins over the years, and we’ve added a new awss3putobjectsink. This provides a better way to push small or sparse data to S3 (subtitles, for example), without potentially losing data in case of a pipeline crash.

We'll also be expecting this to look a little more like multifilesink, allowing us to arbitrarily split up data and write to S3 directly as multiple objects.

Update to webrtc-audio-processing

We also updated the webrtc-audio-processing library, based on more recent upstream libwebrtc. This is one of those things that becomes surprisingly hard as you get into it — packaging an API-unstable library correctly, while supporting a plethora of operating system and architecture combinations.

Clients

We can’t always speak publicly of the work we are doing with our clients, but there have been a few interesting developments we can (and have spoken about).

Both Sanchayan and I spoke a bit about our work with WebRTC-as-a-service provider, Daily. My talk at the GStreamer Conference was a summary of the work I wrote about previously about what we learned while building Daily’s live streaming, recording, and other backend services. There were other clients we worked with during the year with similar experiences.

Sanchayan spoke about the interesting approach to building SIP support that we took for Daily. This was a pretty fun project, allowing us to build a modern server-side SIP client with GStreamer and SIP.js.

An ongoing project we are working on is building AES67 support using GStreamer for FreeSWITCH, which essentially allows bridging low-latency network audio equipment with existing SIP and related infrastructure.

As you might have noticed from previous sections, we are also working on a low-latency audio appliance using PipeWire.

Retrospective

All in all, we’ve had a reasonably productive 2023. There are things I know we can do better in our upstream efforts to help move merge requests and issues, and I hope to address this in 2024.

We have ideas for larger projects that we would like to take on. Some of these we might be able to find clients who would be willing to pay for. For the ideas that we think are useful but may not find any funding, we will continue to spend our spare time to push forward.

If you made it this far, thank you, and look out for more updates!

Christian Schaller: PipeWire camera handling is now happening!

Planet GNOME - Fri, 15/03/2024 - 5:30pm

We hit a major milestone this week, with the long-worked-on adoption of PipeWire camera support finally starting to land!

Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich.

Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged, based on earlier work by himself, Wim Taymans and Columbarius. This means we now have two major applications out there that can use PipeWire for camera handling, and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph.
These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking at how to add PipeWire camera support to their own applications; there is no better documentation than working code.

The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks which is an important goal in itself. Which reminds me, to test out the new PipeWire support be sure to grab the official OBS Studio Flatpak from Flathub.
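
If you want to follow that suggestion from the command line, the Flathub build can be installed with:

flatpak install flathub com.obsproject.Studio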

PipeWire camera handling with OBS Studio, Firefox and Helvum.


Let me explain what is going on in the screenshot above, as it is a lot. First of all, you see Helvum there on the right showing all the connections made through PipeWire, both the audio and, in yellow, the video. So you can see how my Logitech BRIO camera is feeding a camera video stream into both OBS Studio and Firefox. You also see my Magewell HDMI capture card feeding a video stream into OBS Studio, and finally gnome-shell providing a screen capture feed that is being fed into OBS Studio. On the left you see at the top Firefox running their WebRTC test app capturing my video, then just below that you see the OBS Studio image with the direct camera feed in the top left corner, the screencast of Firefox just below it, and finally the 'no signal' image from my HDMI capture card, since I had no HDMI device connected to it as I was testing this.

For those wondering, work is also underway to bring this into the Chromium and Google Chrome browsers, where Michael Olbrich from Pengutronix has been pushing to get patches written and merged. He did a talk about this work at FOSDEM last year, as you can see from these slides, with this patch being the last step to get this working there too.

The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops and helps push work on supporting those cameras towards libcamera, the new library for dealing with the new generation of complex cameras. This of course ties well into the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O'Donoghue from Linaro, on providing an open source driver for MIPI cameras, and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself.

The PipeWire support is of course fresh, and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio, and there are some interface annoyances we are working to resolve. For instance, since PipeWire supports both V4L and libcamera as backends, you currently get double entries in your selection dialogs for most of your cameras. Wireplumber has implemented de-duplication code which will ensure only the libcamera listing shows for cameras supported by both V4L and libcamera, but it is only part of the development version of Wireplumber and thus will land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options.

Camera selection dialog


We are also trying to figure out how to better deal with the infrared cameras that are part of many modern webcams. Obviously you usually do not want to use an IR camera for your video calls, so we need to figure out the best way to identify them and ensure they are clearly marked and not used by default.

Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues, we decided it was time to bite the bullet and buy some second-hand FireWire hardware for Wim to be able to test and verify things himself.

Focusrite firewire device

Once the Focusrite device I bought landed at Wim's house, he got to work and cleaned up the FFADO support, making it both work and be performant.
For those unaware, FFADO is a way to use FireWire devices without going through ALSA, and it is popular among pro-audio folks because it gives lower latencies. FireWire is of course a relatively old technology at this point, but the audio equipment is still great and many audio engineers have a lot of these devices, so with this fixed you can plop a FireWire PCI card into your PC and suddenly all those old FireWire devices get a new lease on life on your Linux system. And you can buy these devices on places like eBay or Facebook Marketplace for a fraction of their original cost. In some sense this demonstrates the same strength of PipeWire as the libcamera support: in the libcamera case it allows Linux applications to smoothly transition to a new generation of hardware, and in the FireWire case it allows them to keep using older hardware with new applications.

So all in all it's been a great few weeks for PipeWire and for Linux audio AND video, and if you are an application maintainer, be sure to look at how you can add PipeWire camera support to your application, and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.

Alice Mikhaylenko: Libadwaita 1.5

Planet GNOME - Pre, 15/03/2024 - 4:58md

Well, another cycle has passed.

This one was fairly slow, but nevertheless has a major new feature.

Adaptive Dialogs

The biggest feature this time is the new dialog widgetry.

Traditionally, dialogs have been separate windows. While this approach generally works, we never figured out how to reasonably support that on mobile. There was a downstream patch for auto-maximizing dialogs, which in turn required them to be resizable, which is not great on desktop, and the patch was hacky and never really supported upstream.

Another problem is close buttons – we need to keep them in dialogs, since otherwise you'd have to go to the overview to close every dialog, and that's why mobile gnome-shell doesn't hide close buttons at all at the moment. Ideally we want to keep them in dialogs, but be able to remove them everywhere else.

While it would be possible to have shell present dialogs differently, another approach is to move them to the client instead. That’s not a new approach, here are some existing examples:

This has both upsides and downsides. One upside is that the toolkit/app has much more control over them. For example, it’s very easy to ensure their size doesn’t exceed the parent window. While this is possible with windows (AdwMessageDialog does this), it’s hacky and can still break fairly easily with e.g. maximize – in fact, I’m not confident it works across compositors and in both Wayland and X11.

Having dialogs not exceed the parent’s size means not needing to limit their size quite so aggressively – previously it was needed so that the dialog doesn’t get ridiculously large on top of a small window.

The dimming behind the dialog can also vary between light and dark styles – shell cannot do that because it doesn’t know if this particular window is light or dark, only what the whole system prefers.

In the future this should also allow supporting per-tab dialogs. For apps like web browsers, a background tab spawning a dialog that takes over the whole window is not great.

Meanwhile the main downside is the same thing as was listed in upsides: these dialogs cannot exceed the parent window’s size. Sometimes it’s still needed, e.g. if the parent window is really small.

Bottom Sheets

So, how does that help on mobile? Well, aside from implementing the existing AdwMessageDialog size constraints more cleanly, it allows presenting these dialogs as bottom sheets on mobile, instead of centered floating sheets.

An earlier design presented dialogs as pages with back buttons, but that had many other problems, especially in small windows on desktop. For example, what happens if you close the window? A dialog and a “regular” subpage would look identical, so you'd probably expect the close button to close the entire window – but what if it's floating above a larger window?

Bottom sheets avoid this issue – you still see the parent window with its own close button, so it’s obvious that they are closed separately – while still being allowed to take full width like a subpage.

They can also be swiped down, though because of GTK limitations this does not work together with scrolling content. It’s still possible to swipe down from header bar or the empty space above the sheet.

And the fact they are attached to the bottom edge makes them easier to reach on huge phones.

Meanwhile, AdwHeaderBar always shows a close button within dialogs, regardless of the system layout. The only hint it takes from the system is whether to display the close button on the right or left side.

API

For the most part they are used similarly to GtkWindow. The main differences are with presenting and closing dialogs.

The :transient-for property has been replaced with a parameter in adw_dialog_present(). It also doesn't necessarily take a window anymore, but can accept any widget within that window as well. Currently it just fetches the root widget, but once we have per-tab dialogs, that can be controlled with a simple flag instead of needing a new variant of adw_dialog_present() that would take a tab page instead of a window.
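To make that concrete, here is a minimal sketch of the new presentation flow (my_settings_page_new() is a hypothetical child widget used purely for illustration):

#include <adwaita.h>

static void
on_settings_clicked (GtkButton *button, gpointer user_data)
{
  /* An AdwDialog is a regular widget now, not a GtkWindow. */
  AdwDialog *dialog = adw_dialog_new ();
  adw_dialog_set_title (dialog, "Settings");
  adw_dialog_set_child (dialog, my_settings_page_new ()); /* hypothetical child widget */

  /* Present relative to any widget inside the parent window; the dialog
   * looks up that widget's root and attaches itself to it. */
  adw_dialog_present (dialog, GTK_WIDGET (button));
}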

The ::close-request signal has been replaced as well. Because the dialogs can be swiped down on mobile, we need to know if they can be closed before the gesture starts. So, instead there’s a :can-close property that apps set ahead of time if there’s unsaved data or some other reason to prevent closing.

For close confirmation, there’s a ::close-attempt signal, which will be fired when trying to close a dialog using a close button or a shortcut while :can-close is set to FALSE (or calling adw_dialog_close()). For actual closing, there’s ::closed instead.

Finally, adw_dialog_force_close() closes the dialog while ignoring :can-close. It can be used to close the dialog after confirmation without needing to fiddle with :can-close or repeat ::close-attempt emissions.
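Put together, the close flow could look roughly like this sketch (has_unsaved_changes() and confirm_discard() stand in for application logic and are purely hypothetical):

#include <adwaita.h>

static void
on_close_attempt (AdwDialog *dialog, gpointer user_data)
{
  /* Emitted when the user tries to close the dialog (close button, shortcut,
   * or adw_dialog_close()) while :can-close is FALSE. */
  if (!has_unsaved_changes (user_data))   /* hypothetical application check */
    {
      adw_dialog_force_close (dialog);    /* close regardless of :can-close */
      return;
    }

  /* Otherwise show your own confirmation UI here and call
   * adw_dialog_force_close() once the user confirms discarding. */
  confirm_discard (dialog, user_data);    /* hypothetical confirmation helper */
}

static void
setup_dialog (AdwDialog *dialog, gpointer editor)
{
  /* Set ahead of time so the mobile swipe-down gesture also knows
   * the dialog cannot simply be dismissed. */
  adw_dialog_set_can_close (dialog, FALSE);
  g_signal_connect (dialog, "close-attempt", G_CALLBACK (on_close_attempt), editor);
}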

If this works well, AdwWindow may have something similar in future.

The rest is fairly straightforward and is modelled after GtkWindow. See AdwDialog docs and migration guide for more details.

Since AdwPreferencesWindow and other widgets can't be ported to the new dialogs without a significant API break, they have been replaced: AdwPreferencesWindow becomes AdwPreferencesDialog, AdwAboutWindow becomes AdwAboutDialog, and AdwMessageDialog becomes AdwAlertDialog.

For the most part they are identical, with a few differences:

  • AdwPreferencesDialog has search disabled by default, and gets rid of deprecated subpage API
  • AdwAlertDialog can scroll contents, so apps that add their own scrolled windows may want to remove them

Since the new widgets landed right at the end of the cycle, the old widgets are not deprecated yet. However, they will be deprecated next cycle, so it’s recommended to migrate your apps anyway.

Standalone bottom sheets (like in audio players) are not available yet either, but will be in future.

Esc to Close

Traditionally, dialogs have been done via GtkDialog which handled this automatically. But for the last few years, apps have been steadily moving away from GtkDialog and by now it’s deprecated. While that’s not really a problem on its own, one thing that GtkDialog was doing automatically and custom dialogs don’t is closing when pressing Esc. While it’s pretty easy to add that manually, a lot of apps forget to do so.

But since we have dedicated dialog API again, Esc to close is once again automatic.

What about standalone dialogs?

Some dialogs don’t have a parent window. Those are still presented as a window. Note that it still doesn’t work well on mobile: while there will be a close button, the sizing will work just as badly as before, so it’s recommended to avoid them.

Dialogs will also be presented as a window if you try to add them to a parent that can't host dialogs (anything that's not an AdwWindow or AdwApplicationWindow), or if the parent is not resizable. The reason for the last one is to accommodate apps like Emblem, which has a small non-resizable window where dialogs wouldn't fully fit, and since it's non-resizable, it doesn't work on mobile anyway.

What about “Attach Modal Dialogs”

Since we have the window-backed mode, it would be fairly easy to support that preference… except there’s no way to read it from sandboxed apps.

What about portals?

This approach obviously doesn’t work for portals, since they are in a separate process. We do have a plan for them, involving a private protocol in mutter, but it didn’t make it for 46. So, next time.

What about GTK built-in dialogs?

Those will be replaced as well, but it takes time. For now yes, GtkShortcutsWindow etc won’t match other dialogs.

Other Changes

As usual, there are some smaller changes.

As always, thanks to all the contributors who helped to make this release happen.

Marcus Lundblad: Maps and GNOME 46

Planet GNOME - Pre, 15/03/2024 - 1:35md

It's that time again, a new GNOME release is just around the corner.

The news in Maps for GNOME 46

A lot of the new things we've been working on for the 46 release have already been covered, but here are a few recaps.

The new map style

The map style used for the vector-based, client-side rendered map (which is still considered experimental in 46) has been switched over to our new “GNOME-themed” style, which also supports a dark mode (enabled when the global dark mode is enabled).

The vector map still needs to be explicitly enabled via the “layers menu” (the second headerbar button from the left). This also requires the backing installation of libshumate to be built with vector renderer support (which is the case when using the Flatpak from Flathub; also, libshumate defaults to building the vector renderer as of the 1.2.0 release, so distributions should likely have it enabled in their 46 installations).

The current plan is that we're leaning towards flipping it on by default after the 46 release, so by 47 the old raster tiles from openstreetmap.org will probably be retired.

Also icons on the map (such as POIs) are now directly clickable. And labels should be localized to the user's language (when the appropriate language tags are available in the OpenStreetMap data).


Other visual improvements

For 46 the zoom control buttons have been revamped (again) and put in the lower corner (as also shown in the above screenshots):

The pin used to mark places selected from search results, and other things like pin-pointed locations in GeoJSON files, has gained a new modernized design by Jakub Steiner.


The dialog for adding an OpenStreetMap account to edit POIs gained a refresh sporting the new libadwaita dialog and widgets by Felipe Kinoshita.

Also, information about which floor a place is located on is now shown in the place bubbles when available. This can be useful for finding your way around, for example, in big shopping malls and the like (this was an idea that came up when looking for a café in a galleria in Riga last summer…).


The favorites menu has also gotten a revamp. Instead of just showing a greyed-out inactive button when there are no favorite places, it now has an “empty state” hinting at the ability to “star” places.


And favorites can be removed directly from the list without having to open them (and animate to that place to show the bubble).


Looking further on

For the next cycle, aside from continuing the refinements to the new map style and making the vector map the main thing, another cool project that was initiated during FOSDEM in February has caught my attention: Transitous.

Transitous

Transitous aims to set up a free and open public transit routing service: https://github.com/public-transport/transitous

It is using the MOTIS project (https://github.com/motis-project/motis) as the backend, with a crowd-sourcing approach to collecting data feeds for timetable data.
The routing can already be tested out at https://transitous.org. Currently it only handles “station to station” routing, so there is not yet support for walking instructions.
Also, unlike the current public transit plugin support we have in Maps, with Transitous you would also be able to do cross-border planning (utilizing timetables from different data feeds).
When it becomes a bit more mature we should make use of it in Maps ☺.
So this is another area where you can help out, by creating PRs to add transit schedule feeds for your local area that could potentially benefit both Maps and other FOSS projects (such as KDE Itinerary).
Problems ahead

And now to something of a problem.
The location service backend that we are using, GeoClue (used not just by Maps, but also by other parts of the system like Weather and automatic timezone handling), has been relying on Mozilla's location service API (MLS). This will unfortunately be retired (https://github.com/mozilla/ichnaea/issues/2065), so there will be a need to come up with alternative solutions (https://gitlab.freedesktop.org/geoclue/geoclue/-/issues/186). In the worst case, we would maybe have to disable showing the current location in Maps unless the device has an actual GPS unit.

Matthew Garrett: Digital forgeries are hard

Planet GNOME - Enj, 14/03/2024 - 10:11pd
Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")


Martín Abente Lahaye: Gameeky 0.6.0

Planet GNOME - Mër, 13/03/2024 - 5:54md

After a busy month, a new release is out! This new release comes with improved compatibility with other platforms, several usability additions and improvements.

It's no longer necessary to run terminal commands. The most noticeable change in this release is the addition of a properly-integrated development environment for Python. With this, the LOGO-like user experience was greatly improved.

The LOGO-like programming interface is also a bit richer. A new Rotate action was added and the general interface was simplified to further improve the user experience.

It’s easier to share projects. A simple dialog to export and import projects was added, available through the redesigned project cards in the launcher.

As shown above, Gameeky now has a cute desktop icon thanks to @jimmac and @bertob.

It should be easier to run Gameeky on other platforms now. Under the hood, many things have changed to support other platforms, e.g. macOS. The sound backend was changed to GStreamer, the communication protocol was simplified, and the use of WebKit is now optional.

There are no installers for other platforms yet but, if anyone is experienced and interested in making these, that would be an awesome contribution.

As a small addition, it's now possible to select a different entity as the user's character. Recently, my nephews decided they wanted their character to be a small boulder. They had a blast with their boulder-hero narrative, and it convinced me there should be more additions like that.

There’s more, so check the full list of changes.

On the community side of things, I have already started building alliances with different organizations; e.g., the first-ever Gameeky workshop is planned for March 23 in Encarnación, Paraguay, and is being organized by the local Python community.

If you’re in Paraguay or nearby in Argentina, feel free to contact me to participate!

Dorothy Kabarozi: Overall experience: My Outreachy internship with GNOME

Planet GNOME - Mar, 12/03/2024 - 5:16md

Embarking on an Outreachy internship is a great start into the heart of open source, a journey I've longed to undertake. December 2023 to March 2024 marked this exhilarating chapter of my life, where I had the honor of diving deep into the GNOME world as an Outreachy intern. In this blog, I'm happy to share my experiences, painting a vivid picture of the growth, challenges, and invaluable experiences that have shaped my journey.

Discovering GNOME: A Gateway to Open-Source Excellence

At its core, GNOME (GNU Network Object Model Environment) is a graphical user interface (GUI) and set of computer desktop applications for users of the Linux operating system. GNOME brings companies, volunteers, professionals, and non-profits together from around the world.

We make GNOME, a completely free software solution for everyone.

Why GNOME Captured My Heart

The Outreachy internship presented a couple of projects to choose from, but my fascination with operating system functionalities—booting, scheduling, memory management, user interfaces and beyond—drew me irresistibly to GNOME. My mission? To work on the implementation of end-to-end tests, a challenge I embraced head-on as I dived into the project documentation to understand the project better.

From the moment I introduced myself on the GNOME community channel in the first days of contribution phase, the warmth and promptness of their welcome were unmatched, shattering the myth of the “busy, distant mentor.” This immediate sense of belonging fueled my determination, despite the initial difficulties of setup procedures and technical trials.

My advice to future Outreachy aspirants

From my experience, my advice is to start early: zero in on a project and try to set up your environment early, as this took me almost two weeks before I finally made a merge request to the project.

Secondly, ask questions publicly, as this helps you get unblocked faster when your mentor is busy.

Milestones and Mastery: The GNOME Journey

Our collective goal for the internship was to implement tests for accessibility features of the GNOME desktop and also test some core apps on mobile. The creation of the gnome_accessibility test suite marked our first victory, followed by the genesis of the gnome-locales and gnome_mobile test suites. Daily stand-ups and weekly mentor meetings became our compass, guiding our efforts and honing our focus on the different tasks. Check out more details here and share any feedback with us on Discourse.

Technically, I learned a lot about version control and Git workflows, how to contribute to a project with a large code base, how to write clean, readable and efficient code, and how to ensure code is thoroughly tested for bugs and errors before pushing it. Some of the soft skills I learned were collaboration, communication, the continuous desire to learn new things, and being teachable.

Overcoming Obstacles: Hardware Hurdles and Beyond

The revelation that my iOS-based machine was ill-equipped for the task at hand was a stark challenge. The lesson was clear: understanding project specifications is crucial, and adaptability is key. This obstacle, while daunting, taught me the value of preparation and the importance of choosing the right tools for the task.

Beyond Coding: Community, Engagement, and Impact

I have not only interacted with my mentors on the project but also shared the work we have done on TWIG, where I highlighted the work we had done writing tests for accessibility features, i.e. the High Contrast, Large Text, Overlay Scrollbars, Screen Reader, Zoom, Overamplification, Visual Alerts and On-Screen Keyboard features, and added more details on the Discourse channel too.

I have had public engagements on contributing to Outreachy over Twitter Spaces in my community, where I shared how to apply to Outreachy and how to prepare for the contribution phase, and I spoke more about my internship with GNOME during the GNOME Africa Preparatory Boot Camp for GSoC & Outreachy; check out my presentation here, where I shared how to stand out as an Outreachy applicant and my experience working with GNOME. These experiences have not only boosted my technical skills but have also embedded in me a sense of community and the courage to tackle the unknown.

A Heartfelt Thank You

As this chapter of my journey with GNOME and Outreachy draws to a close, I am overwhelmed with gratitude. To my selfless mentors, Sam Thursfield and Sonny Piers: thank you for the guidance and mentorship; I appreciate what you have planted in us. To Tanjuate: you have been the most amazing co-intern I could ever ask for. To Kristi Progri and Felipe Borges: thank you for coordinating this internship with Outreachy and the GNOME community.

To Outreachy, thank you for this opportunity. And to every soul who has walked this path with me: your support has been amazing. As I look forward to converging paths at GUADEC in July and beyond, I carry with me not just skills and knowledge, but a heart full of memories, ready to embark on new adventures in the open-source world.

Here’s to infinite learning, enduring friendships, and the unwavering spirit of contribution. May the journey continue to unfold, with success, learning, and boundless possibilities.

Here are some of the accessibility tests for the gnome_accessibility test suite that we added during the internship with GNOME.

Click here to take a more detailed look.

Peter Hutterer: Enforcing a touchscreen mapping in GNOME

Planet GNOME - Mar, 12/03/2024 - 5:33pd

Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face:

Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.

Luckily mutter does have a configuration for it, though it is not exposed in the GNOME Settings (yet). But you, my $age $jedirank, can access this via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details and you need to know about gsettings relocatable schemas.

Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:

<monitors version="2">
  <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1</scale>
      <monitor>
        <monitorspec>
          <connector>DP-2</connector>
          <vendor>DEL</vendor>              <--- this one
          <product>DELL S2722QC</product>   <--- this one
          <serial>59PKLD3</serial>          <--- and this one
        </monitorspec>
        <mode>
          <width>3840</width>
          <height>2160</height>
          <rate>59.997</rate>
        </mode>
      </monitor>
    </logicalmonitor>
    <logicalmonitor>
      <x>928</x>
      <y>2160</y>
      <scale>1</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>eDP-1</connector>
          <vendor>IVO</vendor>
          <product>0x057d</product>
          <serial>0x00000000</serial>
        </monitorspec>
        <mode>
          <width>1920</width>
          <height>1080</height>
          <rate>60.010</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
</monitors>

Well, so we know the monitor details we want. Note there are two monitors listed here, in this case I want to map the touchscreen to the external Dell monitor. Let's move on to gsettings.

gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:

$ gsettings list-recursively
... lots of output ...
$ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
$ gsettings range org.gnome.desktop.peripherals.touchpad click-method
enum
'default'
'none'
'areas'
'fingers'

Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema" and that requires also specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:

/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/

In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:

$ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
org.gnome.desktop.peripherals.touchscreen output ['', '', '']

Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:

$ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"

Note the quotes so your shell doesn't misinterpret things.

And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.

[1] Probably the one that most commonly takes effect since it's the vast vast majority of devices

Ismael Olea: Modelling protected areas talks

Planet GNOME - Hën, 11/03/2024 - 12:00pd

Just catching up on a talk I gave twice this past year. I'm very proud of the work, but I never shared it here nor on my Wikidata User:Olea page.

As a brief introduction: for some time I did significant work importing into Wikidata the CDDA database of European protected areas, as I found we had them completely under-represented. I have previous experience with historical heritage, but this turned out to be harder work. I had collected some thoughts about lessons learned and a potential standardizing proposal for natural protected areas, but never structured them in a comprehensive way until being invited to give a couple of talks about this.

The first talk was in Lisboa, invited and sponsored by our friends at Wikimedia Portugal, at their Wikidata Days 2023. To be honest, my talk was a bit of a disaster because I didn't prepare it with enough time, but at least I could present a complete draft of the idea.

Then I had the opportunity to talk about the same issue again at the Data Modelling Days 2023 virtual event.

My participation was a session shared with VIGNERON: he talked about historical heritage and I talked about natural heritage/natural protected areas. For this session I was able to rewrite my proposal with the quality a communication requires. The video recording of the full session is now available:

And my slides, that would be the final text of my intended proposal:

As a conclusion: yes, I should promote this in Wikidata, but the amount of work it requires (edits and discussions) is, for the moment, more than I am willing to spend my free time on.

Emmanuele Bassi: Accessibility improvements in GTK 4.14

Planet GNOME - Pre, 08/03/2024 - 10:30pd

GTK 4.14 brings various improvements on the accessibility front, especially for applications showing complex, formatted text; for WebKitGTK; and for notifications.

Accessible text interface

The accessibility rewrite for 4.0 provided an implementation for complex, selectable, and formatted text in widgets provided by GTK, like GtkTextView, but out of tree widgets would not be able to do the same, as the API was kept private while we discussed what ATs (assistive technologies) actually needed, and while we were looking at non-Linux implementations. For GTK 4.14 we finally have a public interface that out of tree widgets can implement to provide complex, formatted text to ATs: GtkAccessibleText.

GtkAccessibleText allows widgets to provide the text contents at given offsets; the text attributes applied to the contents; and to notify assistive technologies of changes in the text, caret position, or selection boundaries.

Text widgets implementing GtkAccessibleText should notify ATs in these cases:

  • every time the text contents change, for instance after an insertion or a deletion
  • every time the caret position moves
  • every time the selection boundaries change
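A rough sketch of what that looks like from a custom widget's editing path follows; it assumes the GTK 4.14 update entry points are named gtk_accessible_text_update_contents() and gtk_accessible_text_update_caret_position(), and MyTextWidget is a hypothetical widget implementing the interface (check the GtkAccessibleText documentation for the exact signatures):

#include <gtk/gtk.h>

/* MyTextWidget is a hypothetical custom widget that implements GtkAccessibleText. */
static void
my_text_widget_insert (MyTextWidget *self, unsigned int position, const char *text)
{
  unsigned int length = g_utf8_strlen (text, -1);

  /* ... insert the text into the widget's internal buffer here ... */

  /* Tell ATs which range of the contents changed, then that the caret moved. */
  gtk_accessible_text_update_contents (GTK_ACCESSIBLE_TEXT (self),
                                       GTK_ACCESSIBLE_TEXT_CONTENT_CHANGE_INSERT,
                                       position, position + length);
  gtk_accessible_text_update_caret_position (GTK_ACCESSIBLE_TEXT (self));
}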

Text attributes are mainly left to applications to implement—both in naming and serialization; GTK provides support for common text attributes already in use by various toolkits and assistive technologies, and they are available as constants under the GTK_ACCESSIBLE_ATTRIBUTE_* prefix in the API reference.

The GtkAccessibleText interface is a requirement for implementing the accessibility of virtual terminals; the most common GTK-based library for virtual terminals, VTE, has been ported to GTK4 thanks to the efforts of Christian Hergert and in GNOME 46 will support accessibility through the new GTK interface.

Bridging AT-SPI trees

There are cases when a library or an application implements its own accessible tree using AT-SPI, whether in the same process or out of process. One such library is WebKitGTK, which generates the accessible object tree from the web tree inside separate processes. These processes do not use GTK, so they cannot use the GtkAccessible API to describe their contents.

Thanks to the work of Georges Stavracas GTK now can bridge those accessibility object trees under the GTK widget’s own, allowing ATs to navigate into a web page using WebKit from the UI.

Currently, like the rest of the accessibility API in GTK, this is specific to the AT-SPI protocol on Linux, which means it requires libraries and applications that wish to take advantage of it to ensure that the API is available at compile time, through the use of a pkg-config file and a separate C header, similarly to how the printing API is exposed.

Notifications

Applications using in-app notifications that are decoupled from the current widget's focus, like AdwToast in libadwaita, can now raise the notification message to ATs via the gtk_accessible_announce() method, thanks to Lukáš Tyrychtr, in a way that is respectful of the current ATs' output.
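A minimal sketch of how an app might pair a toast with such an announcement (assuming GTK 4.14's gtk_accessible_announce() and the medium announcement priority; the toast itself is only illustrative):

#include <adwaita.h>

static void
notify_saved (GtkWidget *window, AdwToastOverlay *overlay)
{
  /* Show the in-app notification... */
  adw_toast_overlay_add_toast (overlay, adw_toast_new ("Document saved"));

  /* ...and raise the same message to assistive technologies. */
  gtk_accessible_announce (GTK_ACCESSIBLE (window),
                           "Document saved",
                           GTK_ACCESSIBLE_ANNOUNCEMENT_PRIORITY_MEDIUM);
}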

Other improvements

GTK 4.12 ensured that the computed accessible labels and descriptions were up to date with the ARIA specification; GTK 4.14 iterates on those improvements, by removing special cases and duplicates.

Thanks to the work of Michael Weghorn from The Document Foundation, there are new roles for text-related accessible objects, like paragraphs and comments, as well as various fixes in the AT-SPI implementation of the accessibility API.

The accessibility support in GTK4 is incrementally improving with every cycle, thanks to the contributions of many people; ideally, these improvements should also lead to a better, more efficient protocol for toolkits and assistive technologies to share.

We are still exploring the possibility of adding backends for other accessibility platforms, like UIAutomation; and for other libraries, like AccessKit.

Aaron Rainbolt: Making hyper-minimal Ubuntu virtual machines with debootstrap

Planet Ubuntu - Enj, 30/11/2023 - 11:34md

Every so often I have to make a new virtual machine for some specific use case. Perhaps I need a newer version of Ubuntu than the one I’m running on my hardware in order to build some software, and containerization just isn’t working. Or maybe I need to test an app that I made modifications to in a fresh environment. In these instances, it can be quite helpful to be able to spin up these virtual machines quickly, and only install the bare minimum software you need for your use case.

One common strategy when making a minimal or specially customized install is to use a server distro (like Ubuntu Server for instance) as the base and then install other things on top of it. This sorta works, but it’s less than ideal for a couple reasons:

  • Server distros are not the same as minimal distros. They may provide or offer software and configurations that are intended for a server use case. For instance, the ubuntu-server metapackage in Ubuntu depends on software intended for RAID array configuration and logical volume management, and it recommends software that enables LXD virtual machine related features. Chances are you don’t need or want these sort of things.

  • They can be time-consuming to set up. You have to go through the whole server install procedure, possibly having to configure or reconfigure things that are pointless for your use case, just to get the distro to install. Then you have to log in and customize it, adding an extra step.

If you’re able to use Debian as your distro, these problems aren’t so bad since Debian is sort of like Arch Linux - there’s a minimal base that you build on to turn it into a desktop or server. But for Ubuntu, there’s desktop images (not usually what you want), server images (not usually what you want), cloud images (might be usable but could be tricky), and Ubuntu Core images (definitely not what you want for most use cases). So how exactly do you make a minimal Ubuntu VM?

As hinted at above, a cloud image might work, but we’re going to use a different solution here. As it turns out, you don’t actually have to use a prebuilt image or installer to install Ubuntu. Similar to the installation procedure Arch Linux provides, you can install Ubuntu manually, giving you very good control over what goes into your VM and how it’s configured.

This guide is going to be focused on doing a manual installation of Ubuntu into a VM, using debootstrap to install the initial minimal system. You can use this same technique to install Ubuntu onto physical hardware by just booting from a live USB and then using this technique on your hardware’s physical disk(s). However we’re going to be primarily focused on using a VM right now. Also, the virtualization software we’re going to be working with is QEMU. If you’re using a different hypervisor like VMware, VirtualBox, or Hyper-V, you can make a new VM and then install Ubuntu manually into it the same way you would install Ubuntu onto physical hardware using this technique. QEMU, however, provides special tools that make this procedure easier, and QEMU is more flexible than other virtualization software in my experience. You can install it by running sudo apt install qemu-system-x86 on your host system.

With that laid out, let us begin.

Open a terminal on your physical machine, and make a directory for your new VM to reside in. I’ll use “~/VMs/Ubuntu” here.

mkdir ~/VMs/Ubuntu
cd ~/VMs/Ubuntu

Next, let’s make a virtual disk image for the VM using the qemu-img utility.

qemu-img create -f qcow2 ubuntu.img 32G

This will make a 32 GiB disk image - feel free to customize the size or filename as you see fit. The -f parameter at the beginning specifies the VM disk image format. QCOW2 is usually a good option since the image will start out small and then get bigger as necessary. However, if you’re already using a copy-on-write filesystem like BTRFS or ZFS, you might want to use -f raw rather than -f qcow2 - this will make a raw disk image file and avoid the overhead of the QCOW2 file format.

Now we need to attach the disk image to the host machine as a device. I usually do this with qemu-nbd, which can attach a QEMU-compatible disk image to your physical system as a network block device. These devices look and work just like physical disks, which makes them extremely handy for modifying the contents of a disk image.

qemu-nbd requires that the nbd kernel module be loaded, and at least on Ubuntu, it’s not loaded by default, so we need to load it before we can attach the disk image to our host machine.

sudo modprobe nbd
sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img

This will make our ubuntu.img file available through the /dev/nbd0 device. Make sure to specify the format via the -f switch, especially if you’re using a raw disk image. QEMU will keep you from writing a new partition table to the disk image if you give it a raw disk image without telling it directly that the disk image is raw.

Once your disk image is attached, we can partition it and format it just like a real disk. For simplicity’s sake, we’ll give the drive an MBR partition table, create a single partition enclosing all of the disk’s space, then format the partition as ext4.

sudo fdisk /dev/nbd0
n
p
1


w
sudo mkfs.ext4 /dev/nbd0p1

(The two blank lines are intentional - they just accept the default options for the partition’s first and last sector, which makes a partition that encloses all available space on the disk.)

Now we can mount the new partition.

mkdir vdisk
sudo mount /dev/nbd0p1 ./vdisk

Now it’s time to install the minimal Ubuntu system. You’ll need to know the first part of the codename for the Ubuntu version you intend to install. The codenames for Ubuntu releases are an adjective followed by the name of an animal, like “Jammy Jellyfish”. The first word (“Jammy” in this instance) is the one you need. These codenames are easy to look up online. Here’s the codenames for the currently supported LTS versions of Ubuntu, as well as the codename for the current development release:

+-------------------+-------+
| 20.04             | Focal |
+-------------------+-------+
| 22.04             | Jammy |
+-------------------+-------+
| 24.04 Development | Noble |
+-------------------+-------+

To install the initial minimal Ubuntu system, we’ll use the debootstrap utility. This utility will download and install the bare minimum packages needed to have a functional Ubuntu system. Keep in mind that the Ubuntu installation this tool makes is really minimal - it doesn’t even come with a bootloader or Linux kernel. We’ll need to make quite a few changes to this installation before it’s ready for use in a VM.

Assuming we’re installing Ubuntu 22.04 LTS into our VM, the command to use is:

sudo debootstrap jammy ./vdisk

After a few minutes, our new system should be downloaded and installed. (Note that debootstrap does require root privileges.)

Now we're ready to customize the VM! To do this, we'll use a utility called chroot - this utility allows us to "enter" an installed Linux system, so we can modify it without having to boot it. (This is done by changing the root directory (from the perspective of the chroot process) to whatever directory you specify, then launching a shell or program inside the specified directory. The shell or program will see its root directory as being the directory you specified, and voila, it's as if we're "inside" the installed system without having to boot it. This is a very weak form of containerization and shouldn't be relied on for security, but it's perfect for what we're doing.)

There’s one thing we have to account for before chrooting into our new Ubuntu installation. Some commands we need to run will assume that certain special directories are mounted properly - in particular, /proc should point to a procfs filesystem, /sys should point to a sysfs filesystem, /dev needs to contain all of the device files of our system, and /dev/pts needs to contain the device files for pseudoterminals (you don’t have to know what any of that means, just know that those four directories are important and have to be set up properly). If these directories are not properly mounted, some tools will behave strangely or not work at all. The easiest way to solve this problem is with bind mounts. These basically tell Linux to make the contents of one directory visible in some other directory too. (These are sort of like symlinks, but they work differently - a symlink says “I’m a link to something, go over here to see what I contain”, whereas a bind mount says “make this directory’s contents visible over here too”. The differences are subtle but important - a symlink can’t make files outside of a chroot visible inside the chroot. A bind mount, however, can.)

So let’s bind mount the needed directories from our system into the chroot:

sudo mount --bind /dev ./vdisk/dev
sudo mount --bind /proc ./vdisk/proc
sudo mount --bind /sys ./vdisk/sys
sudo mount --bind /dev/pts ./vdisk/dev/pts

And now we can chroot in!

sudo chroot ./vdisk

Run ping -c1 8.8.8.8 just to make sure that Internet access is working - if it’s not, you may need to copy the host’s /etc/resolv.conf file into the VM. However, you probably won’t have to do this. Assuming Internet is working, we can now start customizing things.

By default, debootstrap only enables the “main” repository of Ubuntu. This repository only contains free-and-open-source software that is supported by Canonical. This does *not* include most of the software available in Ubuntu - most of it is in the “universe”, “restricted”, and “multiverse” repositories. If you really know what you’re doing, you can leave some of these repositories out, but I would highly recommend you enable them. Also, only the “release” pocket is enabled by default - this pocket includes all of the software that came with your chosen version of Ubuntu when it was first released, but it doesn’t include bug fixes, security updates, or newer versions of software. All those are in the “updates”, “security”, and “backports” pockets.

To fix this, run the following block of code, adjusted for your release of Ubuntu:

tee /etc/apt/sources.list << ENDSOURCESLIST
deb http://archive.ubuntu.com/ubuntu jammy main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-updates main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-security main universe restricted multiverse
deb http://archive.ubuntu.com/ubuntu jammy-backports main universe restricted multiverse
ENDSOURCESLIST

Replace “jammy” with the codename corresponding to your chosen release of Ubuntu. Once you’ve run this, run cat /etc/apt/sources.list to make sure the file looks right, then run apt update to refresh your software database with the newly enabled repositories. Once that’s done, run apt full-upgrade to update any software in the base installation that’s out-of-date.

What exactly you install at this point is up to you, but here’s my list of recommendations:

  • linux-generic. Highly recommended. This provides the Linux kernel. Without it, you’re going to have significant trouble booting. You can replace this with a different kernel metapackage if you want to for some reason (like linux-lowlatency).

  • grub-pc. Highly recommended. This is the bootloader. You might be able to replace this with an alternative bootloader like systemd-boot.

  • vim (or some other decent text editor that runs in a terminal). Highly recommended. The minimal install of Ubuntu doesn’t come with a good text editor, and you’ll really want one of those most likely.

  • network-manager. Highly recommended. If you don’t install this or some other network manager, you won’t have Internet access. You can replace this with an alternative network manager if you’d like.

  • tmux. Recommended. Unless you’re going to install a graphical environment, you’ll probably want a terminal multiplexer so you don’t have to juggle TTYs (which is especially painful in QEMU).

  • openssh-server. Optional. This is handy since it lets you use your terminal emulator of choice on your physical machine to interface with the virtual machine. You won’t be stuck using a rather clumsy and slow TTY in a QEMU display.

  • pulseaudio. Very optional. Provides sound support within the VM.

  • icewm + xserver-xorg + xinit + xterm. Very optional. If you need or want a graphical environment, this should provide you with a fairly minimal and fast one. You’ll still log in at a TTY, but you can use startx to start a desktop.

Add whatever software you want to this list, remove whatever you don’t want, and then install it all with this command:

apt install listOfPackages

Replace “listOfPackages” with the actual list of packages you want to install. For instance, if I were to install everything in the above list except openssh-server, I would use:

apt install linux-generic grub-pc vim network-manager tmux icewm xserver-xorg xinit xterm

At this point our software is installed, but the VM still has a few things needed to get it going.

  • We need to install and configure the bootloader.

  • We need an /etc/fstab file, or the system will boot with the drive mounted read-only.

  • We should probably make a non-root user with sudo access.

  • There’s a file in Ubuntu that will prevent Internet access from working. We should delete it now.

The bootloader is pretty easy to install and configure. Just run:

sudo grub-install /dev/nbd0
sudo update-grub

For /etc/fstab, there are a few options. One particularly good one is to label the partition we installed Ubuntu into using e2label, then use that label as the ID of the drive we want to mount as root. That can be done like this:

e2label /dev/nbd0p1 ubuntu-inst
echo "LABEL=ubuntu-inst / ext4 defaults 0 1" > /etc/fstab

Making a user account is fairly easy:

adduser user # follow the prompts to create the user
adduser user sudo

And lastly, we should remove the Internet blocker file. I don’t understand why exactly this file exists in Ubuntu, but it does, and it causes problems for me when I make a minimal VM in this way. Removing it fixes the problem.

rm /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

EDIT: January 21, 2024: This rm command doesn’t actually work forever - an update to NetworkManager can end up putting this file back, breaking networking again. Rather than using rm on it, you should dpkg-divert it somewhere benign, for instance with dpkg-divert --divert /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf --rename /var/nm-globally-managed-devices-junk.old, which should persist even after an update.

And that’s it! Now we can exit the chroot, unmount everything, and detach the disk image from our host machine.

exit
sudo umount ./vdisk/dev/pts
sudo umount ./vdisk/dev
sudo umount ./vdisk/proc
sudo umount ./vdisk/sys
sudo umount ./vdisk
sudo qemu-nbd -d /dev/nbd0

Now we can try and boot the VM. But before doing that, it’s probably a good idea to make a VM launcher script. Run vim ./startVM.sh (replacing “vim” with your text editor of choice), then type the following contents into the file:

#!/bin/bash
qemu-system-x86_64 -enable-kvm -machine q35 -m 4G -smp 2 -vga qxl -display sdl -monitor stdio -device intel-hda -device hda-duplex -usb -device usb-tablet -drive file=./ubuntu.img,format=qcow2,if=virtio

Refer to the qemu-system-x86_64 manpage or QEMU Invocation documentation page at https://www.qemu.org/docs/master/system/invocation.html for more info on what all these options do. Basically this gives you a VM with 4 GB RAM, 2 CPU cores, decent graphics (not 3d accelerated but not as bad as plain VGA), and audio support. You can tweak the amount of RAM and number of CPU cores by changing the -m and -smp parameters respectively. You’ll have access to the QEMU monitor through whatever terminal you run the launcher script in, allowing you to do things like switch to a different TTY, insert and remove devices and storage media on the fly, and things like that.

Finally, it’s time to see if it works.

chmod +x ./startVM.sh
./startVM.sh

If all goes well, the VM should boot and you should be able to log in! If you installed IceWM and its accompanying software like mentioned earlier, try running startx once you log in. This should pop open a functional IceWM desktop.

Some other things you should test once you’re logged in:

  • Do you have Internet access? ping -c1 8.8.8.8 can be used to test. If you don’t have Internet, run sudo nmtui in a terminal and add a new Ethernet network within the VM, then try activating it. If you get an error about the Ethernet device being strictly unmanaged, you probably forgot to remove the /usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf file mentioned earlier.

  • Can you write anything to the drive? Try running touch test to make sure. If you can’t, you probably forgot to create the /etc/fstab file.

If either of these things don’t work, you can power off the VM, then re-attach the VM’s virtual disk to your host machine, mount it, and chroot in like this:

sudo qemu-nbd -f qcow2 -c /dev/nbd0 ./ubuntu.img
sudo mount /dev/nbd0p1 ./vdisk
sudo chroot vdisk

Since all you’ll be doing is writing or removing a file, you don’t need to bind mount all the special directories we had to work with earlier.

Once you’re done fixing whatever is wrong, you can exit the VM, unmount and detach its disk, and then try to boot it again like this:

exit
sudo umount vdisk
sudo qemu-nbd -d /dev/nbd0
./startVM.sh

You now have a fully functional, minimal VM! Some extra tips that you may find handy:

  • If you choose to install an SSH server into your VM, you can use the “hostfwd” setting in QEMU to forward a port on your local machine to port 22 within the VM. This will allow you to SSH into the VM. Add a parameter like -nic user,hostfwd=tcp:127.0.0.1:2222-:22 to your QEMU command in the “startVM.sh” script. This will forward port 2222 of your host machine to port 22 of the VM. Then you can SSH into the VM by running ssh user@127.0.0.1 -p 2222. The “hostfwd” QEMU feature is documented at https://www.qemu.org/docs/master/system/invocation.html - just search the page for “hostfwd” to find it.

  • If you intend to use the VM through SSH only and don’t want a QEMU window at all, remove the following three parameters from the QEMU command in “startVM.sh”:

    • -vga qxl

    • -display sdl

    • -monitor stdio

    Then add the following switch:

    • -nographic

    This will disable the graphical QEMU window entirely and provide no video hardware to the VM.

  • You can disable sound support by removing the following switches from the QEMU command in “startVM.sh”:

    • -device intel-hda

    • -device hda-duplex

There’s lots more you can do with QEMU and manual Ubuntu installations like this, but I think this should give you a good start. Hope you find this useful! God bless.

