
Feed aggregator

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations

Slashdot - Sat, 30/08/2025 - 2:45am
Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission

Read more of this story at Slashdot.

Vivaldi Browser Doubles Down On Gen AI Ban

Slashdot - Sat, 30/08/2025 - 2:02am
Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies." Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about." Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."

Read more of this story at Slashdot.

Battlefield 6 Dev Apologizes For Requiring Secure Boot To Power Anti-Cheat Tools

Slashdot - Sat, 30/08/2025 - 1:20am
An anonymous reader quotes a report from Ars Technica: Earlier this month, EA announced that players in its Battlefield 6 open beta on PC would have to enable Secure Boot in their Windows OS and BIOS settings. That decision proved controversial among players who weren't able to get the finicky low-level security setting working on their machines and others who were unwilling to allow EA's anti-cheat tools to once again have kernel-level access to their systems. Now, Battlefield 6 technical director Christian Buhl is defending that requirement as something of a necessary evil to combat cheaters, even as he apologizes to any potential players that it has kept away. "The fact is I wish we didn't have to do things like Secure Boot," Buhl said in an interview with Eurogamer. "It does prevent some players from playing the game. Some people's PCs can't handle it and they can't play: that really sucks. I wish everyone could play the game with low friction and not have to do these sorts of things." Throughout the interview, Buhl admits that even requiring Secure Boot won't completely eradicate cheating in Battlefield 6 long term. Even so, he offered that the Javelin anti-cheat tools enabled by Secure Boot's low-level system access were "some of the strongest tools in our toolbox to stop cheating. Again, nothing makes cheating impossible, but enabling Secure Boot and having kernel-level access makes it so much harder to cheat and so much easier for us to find and stop cheating." [...] Despite all these justifications for the Secure Boot requirement on EA's part, it hasn't been hard to find people complaining about what they see as an onerous barrier to playing an online shooter. A quick Reddit search turns up dozens of posts complaining about the difficulty of getting Secure Boot on certain PC configurations or expressing discomfort about installing what they consider a "malware rootkit" on their machine. "I want to play this beta but A) I'm worried about bricking my PC. B) I'm worried about giving EA complete access to my machine," one representative Redditor wrote.

Read more of this story at Slashdot.

Meta Created Flirty Chatbots of Celebrities Without Permission

Slashdot - Sat, 30/08/2025 - 12:40am
Reuters has found that Meta appropriated the names and likenesses of celebrities to create dozens of flirty social-media chatbots without their permission. "While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift 'parody' bots." From the report: Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. "Pretty cute, huh?" the avatar wrote beneath the picture. All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups. Some of the AI-generated celebrity content was particularly risque: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread. Meta spokesman Andy Stone told Reuters that Meta's AI tools shouldn't have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," he said. While Meta's rules also prohibit "direct impersonation," Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren't. Meta deleted about a dozen of the bots, both "parody" avatars and unlabeled ones, shortly before this story's publication.

Read more of this story at Slashdot.

Linus Torvalds Marks Bcachefs as Now 'Externally Maintained'

Slashdot - Sat, 30/08/2025 - 12:00am
Linus Torvalds updated the kernel's MAINTAINERS file to mark Bcachefs as "externally maintained," signaling he won't accept new Bcachefs pull requests for now. "MAINTAINERS: mark bcachefs externally maintained," wrote Torvalds with the patch. "As per many long discussion threads, public and private." "The Bcachefs code is still present in the mainline Linux kernel likely to prevent users from having any immediate fall-out in Bcachefs file-systems they may already be using, but it doesn't look like Linus Torvalds will be honoring any new Bcachefs pull requests in the near future," adds Phoronix's Michael Larabel.

Read more of this story at Slashdot.

FCC Rejects Calls For Cable-like Fees on Broadband Providers

Slashdot - Fri, 29/08/2025 - 11:21pm
The Federal Communications Commission has rejected a call from the National Association of Broadcasters and some industry trade groups to impose cable-style regulatory fees on streaming services, tech companies and pure broadband providers. From a report: In a Report and Order issued on Friday, the FCC reaffirmed that regulatory fees are calculated based on the number of full-time equivalent employees assigned to specific industries under the agency's jurisdiction. Broadcasters, satellite operators and other licensees are already assessed annual payments, which help fund the FCC's operational costs. The NAB, in concert with other groups like Telesat, Iridium and the State Broadcasters Associations, pressed the FCC to expand the list of fee payers to include broadband providers and large technology firms. They argued that companies operating online platforms and broadband services rely on FCC resources and should contribute to the costs of regulation. "Big Tech should not be permitted to free ride on the FCC's oversight," NAB said in submitted comments earlier this year. The NAB argued that online platforms enjoy regulatory benefits without paying into the agency's budget, as broadcasters and satellite operators do.

Read more of this story at Slashdot.

WhatsApp Fixes 'Zero-Click' Bug Used To Hack Apple Users With Spyware

Slashdot - Fri, 29/08/2025 - 10:40pm
An anonymous reader quotes a report from TechCrunch: WhatsApp said on Friday that it fixed a security bug in its iOS and Mac apps that was being used to stealthily hack into the Apple devices of "specific targeted users." The Meta-owned messaging app giant said in its security advisory that it fixed the vulnerability, known officially as CVE-2025-55177, which was used alongside a separate flaw found in iOS and Macs, which Apple fixed last week and tracks as CVE-2025-43300. Apple said at the time that the flaw was used in an "extremely sophisticated attack against specific targeted individuals." Now we know that dozens of WhatsApp users were targeted with this pair of flaws. Donncha O Cearbhaill, who heads Amnesty International's Security Lab, described the attack in a post on X as an "advanced spyware campaign" that targeted users over the past 90 days, or since the end of May. O Cearbhaill described the pair of bugs as a "zero-click" attack, meaning it does not require any interaction from the victim, such as clicking a link, to compromise their device. The two bugs chained together allow an attacker to deliver a malicious exploit through WhatsApp that's capable of stealing data from the user's Apple device. Per O Cearbhaill, who posted a copy of the threat notification that WhatsApp sent to affected users, the attack was able to "compromise your device and the data it contains, including messages." It's not immediately clear who, or which spyware vendor, is behind the attacks. When reached by TechCrunch, Meta spokesperson Margarita Franklin confirmed the company detected and patched the flaw "a few weeks ago" and that the company sent "less than 200" notifications to affected WhatsApp users. The spokesperson did not say, when asked, if WhatsApp has evidence to attribute the hacks to a specific attacker or surveillance vendor.

Read more of this story at Slashdot.

Pentagon Halts Chinese Coders Affecting DOD Cloud Systems

Slashdot - Fri, 29/08/2025 - 10:01pm
DOD: Defense Secretary Pete Hegseth said the Pentagon has halted a decade-old Microsoft program that has allowed Chinese coders, remotely supervised by U.S. contractors, to work on sensitive DOD cloud systems. In a digital video address to the public posted yesterday, the secretary said DOD was made aware of the "digital escorts" program last month and that the program has exposed the Defense Department to unacceptable risk -- despite being designed to comply with government contracting rules. "If you're thinking 'America first,' and common sense, this doesn't pass either of those tests," Hegseth said, adding that he initiated an immediate review of the program upon learning of it. "I want to report our initial findings. ... The use of Chinese nationals to service Department of Defense cloud environments? It's over," he said. Additionally, Hegseth said DOD has issued a formal letter of concern to Microsoft, documenting a breach of trust, and that DOD is requiring a third-party audit of the digital escorts program to pore over the code and submissions made by Chinese nationals. The audit will be free of charge to U.S. taxpayers, he said.

Read more of this story at Slashdot.

FTC Claims Gmail Filtering Republican Emails Threatens 'American Freedoms'

Slashdot - Fri, 29/08/2025 - 9:25pm
Federal Trade Commission Chairman Andrew Ferguson accused Google of using "partisan" spam filtering in Gmail that sends Republican fundraising emails to the spam folder while delivering Democratic emails to inboxes. From a report: Ferguson sent a letter yesterday to Alphabet CEO Sundar Pichai, accusing the company of "potential FTC Act violations related to partisan administration of Gmail." Ferguson's letter revives longstanding Republican complaints that were previously rejected by a federal judge and the Federal Election Commission. "My understanding from recent reporting is that Gmail's spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats," Ferguson wrote. The FTC chair cited a recent New York Post report on the alleged practice. The letter told Pichai that if "Gmail's filters keep Americans from receiving speech they expect, or donating as they see fit, the filters may harm American consumers and may violate the FTC Act's prohibition of unfair or deceptive trade practices." Ferguson added that any "act or practice inconsistent with" Google's obligations under the FTC Act "could lead to an FTC investigation and potential enforcement action."

Read more of this story at Slashdot.

Microsoft Says Recent Windows Update Didn't Kill Your SSD

Slashdot - Fri, 29/08/2025 - 8:41pm
Microsoft has found no link between the August 2025 KB5063878 security update and customer reports of failure and data corruption issues affecting solid-state drives (SSDs) and hard disk drives (HDDs). From a report: Redmond first told BleepingComputer last week that it is aware of users reporting SSD failures after installing this month's Windows 11 24H2 security update. In a subsequent service alert seen by BleepingComputer, Redmond said that it was unable to reproduce the issue on up-to-date systems and began collecting user reports with additional details from those affected. "After thorough investigation, Microsoft has found no connection between the August 2025 Windows security update and the types of hard drive failures reported on social media," Microsoft said in an update to the service alert this week. "As always, we continue to monitor feedback after the release of every Windows update, and will investigate any future reports."

Read more of this story at Slashdot.

Today's Game Consoles Are Historically Overpriced

Slashdot - Fri, 29/08/2025 - 8:01pm
ArsTechnica: Today's video game consoles are hundreds of dollars more expensive than you'd expect based on historic pricing trends. That's according to an Ars Technica analysis of decades of pricing data and price-cut timing across dozens of major US console releases. The overall direction of this trend has been apparent to industry watchers for a while now. Nintendo, Sony, and Microsoft have failed to cut their console prices in recent years and have instead been increasing the nominal MSRP for many current consoles in the past six months. But when you crunch the numbers, it's pretty incredible just how much today's console prices defy historic expectations, even when you account for higher-than-normal inflation in recent years. If today's consoles were seeing anything like what used to be standard price cuts over time, we could be paying around $200 today for pricey systems like the Switch OLED, PS5 Digital Edition, and Xbox Series S.

Read more of this story at Slashdot.

Macron Vows Retaliation If Europe's Digital Sovereignty Attacked

Slashdot - Fri, 29/08/2025 - 7:22pm
French President Emmanuel Macron vowed a strong response [non-paywalled source] if any country takes measures that undermine Europe's digital sovereignty. From a report: Earlier this week, US President Donald Trump threatened to impose fresh tariffs and export restrictions on countries that have digital services taxes or regulations that harm American tech companies. France was among the first nations to implement a digital services tax. "We will not let anyone else decide for us on this matter," he told reporters in Toulon, France, on Friday. "We cannot allow our digital sector or the regulations we have chosen for ourselves, which are a necessity, to be threatened today." Trump has long railed against EU tech and antitrust regulation over US tech giants including Alphabet's Google and Apple.

Read more of this story at Slashdot.

Thibault Martin: TIL that You can spot base64 encoded JSON, certificates, and private keys

Planet GNOME - Tue, 05/08/2025 - 3:00pm

I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit to a GitHub repository. The file looked like this:

{ "serial": 13, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "meta": { "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0=" }, "encrypted_data": "ONXZsJhz37eJA[...]", "encryption_version": "v0" }

Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

I was incredulous but gave it a go, and it worked!!

$ echo "eyJzYW[...]" | base64 -d {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}

I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equal signs at the end for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

He replied,

Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

I did a few tests in my terminal, and he was right! You can spot base64 json with your naked eye, and you don't need to decode it on the fly!

$ echo "{" | base64 ewo= $ echo "{\"" | base64 eyIK $ echo "{\"s" | base64 eyJzCg== $ echo "{\"a" | base64 eyJhCg== $ echo "{\"word\"" | base64 eyJ3b3JkIgo=

But it gets even better! As tyzbit reported on the fediverse, you can even spot base64-encoded certificates and private keys! They all start with LS, which reminds me of the LS in "TLS certificate."

$ echo -en "-----BEGIN CERTIFICATE-----" | base64 LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t

Errata

As pointed out by gnabgib and athorax on Hacker News, this actually detects the leading dashes of the PEM format, commonly used for certificates; a YAML file that starts with --- will yield the same result:

$ echo "---\n" | base64 LS0tXG4K

This is not a silver bullet!
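
If you want to automate the eyeballing, here is a minimal Python sketch of the same heuristic. It only restates the prefixes shown above ("eyJ" for JSON objects, "LS0t" for anything starting with dashes); the helper names are mine, not from the post:

    import base64
    import json

    def sniff(value: str) -> str:
        """Guess what a base64 string holds from its first few characters."""
        if value.startswith("eyJ"):      # '{"' encodes to 'eyJ'
            return "probably base64-encoded JSON"
        if value.startswith("LS0t"):     # '---' encodes to 'LS0t' (PEM header... or YAML)
            return "probably base64-encoded PEM (or anything else starting with ---)"
        return "no idea"

    def try_decode_json(value: str):
        """Return the decoded JSON object, or None if it isn't base64-encoded JSON."""
        try:
            return json.loads(base64.b64decode(value))
        except ValueError:
            return None

    key = "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="
    print(sniff(key))            # probably base64-encoded JSON
    print(try_decode_json(key))  # {'salt': '...', 'iterations': 600000, ...}

Like the trick itself, this only tells you what a string probably is, not what it actually contains.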

Thanks Davide and Denis for showing me this simple but pretty useful trick, and thanks tyzbit for completing it with certs and private keys!

Matthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

Planet GNOME - Tue, 05/08/2025 - 2:30am
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.
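
To put rough numbers on this, here is a quick back-of-the-envelope sketch. The 320x200 resolution is my assumption (it is Doom's usual mode, but the post doesn't spell it out), and the six-plane figure corresponds to the EHB mode described below:

    # Chunky VGA-style framebuffer vs Amiga bitplanes for a 320x200 screen.
    WIDTH, HEIGHT = 320, 200

    chunky = WIDTH * HEIGHT                    # 64000 bytes: one byte per pixel, 256 colours

    def planar(planes):
        return (WIDTH // 8) * HEIGHT * planes  # one bit per pixel, per plane

    print(chunky)      # 64000
    print(planar(1))   # 8000  bytes for a monochrome screen
    print(planar(6))   # 48000 bytes for 64-colour EHB

So the planar layout does save memory, but every pixel update now touches six separate buffers instead of a single byte.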

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
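
To make that concrete, here is a minimal Python sketch of what such a copper list could look like as raw 16-bit words. The register offsets (BPL1PTH at $0E0 and the pointer registers that follow it) and the $FFFF/$FFFE end-of-list wait are as I remember them from the hardware reference manual, so treat the specific values as assumptions rather than verified facts:

    def build_copper_list(bitplane_addresses):
        """Build a copper list that re-points the chipset at our bitplanes every frame."""
        BPL1PTH = 0x0E0                 # first bitplane pointer, high word; low word follows at +2
        words = []
        for i, addr in enumerate(bitplane_addresses):
            reg = BPL1PTH + i * 4
            words += [reg,     (addr >> 16) & 0xFFFF]   # MOVE: high half of the plane address
            words += [reg + 2,  addr        & 0xFFFF]   # MOVE: low half of the plane address
        # WAIT for a beam position that never arrives: the conventional end-of-list marker.
        # The hardware restarts the copper from the top of the list at the start of each frame.
        words += [0xFFFF, 0xFFFE]
        return words

    planes = [0x00010000 + i * 10240 for i in range(6)]   # hypothetical chip RAM addresses
    print(["%04X" % w for w in build_copper_list(planes)])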

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
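
The nested loops the author describes might look roughly like this in Python. This is a deliberately naive sketch of chunky-to-planar conversion, not the project's actual code; the function and variable names are mine:

    def chunky_to_planar(chunky, width, height, planes=6):
        """Convert a one-byte-per-pixel 'chunky' buffer into Amiga-style bitplanes."""
        bytes_per_row = width // 8
        bitplanes = [bytearray(bytes_per_row * height) for _ in range(planes)]
        for y in range(height):
            for x in range(width):
                colour = chunky[y * width + x]   # palette index, 0..63 in EHB
                byte = y * bytes_per_row + x // 8
                bit = 7 - (x % 8)                # leftmost pixel lives in the high bit
                for p in range(planes):
                    if colour & (1 << p):        # plane p holds bit p of the colour index
                        bitplanes[p][byte] |= 1 << bit
        return bitplanes

Each output byte gets touched up to eight times per frame here, which is exactly why the conversion happens in buffers on the Linux side and only the finished planes get copied over the slow Amiga bus.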

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.
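
Put together, the double-buffered frame loop is shaped roughly like this, reusing the chunky_to_planar sketch above. This is only an illustration of the scheme described in the post; draw_doom_frame, copy_to_chip_ram and activate_copper_list are hypothetical stand-ins, not functions from the actual code:

    front, back = 0, 1
    while True:
        chunky = draw_doom_frame()                         # hypothetical: Doom renders a chunky frame
        planes = chunky_to_planar(chunky, WIDTH, HEIGHT)   # convert on the fast ARM side
        copy_to_chip_ram(planes, buffer=back)              # hypothetical: push over the ~7MHz bus
        activate_copper_list(back)                         # hypothetical: next frame scans the back buffer
        front, back = back, front                          # swap roles for the next frame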

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM


Victor Ma: It's alive!

Planet GNOME - Tue, 05/08/2025 - 2:00am

In the last two weeks, I’ve been working on my lookahead-based word suggestion algorithm. And it’s finally functional! There’s still a lot more work to be done, but it’s great to see that the original problem I set out to solve is now solved by my new algorithm.

Without my changes

Here’s what the upstream Crosswords Editor looks like, with a problematic grid:

The editor suggests words like WORD and WORM for the 4-Across slot. But none of the suggestions are valid, because the grid is actually unfillable. This means that there are no possible word suggestions for the grid.

The words that the editor suggests do work for 4-Across. But they do not work for 4-Down. They all cause 4-Down to become a nonsensical word.

The problem here is that the current word suggestion algorithm only looks at the row and column where the cursor is. So it sees 4-Across and 1-Down—but it has no idea about 4-Down. If it could see 4-Down, then it would realize that no word that fits in 4-Across also fits in 4-Down—and it would return an empty word suggestion list.

With my changes

My algorithm fixes the problem by considering every intersecting slot of the current slot. In the example grid, the current slot is 4-Across. So, my algorithm looks at 1-Down, 2-Down, 3-Down, and 4-Down. When it reaches 4-Down, it sees that no letter fits in the empty cell. Every possible letter causes 4-Across, 4-Down, or both to contain an invalid word. So, my algorithm correctly returns an empty list of word suggestions.
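
A minimal Python sketch of that check (not the actual Crosswords Editor code; the toy word list and helper names are mine) could look like this:

    WORDS = {"WORD", "WORM", "WARM", "AREA", "DOOR"}   # toy word list

    def fits(pattern):
        """True if some word matches a pattern like 'W?RD', where '?' is an empty cell."""
        return any(len(word) == len(pattern) and
                   all(p in ("?", c) for p, c in zip(pattern, word))
                   for word in WORDS)

    def viable_letters(current_slot, crossing_slots):
        """Letters for the shared empty cell that keep the current and every crossing slot fillable.

        Simplification: assumes the shared empty cell is the first '?' in each pattern."""
        good = []
        for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            if fits(current_slot.replace("?", letter, 1)) and \
               all(fits(slot.replace("?", letter, 1)) for slot in crossing_slots):
                good.append(letter)
        return good   # an empty list means there are no valid suggestions at all

If viable_letters comes back empty for some crossing slot's cell, the editor can return an empty suggestion list instead of offering words that would make the grid unfillable.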

Julian Hofer: Git Forges Made Simple: gh & glab

Planet GNOME - Mon, 04/08/2025 - 2:00am

When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

  • Python virtual environments, and
  • Git together with GitHub.

Solving the former is nowadays my job, so let me write up my current workflow for the latter.

Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn’t know anything about these forges, which is why CLI tools exist to close that gap. It’s still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there’s gh and for GitLab there’s glab. Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don’t like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH.

Luckily, they also have mostly the same command line interface. First, you have to log in with the command that corresponds to your Git forge: gh auth login for GitHub, or glab auth login for GitLab.

In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH.

Working Solo

The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone the repository:

git clone <REPO>">

From that point on, all you have to do is:

  • do some work
  • commit
  • push
  • repeat

On its own there aren't a lot of reasons to choose this approach over a file syncing service like Nextcloud. No, the main reason you do this is that you are either already familiar with the Git workflow or want to get used to it.

Contributing

Git truly shines as soon as you start collaborating with others. On a high level this works like this:

  • You modify some files in a Git repository,
  • you propose your changes via the Git forge,
  • maintainers of the repository review your changes, and
  • as soon as they are happy with your changes, they will integrate your changes into the main branch of the repository.
Starting Out With a Fresh Branch

Let’s go over the exact commands.

  1. You will want to start out with the latest upstream changes in the default branch. You can find out its name by running the following command:

    git ls-remote --symref origin HEAD
  2. Chances are it displays either refs/heads/main or refs/heads/master. The last component is the branch, so the default branch will be called either main or master. Before you start a new branch, you will run the following two commands to make sure you start with the latest state of the repository:

    git switch <DEFAULT-BRANCH>
    git pull
  3. You create and switch to a new branch with:

    git switch --create <BRANCH>

    That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

Open a Pull Request

The next step is to open a pull request on GitHub or merge request on GitLab. Even though they are named differently, they are exactly the same thing. Therefore, I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don’t necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their own forks and open pull requests based on these branches.

  1. If you don’t have push access to the repository, now it’s time to create your own fork.

  2. Then, you open the pull request with gh pr create or glab mr create.

Checking out Pull Requests

Often, you want to check out a pull request on your own machine to verify that it works as expected. With gh and glab this is a single command: gh pr checkout or glab mr checkout, followed by the number of the pull request.

Emmanuele Bassi: Governance in GNOME

Planet GNOME - Sun, 03/08/2025 - 10:48pm
How do things happen in GNOME?

Things happen in GNOME? Could have fooled me, right?

Of course, things happen in GNOME. After all, we have been releasing every six months, on the dot, for nearly 25 years. Assuming we’re not constantly re-releasing the same source files, then we have to come to the conclusion that things change inside each project that makes GNOME, and thus things happen that involve more than one project.

So let’s roll back a bit.

GNOME’s original sin

We all know Havoc Pennington’s essay on preferences; it’s one of GNOME’s foundational texts, we refer to it pretty much constantly both inside and outside the contributors community. It has guided our decisions and taste for over 20 years. As far as foundational text goes, though, it applies to design philosophy, not to project governance.

When talking about the inception and technical direction of the GNOME project there are really two foundational texts that describe the goals of GNOME, as well as the mechanisms that are employed to achieve those goals.

The first one is, of course, Miguel’s announcement of the GNOME project itself, sent to the GTK, Guile, and (for good measure) the KDE mailing lists:

We will try to reuse the existing code for GNU programs as much as possible, while adhering to the guidelines of the project. Putting nice and consistent user interfaces over all-time favorites will be one of the projects. — Miguel de Icaza, “The GNOME Desktop project.” announcement email

Once again, everyone related to the GNOME project is (or should be) familiar with this text.

The second foundational text is not as familiar, outside of the core group of people that were around at the time. I am referring to Derek Glidden’s description of the differences between GNOME and KDE, written five years after the inception of the project. I isolated a small fragment of it:

Development strategies are generally determined by whatever light show happens to be going on at the moment, when one of the developers will leap up and scream “I WANT IT TO LOOK JUST LIKE THAT” and then straight-arm his laptop against the wall in an hallucinogenic frenzy before vomiting copiously, passing out and falling face-down in the middle of the dance floor. — Derek Glidden, “GNOME vs KDE”

What both texts have in common is subtle, but explains the origin of the project. You may not notice it immediately, but once you see it you can't unsee it: it's the over-reliance on personal projects and taste, to be sublimated into a shared vision. A "bottom up" approach, with "nice and consistent user interfaces" bolted on top of "all-time favorites", with zero indication of how those nice and consistent UIs would work on extant code bases, all driven by somebody with a vision—drug induced or otherwise—who decides to lead the project towards its implementation.

It’s been nearly 30 years, but GNOME still works that way.

Sure, we’ve had a HIG for 25 years, and the shared development resources that the project provides tend to mask this, to the point that everyone outside the project assumes that all people with access to the GNOME commit bit work on the whole project, as a single unit. If you are here, listening (or reading) to this, you know it’s not true. In fact, it is so comically removed from the lived experience of everyone involved in the project that we generally joke about it.

Herding cats and vectors sum

During my first GUADEC, back in 2005, I saw a great slide from Seth Nickell, one of the original GNOME designers. It showed GNOME contributors represented as a jumble of vectors going in all directions, cancelling each component out; and the occasional movement in the project was the result of somebody pulling/pushing harder in their direction.

Of course, this is not the exclusive province of GNOME: you could take most complex free and open source software projects and draw a similar diagram. I contend, though, that when it comes to GNOME this is not emergent behaviour but it's baked into the project from its very inception: a loosey-goosey collection of cats, herded together by whoever shows up with "a vision", but, also, a collection of loosely coupled projects. Over the years we tried to put to rest the notion that GNOME is a box of LEGO, meant to be assembled together by distributors and users in the way they most like it; while our software stack has graduated from the "thrown together at the last minute" quality of its first decade, our community is still very much following that very same model; the only way it seems to work is because we have a few people maintaining a lot of components.

On maintainers

I am a software nerd, and one of the side effects of this terminal condition is that I like optimisation problems. Optimising software is inherently boring, though, so I end up trying to optimise processes and people. The fundamental truth of process optimisation, just like software, is to avoid unnecessary work—which, in some cases, means optimising away the people involved.

I am afraid I will have to be blunt, here, so I am going to ask for your forgiveness in advance.

Let’s say you are a maintainer inside a community of maintainers. Dealing with people is hard, and the lord forbid you talk to other people about what you’re doing, what they are doing, and what you can do together, so you only have a few options available.

The first one is: you carve out your niche. You start, or take over, a project, or an aspect of a project, and you try very hard to make yourself indispensable, so that everything ends up passing through you, and everyone has to defer to your taste, opinion, or edict.

Another option: API design is opinionated, and reflects the thoughts of the person behind it. By designing platform API, you try to replicate your thoughts, taste, and opinions into the minds of the people using it, like the eggs of a parasitic wasp; because if everybody thinks like you, then there won't be conflicts, and you won't have to deal with details, like "how to make this application work", or "how to share functionality"; or, you know, having to develop a theory of mind for relating to other people.

Another option: you try to reimplement the entirety of a platform by yourself. You start a bunch of projects, which require starting a bunch of dependencies, which require refactoring a bunch of libraries, which ends up cascading into half of the stack. Of course, since you're by yourself, you end up with a consistent approach to everything. Everything is as it ought to be: fast, lean, efficient, a reflection of your taste, commitment, and ethos. You made everyone else redundant, which means people depend on you, but also nobody is interested in helping you out, because you are now taken for granted, on the one hand, and nobody is able to get a word in edgewise about what you made on the other.

I purposefully did not name names, even though we can all recognise somebody in these examples. For instance, I recognise myself. I have been all of these examples, at one point or another over the past 20 years.

Painting a target on your back

But if this is what it looks like from within a project, what it looks like from the outside is even worse.

Once you start dragging other people, you raise your visibility; people start learning your name, because you appear in the issue tracker, on Matrix/IRC, on Discourse and Planet GNOME. Youtubers and journalists start asking you questions about the project. Randos on web forums start associating you with everything GNOME does, or does not; with features, design, and bugs. You become responsible for every decision, whether you are or not, and this leads to being the embodiment of all evil the project does. You'll get hate mail, you'll be harassed, your words will be used against you and the project for ever and ever.

Burnout and you

Of course, that ends up burning people out; it would be absurd if it didn’t. Even in the best case possible, you’ll end up burning out just by reaching empathy fatigue, because everyone has access to you, and everyone has their own problems and bugs and features and wouldn’t it be great to solve every problem in the world? This is similar to working for non profits as opposed to the typical corporate burnout: you get into a feedback loop where you don’t want to distance yourself from the work you do because the work you do gives meaning to yourself and to the people that use it; and yet working on it hurts you. It also empowers bad faith actors to hound you down to the ends of the earth, until you realise that turning sand into computers was a terrible mistake, and we should have torched the first personal computer down on sight.

Governance

We want to have structure, so that people know what to expect and how to navigate the decision making process inside the project; we also want to avoid having a sacrificial lamb that takes on all the problems in the world on their shoulders until we burn them down to a cinder and they have to leave. We’re 28 years too late to have a benevolent dictator, self-appointed or otherwise, and we don’t want to have a public consultation every time we want to deal with a systemic feature. What do we do?

Examples

What do other projects have to teach us about governance? We are not the only complex free software project in existence, and it would be an appalling measure of narcissism to believe that we're special in any way, shape or form.

Python

We should all know what a Python PEP is, but if you are not familiar with the process I strongly recommend going through it. It’s well documented, and pretty much the de facto standard for any complex free and open source project that has achieved escape velocity from a centralised figure in charge of the whole decision making process. The real achievement of the Python community is that it adopted this policy long before their centralised figure called it quits. The interesting thing of the PEP process is that it is used to codify the governance of the project itself; the PEP template is a PEP; teams are defined through PEPs; target platforms are defined through PEPs; deprecations are defined through PEPs; all project-wide processes are defined through PEPs.

Rust

Rust has a similar process for language, tooling, and standard library changes, called “RFC”. The RFC process is more lightweight on the formalities than Python’s PEPs, but it’s still very well defined. Rust, being a project that came into existence in a Post-PEP world, adopted the same type of process, and used it to codify teams, governance, and any and all project-wide processes.

Fedora

Fedora change proposals exist to discuss and document both self-contained changes (usually fairly uncontroversial, given that they are proposed by the same owners of the module being changed) and system-wide changes. The main difference between them is that most of the elements of a system-wide change proposal are required, whereas for self-contained proposals they can be optional; for instance, a system-wide change must have a contingency plan, a way to test it, and the impact on documentation and release notes, whereas a self-contained change does not.

GNOME

Turns out that we once did have “GNOME Enhancement Proposals” (GEP), mainly modelled on Python’s PEP from 2002. If this comes as a surprise, that’s because they lasted for about a year, mainly because it was a reactionary process to try and funnel some of the large controversies of the 2.0 development cycle into a productive outlet that didn’t involve flames and people dramatically quitting the project. GEPs failed once the community fractured, and people started working in silos, either under their own direction or, more likely, under their management’s direction. What’s the point of discussing a project-wide change, when that change was going to be implemented by people already working together?

The GEP process mutated into the lightweight "module proposal" process, where people discussed adding and removing dependencies on the desktop development mailing list—something we also lost over the 2.x cycle, mainly because the amount of discussions over time tended towards zero. The people involved with the change knew what those modules brought to the release, and people unfamiliar with them were either giving out unsolicited advice, or were simply not reached by the desktop development mailing list. The discussions turned into external dependencies notifications, which also dried up because apparently asking to compose an email to notify the release team that a new dependency was needed to build a core module was far too much of a bother for project maintainers.

The creation and failure of GEP and module proposals is both an indication of the need for structure inside GNOME, and how this need collides with the expectation that project maintainers have not just complete control over every aspect of their domain, but that they can also drag out the process until all the energy behind it has dissipated. Being in charge for the long run allows people to just run out the clock on everybody else.

Goals

So, what should be the goal of a proper technical governance model for the GNOME project?

Diffusing responsibilities

This should be goal zero of any attempt at structuring the technical governance of GNOME. We have too few people in too many critical positions. We can call it "efficiency", we can call it "bus factor", we can call it "bottleneck", but the result is the same: the responsibility for anything is too concentrated. This is how you get conflict. This is how you get burnout. This is how you paralyse a whole project. By having too few people in positions of responsibility, we don't have enough slack in the governance model; it's an illusion of efficiency.

Responsibility is not something to hoard: it’s something to distribute.

Empowering the community

The community of contributors should be able to know when and how a decision is made; it should be able to know what to do once a decision is made. Right now, the process is opaque because it’s done inside a million different rooms, and, more importantly, it is not recorded for posterity. Random GitLab issues should not be the only place where people can be informed that some decision was taken.

Empowering individuals

Individuals should be able to contribute to a decision without necessarily becoming responsible for a whole project. It’s daunting, and requires a measure of hubris that cannot be allowed to exist in a shared space. In a similar fashion, we should empower people that want to contribute to the project by reducing the amount of fluff coming from people with zero stakes in it, and are interested only in giving out an opinion on their perfectly spherical, frictionless desktop environment.

It is free and open source software, not free and open mic night down at the pub.

Actual decision making process

We say we work by rough consensus, but if a single person is responsible for multiple modules inside the project, we’re just deceiving ourselves. I should not be able to design something on my own, commit it to all projects I maintain, and then go home, regardless of whether what I designed is good or necessary.

Proposed GNOME Changes✝

✝ Name subject to change

PGCs

We have better tools than what the GEP used to use and be. We have better communication venues in 2025; we have better validation; we have better publishing mechanisms.

We can take a lightweight approach, with a well-defined process, and use it not for actual design or decision-making, but for discussion and documentation. If you are trying to design something and you use this process, you are by definition Doing It Wrong™. You should have a design ready, and series of steps to achieve it, as part of a proposal. You should already know the projects involved, and already have an idea of the effort needed to make something happen.

Once you have a formal proposal, you present it to the various stakeholders, and iterate over it to improve it, clarify it, and amend it, until you have something that has a rough consensus among all the parties involved. Once that’s done, the proposal is now in effect, and people can refer to it during the implementation, and in the future. This way, we don’t have to ask people to remember a decision made six months, two years, ten years ago: it’s already available.

Editorial team

Proposals need to be valid, in order to be presented to the community at large; that validation comes from an editorial team. The editors of the proposals are not there to evaluate its contents: they are there to ensure that the proposal is going through the expected steps, and that discussions related to it remain relevant and constrained within the accepted period and scope. They are there to steer the discussion, and avoid architecture astronauts parachuting into the issue tracker or Discourse to give their unwarranted opinion.

Once the proposal is open, the editorial team is responsible for its inclusion in the public website, and for keeping track of its state.

Steering group

The steering group is the final arbiter of a proposal. They are responsible for accepting it, or rejecting it, depending on the feedback from the various stakeholders. The steering group does not design or direct GNOME as a whole: they are the ones that ensure that communication between the parts happens in a meaningful manner, and that rough consensus is achieved.

The steering group is also, by design, not the release team: it is made of representatives from all the teams related to technical matters.

Is this enough?

Sadly, no.

Reviving a process for proposing changes in GNOME without addressing the shortcomings of its first iteration would inevitably lead to a repeat of its results.

We have better tooling, but the problem is still that we’re demanding that each project maintainer gets on board with a process that has no mechanism to enforce compliance.

Once again, the problem is that we have a bunch of fiefdoms that need to be opened up to ensure that more people can work on them.

Whither maintainers

In what was, in retrospect, possibly one of my least gracious and yet most prophetic moments on the desktop development mailing list, I once said that, if it were possible, I would have already replaced all GNOME maintainers with a shell script. Turns out that we did replace a lot of what maintainers used to do, and we used a large Python service to do that.

Individual maintainers should not exist in a complex project—for both the project’s and the contributors’ sake. They are inefficiency made manifest, a bottleneck, a point of contention in a distributed environment like GNOME. Luckily for us, we almost made them entirely redundant already! Thanks to the release service and CI pipelines, we don’t need a person spinning up a release archive and uploading it into a file server. We just need somebody to tag the source code repository, and anybody with the right permissions could do that.

We need people to review contributions; we need people to write release notes; we need people to triage the issue tracker; we need people to contribute features and bug fixes. None of those tasks require the “maintainer” role.

So, let’s get rid of maintainers once and for all. We can delegate the actual release tagging of core projects and applications to the GNOME release team; they are already releasing GNOME anyway, so what’s the point in having them wait every time for somebody else to do individual releases? All people need to do is to write down what changed in a release, and that should be part of a change itself; we have centralised release notes, and we can easily extract the list of bug fixes from the commit log. If you can ensure that a commit message is correct, you can also get in the habit of updating the NEWS file as part of a merge request.
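
As a purely illustrative sketch of how that habit could be backed by tooling (assuming GitLab CI and a NEWS file at the repository root; the job name and script are hypothetical, not an existing GNOME check):

```yaml
# Hypothetical merge-request job: fail if the NEWS file was not touched.
check-news-entry:
  stage: test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
    - |
      if git diff --name-only FETCH_HEAD...HEAD | grep -qx "NEWS"; then
        echo "NEWS entry found"
      else
        echo "Please add a NEWS entry describing this change"
        exit 1
      fi
```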

Additional benefits of having all core releases done by a central authority are that we get people to update the release notes every time something changes; and that we can sign all releases with a GNOME key that downstreams can rely on.

Embracing special interest groups

But it’s still not enough.

Especially when it comes to the application development platform, we have already a bunch of components with an informal scheme of shared responsibility. Why not make that scheme official?

Let’s create the SDK special interest group; take all the developers for the base libraries that are part of GNOME—GLib, Pango, GTK, libadwaita—and formalise the group of people that currently does things like development, review, bug fixing, and documentation writing. Everyone in the group should feel empowered to work on all the projects that belong to that group. We already do this informally, except we end up deferring to somebody who is usually too busy to cover every single module.

Other special interest groups should be formed around the desktop, the core applications, the development tools, the OS integration, the accessibility stack, the local search engine, the system settings.

Adding more people to these groups is not going to be complicated, or introduce instability, because the responsibility is now shared; we would not be taking somebody that is already overworked, or even potentially new to the community, and plopping them into the hot seat, ready for a burnout.

Each special interest group would have a representative in the steering group, alongside teams like documentation, design, and localisation, thus ensuring that each aspect of the project’s technical direction is included in any discussion. Each special interest group could also have additional sub-groups, like a web services group in the system settings group; or a networking group in the OS integration group.

What happens if I say no?

I get it. You like being in charge. You want to be the one calling the shots. You feel responsible for your project, and you don’t want other people to tell you what to do.

If this is how you feel, then there’s nothing wrong with parting ways with the GNOME project.

GNOME depends on a ton of projects hosted outside GNOME’s own infrastructure, and we communicate with people maintaining those projects every day. It’s 2025, not 1997: there’s no shortage of code hosting services in the world; we don’t need to host every project on GNOME infrastructure.

If you want to play with the other children, if you want to be part of GNOME, you get to play with a shared set of rules; and that means sharing all the toys, and not hoarding them for yourself.

Civil service

What we really want GNOME to be is a group of people working together. We already are, somewhat, but we can be better at it. We don’t want rule and design by committee, but we do need structure, and we need that structure to be based on expertise; to have distinct spheres of competence; to have continuity across time; and to be based on rules. We need something flexible, to take into account the needs of GNOME as a project, and be capable of growing in complexity so that nobody can be singled out, brigaded, or burnt to a cinder on the sacrificial altar.

Our days of passing out in the middle of the dance floor are long gone. We might not all be old—actually, I’m fairly sure we aren’t—but GNOME has long ceased to be something we can throw together at the last minute just because somebody assumed the mantle of a protean ruler, and managed to involve themselves with every single project until they are the literal embodiment of an autocratic force capable of dragging everybody else towards a goal, until they burn out and have to leave for their own sake.

We can do better than this. We must do better.

To sum up

  • Stop releasing individual projects, and let the release team do it when needed.
  • Create teams to manage areas of interest, instead of single projects.
  • Create a steering group from representatives of those teams.
  • Every change that affects one or more teams has to be discussed and documented in a public setting among contributors, and then published for future reference.

None of this should be controversial because, outside of the publishing bit, it’s how we are already doing things. This proposal aims at making it official so that people can actually rely on it, instead of having to divine the process out of thin air.

The next steps

We’re close to the GNOME 49 release, now that GUADEC 2025 has ended, so people are busy working on tagging releases, fixing bugs, and the work on the release notes has started. Nevertheless, we can already start planning for an implementation of a new governance model for GNOME for the next cycle.

First of all, we need to create teams and special interest groups. We don’t have a formal process for that, so this is also a great chance to introduce the change proposal process as a mechanism for structuring the community, just like the Python and Rust communities do. Teams will need their own space for discussing issues and sharing the load. The first team I’d like to start is an “introspection and language bindings” group, for all bindings hosted on GNOME infrastructure; it would act as a point of reference for all decisions involving projects that consume the GNOME software development platform through its machine-readable ABI description. Another group I’d like to create is an editorial group for the developer and user documentation; documentation benefits from a consistent editorial voice, while the process of writing documentation should be open to everybody in the community.

A very real issue that was raised during GUADEC is bootstrapping the steering committee: who gets to be on it, what its remit is, and how it works. There are options, but if we want the steering committee to be a representation of the technical expertise of the GNOME community, it also has to be established by that very same community; in this sense, the board of directors, as representatives of the community, could work on defining the powers and composition of this committee.

There are many more issues we are going to face, but I think we can start from these and evaluate our own version of a technical governance model that works for GNOME, and that can grow with the project. In the next couple of weeks I’ll start publishing drafts for team governance and the power/composition/procedure of the steering committee, mainly for iteration and comments.

Tobias Bernard: GUADEC 2025

Planet GNOME - Dje, 03/08/2025 - 12:27md

Last week was this year’s GUADEC, the first ever in Italy! Here are a few impressions.

Local-First

One of my main focus areas this year was local-first, since that’s what we’re working on right now with the Reflection project (see the previous blog post). Together with Julian and Andreas we did two lightning talks (one on local-first generally, and one on Reflection in particular), and two BoF sessions.

Local-First BoF

At the BoFs we did a bit of Reflection testing, and reached a new record of people simultaneously trying the app.

This also uncovered a padding bug in the users popover :)

Andreas also explained the p2panda stack in detail using a new diagram we made a few weeks back, which visualizes how the various components fit together in a real app.

p2panda stack diagram

We also discussed some longer-term plans, particularly around having a system-level sync service. The motivation for this is twofold: We want to make it as easy as possible for app developers to add sync to their app. It’s never going to be “free”, but if we can at least abstract some of this in a useful way that’s a big win for developer experience. More importantly though, from a security/privacy point of view we really don’t want every app to have unrestricted access to network, Bluetooth, etc. which would be required if every app does its own p2p sync.

One option being discussed is taking the networking part of p2panda (including iroh for p2p networking) and making it a portal API which apps can use to talk to other instances of themselves on other devices.

Another idea was a higher-level portal that works more like a file “share” system that can sync arbitrary files by just attaching the sync context to files as xattrs and having a centralized service handle all the syncing. This would have the advantage of not requiring special UI in apps, just a portal and some integration in Files. Real-time collaboration would of course not be possible without actual app integration, but for many use cases that’s not needed anyway, so perhaps we could have both a high- and low-level API to cover different scenarios?
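
As a rough sketch of what the xattr idea could look like from an app’s point of view (the attribute name and the JSON payload are invented for this example; any real sync service would define its own schema):

```python
import json
import os

# Invented attribute name; the user.* namespace is writable by regular users on Linux.
SYNC_ATTR = "user.sync.context"

def mark_for_sync(path: str, document_id: str, peers: list[str]) -> None:
    """Attach a (made-up) sync context to a file as an extended attribute."""
    payload = json.dumps({"document_id": document_id, "peers": peers})
    os.setxattr(path, SYNC_ATTR, payload.encode("utf-8"))

def sync_context(path: str) -> dict | None:
    """Return the sync context if the file is marked for syncing, else None."""
    try:
        return json.loads(os.getxattr(path, SYNC_ATTR))
    except OSError:
        return None

# A centralized service could then watch shared folders, pick up files that
# carry the attribute, and hand them to the p2p layer without any app-side UI.
```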

There are still a lot of open questions here, but it’s cool to see how things get a little bit more concrete every time :)

If you’re interested in the details, check out the full notes from both BoF sessions.

Design

Jakub and I gave the traditional design team talk – a bit underprepared and last-minute (thanks Jakub for doing most of the heavy lifting), but it was cool to see in retrospect how much we got done in the past year despite how much energy unfortunately went into unrelated things. The all-new slate of websites is especially cool after so many years of gnome.org et al looking very stale. You can find the slides here.

Jakub giving the design talk

Inspired by the Summer of GNOME OS challenge many of us are doing, we worked on concepts for a new app to make testing sysexts of merge requests (and nightly Flatpaks) easier. The working title is “Test Ride” (a more sustainable version of Apple’s “Test Flight” :P) and we had fun drawing bicycles for the icon.

Test Ride app mockup

Jakub and I also worked on new designs for Georges’ presentation app Spiel (which is being rebranded to “Podium” to avoid the name clash with the a11y project). The idea is to make the app more resilient and data more future-proof, by going with a file-based approach and simple syntax on top of Markdown for (limited) layout customization.

Georges and Jakub discussing Podium designs
Miscellaneous
  • There was a lot of energy and excitement around GNOME OS. It feels like we’ve turned a corner, finally leaving the “science experiment” stage and moving towards “daily-drivable beta”.
  • I was very happy that the community appreciation award went to Alice Mikhaylenko this year. The impact her libadwaita work has had on the growth of the app ecosystem over the past 5 years can not be overstated. Not only did she build dozens of useful and beautiful new adaptive widgets, she also has a great sense for designing APIs in a way that will get people to adopt them, which is no small thing. Kudos, very well deserved!
  • While some of the Foundation conflicts of the past year remain unresolved, I was happy to see that Steven’s and the board’s plans are going in the right direction.
Brescia

The conference was really well-organized (thanks to Pietro and the local team!), and the venue and city of Brescia had a number of advantages that were not always present at previous GUADECs:

  • The city center is small and walkable, and everyone was staying relatively close by
  • The university is 20 min by metro from the city center, so it didn’t feel like a huge ordeal to go back and forth
  • Multiple vegan lunch options within a few minutes walk from the university
  • Lots of tables (with electrical outlets!) for hacking at the venue
  • Lots of nice places for dinner/drinks outdoors in the city center
  • Many dope ice cream places
Piazza della Loggia at sunset

A few (minor) points that could be improved next time:

  • The timetable started veeery early every day, which contributed to a general lack of sleep. Realistically people are not going to sleep before 02:00, so starting the program at 09:00 is just too early. My experience from multi-day events in Berlin is that 12:00 is a good time to start if you want everyone to be awake :)
  • The BoFs could have been spread out a bit more over the two days, there were slots with three parallel ones and times with nothing on the program.
  • The venue closing at 19:00 is not ideal when people are in the zone hacking. Doesn’t have to be all night, but the option to hack until after dinner (e.g. 22:00) would be nice.
  • Since the conference is a week long, accommodation can get a bit expensive, which is not ideal since most people are paying for their own travel and accommodation nowadays. It’d have been great if there was a more affordable option for accommodation, e.g. at student dorms, like at previous GUADECs.
  • A frequent topic was how it’s not ideal to have everyone be traveling and mostly unavailable for reviews a week before feature freeze. It’s also not ideal because any plans you make at GUADEC are not going to make it into the September release, but will have to wait for March. What if the conference was closer to the beginning of the cycle, e.g. in May or June?

A few more random photos:

Matthias showing us fancy new dynamic icon stuff
Dinner on the first night feat. yours truly, Robert, Jordan, Antonio, Maximiliano, Sam, Javier, Julian, Adrian, Markus, Adrien, and Andreas
Adrian and Javier having an ad-hoc Buildstream BoF at the pizzeria
Robert and Maximiliano hacking on Snapshot

Daiki Ueno: Optimizing CI resource usage in upstream projects

Planet GNOME - Dje, 03/08/2025 - 4:22pd

At GnuTLS, our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source Program subscription. While we are still hoping that this limitation is temporary, this meant our available CI/CD resources became considerably lower. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.

This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.

CI on every PR: a best practice, but not cheap

While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.

The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.

This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.

Reducing CI running time

Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice project-specific optimization is needed. We focused on three key areas to achieve faster pipelines:

  • Tiering tests
  • Layering container images
  • De-duplicating build artifacts
Tiering tests

Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.
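
A minimal sketch of what such tiering can look like in GitLab CI rules (job names, images, and scripts are placeholders rather than the actual GnuTLS configuration):

```yaml
# Expensive tier: runs only on scheduled pipelines, or manually from the web UI.
fuzzing:
  image: $CI_REGISTRY_IMAGE/fuzz:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
    - if: '$CI_PIPELINE_SOURCE == "web"'
      when: manual
  script:
    - ./run-fuzzers.sh

# Cheap tier: runs on every merge request and on the default branch.
unit-tests:
  image: $CI_REGISTRY_IMAGE/build:latest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    - ./configure && make -j$(nproc) check
```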

Layering container images

The tiering of tests gives us an idea of which CI images are most commonly used in the pipeline. For those common CI images, we transitioned to using a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim. This reduced the initial download size and the overall footprint of our build environment.

For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.
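
Sketched as two hypothetical Containerfiles (the package lists are illustrative, not the ones GnuTLS actually installs):

```dockerfile
# ci-base/Containerfile: minimal image shared by the common build jobs.
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       gcc make autoconf automake libtool pkg-config \
    && rm -rf /var/lib/apt/lists/*

# ci-docs/Containerfile: documentation tooling layered on top of the base,
# built and pulled only by the jobs that actually need it.
FROM ci-base:latest
RUN apt-get update \
    && apt-get install -y --no-install-recommends texinfo doxygen \
    && rm -rf /var/lib/apt/lists/*
```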

De-duplicating build artifacts

Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, oftentimes resulting in almost identical results.

We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.

Of course, this approach cuts both ways: while it simplifies the compilation process, it can increase code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which can weaken overall security, should still be possible to switch off at compile time.

Another caveat is that some compilation options are inherently incompatible with each other. One example is that thread sanitizer cannot be enabled with address sanitizer at the same time. In such cases a separate build artifact is still needed.

The impact: tangible results

The efforts put into optimizing our GitLab CI configuration yielded significant benefits:

  • The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
  • 9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.

By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.

For further information on this, please consult the actual changes.

Next steps

While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:

  • Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO. This would allow shared access to build artifacts, reducing bandwidth consumption and build times (a configuration sketch follows this list).
  • Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun, but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as separate processes, has the server bind a unique port to which the client connects, and instructs the client to initiate a certain event through a control channel. This kind of test can fail in many ways regardless of the test itself, e.g., the port might already be in use by another test. Therefore, rewriting tests so that they do not require such a complex setup would be a good first step (see the port-allocation sketch after this list).
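
For the distributed-runner idea, the GitLab runner already supports an S3-compatible shared cache, so pointing it at a MinIO instance is mostly configuration. A hedged sketch of the relevant config.toml fragment (host, bucket, and credentials are placeholders):

```toml
# Fragment of /etc/gitlab-runner/config.toml on each VPS runner (values are placeholders).
[[runners]]
  name = "vps-runner-1"
  executor = "docker"
  [runners.cache]
    Type = "s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "minio.example.org:9000"
      AccessKey = "RUNNER_CACHE_ACCESS_KEY"
      SecretKey = "RUNNER_CACHE_SECRET_KEY"
      BucketName = "gitlab-runner-cache"
      Insecure = false
```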
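
And for the port-collision problem specifically, letting the kernel pick an ephemeral port already removes the most common failure mode. A generic sketch in Python (not the actual GnuTLS test harness; the server and client binaries are hypothetical):

```python
import socket
import subprocess
import sys

def free_port() -> int:
    """Ask the kernel for an unused TCP port instead of hard-coding one."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.bind(("127.0.0.1", 0))
        return sock.getsockname()[1]

# Hypothetical server/client pair; a real test would also wait for the server
# to signal readiness instead of relying on timing.
port = free_port()
server = subprocess.Popen(["./test-server", "--port", str(port)])
try:
    result = subprocess.run(["./test-client", "--port", str(port)], timeout=30)
    sys.exit(result.returncode)
finally:
    server.terminate()
    server.wait()

# Note: a small race remains between closing the probe socket and the server
# binding the port; passing the already-bound socket to the server removes it.
```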

Jonathan Blandford: GUADEC 2025: Thoughts and Reflections

Planet GNOME - Sht, 02/08/2025 - 9:35md

Another year, another GUADEC. This was the 25th anniversary of the first GUADEC, and the 25th one I’ve gone to. Although there have been multiple bids for Italy during the past quarter century, this was the first successful one. It was definitely worth the wait, as it was one of the best GUADECs in recent memory.

GUADEC’s 25th anniversary cake

This was an extremely smooth conference — way smoother than previous years. The staff and volunteers really came through in a big way and did heroic work! I watched Deepesha, Asmit, Aryan, Maria, Zana, Kristi, Anisa, and especially Pietro all running around making this conference happen. I’m super grateful for their continued hard work in the project. GNOME couldn’t happen without their effort.

La GNOME vita
Favorite Talks

I commented on some talks as they happened (Day 1, Day 2, Day 3). I could only attend one track at a time so missed a lot of them, but the talks I saw were fabulous. They were much higher quality than usual this year, and I’m really impressed at this community’s creativity and knowledge. As I said, it really was a strong conference.

I did an informal poll of assorted attendees I ran into on the streets of Brescia on the last night, asking what their favorite talks were. Here are the results:

Whither Maintainers
  • Emmanuele’s talk on “Getting Things Done In GNOME”: This talk clearly struck a chord amongst attendees. He proposed a path forward on the technical governance of the project. It also had a “Wither Maintainers” slide that led to a lot of great conversations.
  • George’s talk on streaming: This was very personal, very brave, and extremely inspiring. I left the talk wanting to try my hand at live streaming my coding sessions, and I’m not the only one.
  • The poop talk, by Niels: This was a very entertaining lightning talk with a really important message at the end.
  • Enhancing Screen Reader Functionality in Modern Gnome by Lukas: Unfortunately, I was at the other track when this one happened so I don’t know much about it. I’ll have to go back and watch it! That being said, it was so inspiring to see how many people at GUADEC were working on accessibility, and how much progress has been made across the board. I’m in awe of everyone that works in this space.
Honorable mentions

In reality, there were many amazing talks beyond the ones I listed. I highly recommend you go back and see them. I know I’m planning on it!

Crosswords at GUADEC
Refactoring gnome-crosswords

We didn’t have a Crosswords update talk this cycle. However we still had some appearances worth mentioning:

  • Federico gave a talk about how to use unidirectional programming to add tests to your application, and used Crosswords as his example. This is probably the 5th time one of the two of us has talked about this topic. This was the best one to date, though we keep giving it because we don’t think we’ve gotten the explanation right. It’s a complex architectural change which has a lot of nuance, and is hard for us to explain succinctly. Nevertheless, we keep trying, as we see how this could lead to a big revolution in the quality of GNOME applications. Crosswords is pushing 80KLOC, and this architecture is the only thing allowing us to keep it at a reasonable quality.
  • People keep thinking this is “just” MVC, or a minor variation thereof, but it’s different enough that it requires a new mindset, a disciplined approach, and a good data model. As an added twist, Federico sent an MR to GNOME Calendar to add initial unidirectional support to that application. If crosswords are too obscure a subject for you, then maybe the calendar section of the talk will help you understand it.
  • I gave a lightning talk about some of the awesome work that our GSoC (and prospective GSoC) students are doing.
  • I gave a BoF on Words. It was lightly attended, but led to a good conversation with the always-helpful Martin.
  • Finally, I sat down with Tobias to do a UX review of the crossword game, with an eye to getting it into GNOME Circle. This has been in the works for a long time and I’m really grateful for the chance to do it in person. We identified many papercuts to fix of course, but Tobias was also able to provide a suggestion to improve a long-standing problem with the interface. We sketched out a potential redesign that I’m really happy with. I hope the Circle team is able to continue to do reviews across the GNOME ecosystem as it provides so much value.
Personal

A final comment: I’m reminded again of how much of a family the GNOME community is. For personal reasons, it was a very tough GUADEC for Zana and me. It was objectively a fabulous GUADEC and we really wanted to enjoy it, but couldn’t. We’re humbled at the support and love this community is capable of. Thank you.
