Feed aggregator

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers'

Slashdot - Yesterday, 09/11/2025 - 12:34 AM
For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models.

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..."

Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free.

Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said.

Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls. "In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...


Scientists Edit Gene in 15 Patients That May Permanently Reduce High Cholesterol

Slashdot - Sat, 08/11/2025 - 11:34 PM
A CRISPR-based drug given to study participants by infusion is raising hopes for a much easier way to lower cholesterol, reports CNN:

With a snip of a gene, doctors may one day permanently lower dangerously high cholesterol, possibly removing the need for medication, according to a new pilot study published Saturday in the New England Journal of Medicine. The study was extremely small — only 15 patients with severe disease — and was meant to test the safety of a new medication delivered by CRISPR-Cas9, a biological sort of scissor which cuts a targeted gene to modify it or turn it on or off. Preliminary results, however, showed nearly a 50% reduction in low-density lipoprotein, or LDL, the "bad" cholesterol which plays a major role in heart disease — the No. 1 killer of adults in the United States and worldwide.

The study, which will be presented Saturday at the American Heart Association Scientific Sessions in New Orleans, also found an average 55% reduction in triglycerides, a different type of fat in the blood that is also linked to an increased risk of cardiovascular disease. "We hope this is a permanent solution, where younger people with severe disease can undergo a 'one and done' gene therapy and have reduced LDL and triglycerides for the rest of their lives," said senior study author Dr. Steven Nissen, chief academic officer of the Sydell and Arnold Miller Family Heart, Vascular & Thoracic Institute at Cleveland Clinic in Ohio....

Today, cardiologists want people with existing heart disease or those born with a predisposition for hard-to-control cholesterol to lower their LDL well below 100, which is the average in the US, said Dr. Pradeep Natarajan, director of preventive cardiology at Massachusetts General Hospital and associate professor of medicine at Harvard Medical School in Boston... People with a nonfunctioning ANGPTL3 gene — which Natarajan says applies to about 1 in 250 people in the US — have lifelong levels of low LDL cholesterol and triglycerides without any apparent negative consequences. They also have exceedingly low or no risk for cardiovascular disease. "It's a naturally occurring mutation that's protective against cardiovascular disease," said Nissen, who holds the Lewis and Patricia Dickey Chair in Cardiovascular Medicine at Cleveland Clinic. "And now that CRISPR is here, we have the ability to change other people's genes so they too can have this protection."

Phase 2 clinical trials will begin soon, quickly followed by Phase 3 trials, which are designed to show the effect of the drug on a larger population, Nissen said. And CNN quotes Nissen as saying "We hope to do all this by the end of next year. We're moving very fast because this is a huge unmet medical need — millions of people have these disorders and many of them are not on treatment or have stopped treatment for whatever reason."


Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests

Slashdot - Sat, 08/11/2025 - 10:34 PM
A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine:

Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer...

Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements...

The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes.

Thanks to Slashdot reader Joe_Dragon for sharing the article.


Chan Zuckerberg Initiative Shifts Bulk of Philanthropy, 'Going All In on AI-Powered Biology'

Slashdot - Sat, 08/11/2025 - 9:34 PM
The Associated Press reports that "For the past decade, Dr. Priscilla Chan and her husband Mark Zuckerberg have focused part of their philanthropy on a lofty goal — 'to cure, prevent or manage all disease' — if not in their lifetime, then in their children's." During that decade they also funded other initiatives (including underprivileged schools and immigration reform), according to the article. But there's a change coming:

Now, the billionaire couple is shifting the bulk of their philanthropic resources to Biohub, the pair's science organization, and focusing on using artificial intelligence to accelerate scientific discovery. The idea is to develop virtual, AI-based cell models to understand how they work in the human body, study inflammation and use AI to "harness the immune system" for disease detection, prevention and treatment.

"I feel like the science work that we've done, the Biohub model in particular, has been the most impactful thing that we have done. So we want to really double down on that. Biohub is going to be the main focus of our philanthropy going forward," Zuckerberg said Wednesday evening at an event at the Biohub Imaging Institute in Redwood City, California.... Chan and Zuckerberg have pledged 99% of their lifetime wealth — from shares of Meta Platforms, where Zuckerberg is CEO — toward these efforts...

On Thursday, Chan and Zuckerberg also announced that Biohub has hired the team at EvolutionaryScale, an AI research lab that has created large-scale AI systems for the life sciences... Biohub's ambition for the next years and decades is to create virtual cell systems that would not have been possible without recent advances in AI. Similar to how large language models learn from vast databases of digital books, online writings and other media, its researchers and scientists are working toward building virtual systems that serve as digital representations of human physiology on all levels, such as molecular, cellular or genome. As it is open source — free and publicly available — scientists can then conduct virtual experiments on a scale not possible in physical laboratories.

"We will continue the model we've pioneered of bringing together scientists and engineers in our own state-of-the-art labs to build tools that advance the field," according to Thursday's blog post. "We'll then use those tools to generate new data sets for training new biological AI models to create virtual cells and immune systems and engineer our cells to detect and treat disease....

"We have also established the first large-scale GPU cluster for biological research, as well as the largest datasets around human cell types. This collection of resources does not exist anywhere else."


World's Largest Cargo Sailboat Completes Historic First Atlantic Crossing

Slashdot - Sat, 08/11/2025 - 8:34 PM
Long-time Slashdot reader AmiMoJo shared this report from Marine Insight: The world's largest cargo sailboat, Neoliner Origin, completed its first transatlantic voyage on 30 October despite damage to one of its sails during the journey. The 136-metre-long vessel had to rely partly on its auxiliary motor and its remaining sail after the aft sail was damaged in a storm shortly after departure... Neoline, the company behind the project, said the damage reduced the vessel's ability to perform fully on wind power... The Neoliner Origin is designed to reduce greenhouse gas emissions by 80 to 90 percent compared to conventional diesel-powered cargo ships. According to the United Nations Conference on Trade and Development (UNCTAD), global shipping produces about 3 percent of worldwide greenhouse gas emissions... The ship can carry up to 5,300 tonnes of cargo, including containers, vehicles, machinery, and specialised goods. It arrived in Baltimore carrying Renault vehicles, French liqueurs, machinery, and other products. The Neoliner Origin is scheduled to make monthly voyages between Europe and North America, maintaining a commercial cruising speed of around 11 knots.


Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI

Slashdot - Sat, 08/11/2025 - 7:34 PM
"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters. Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems. But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S. Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...." A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. 
In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document. A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say. Thanks to long-time Slashdot reader schwit1 for sharing the article.


Jordan Petridis: DHH and Omarchy: Midlife crisis

Planet GNOME - Thu, 06/11/2025 - 1:12 AM

A couple of weeks ago, Cloudflare announced it would be sponsoring some Open Source projects. Throwing money at the pet projects of random techbros would hardly be news, but there was a certain vibe behind them and the people leading them.

In an unexpected turn of events, the millionaire receiving money from the billion-dollar company thought it would be important to devote a whole blog post to a random brokeboy from Athens who had an opinion on the Internet.

I was astonished to find the blog post. Now that I've moved from normal stalkers to millionaire stalkers, is it a sign that I've made it? Have I become such a menace? But more importantly: who the hell even is this guy?

D-H-Who?

When I was painting with crayons in a deteriorating kindergarten somewhere in Greece, DHH, David Heinemeier Hansson, was busy dumping Ruby on Rails on the world and becoming a niche tech celebrity. His street cred for releasing Ruby on Rails would later be replaced by his writing on remote work, most famously authoring “Remote: Office Not Required”, a book based on his own company, 37signals.

That cultural cachet would go out the window in 2022, when he got into hot water with his own employees after an internal review process concluded that 37signals had been less than stellar when it came to handling race and diversity. Said review process culminated in a clash: the employees were interested in exploring the topic further, to which DHH responded, “You are the person you are complaining about” (meaning: you, pointing out a problem, are the problem).

No politics at work

This incident led the two founders of 37signals to the executive decision to forbid any kind of “societal and political discussions” inside the company, which, predictably, led to a third of the company resigning in protest. This was a massive blow to 37signals. The company was famous for being extremely selective when hiring, as well as affording employees great benefits. Suddenly having a third of the workforce resign over disagreement with management sent a far more powerful message than anything they could have imagined.

It would become the starting point of the downward, radicalizing spiral and the extended, very public crashout DHH would go through in the coming years.

Starting your own conference so you can never be banned from it

Subsequently, DHH was uninvited from keynoting RailsConf, on account of everyone being grossed out by his handling of the matter, and in solidarity with the community members and the employees who quit in protest.

That, in turn, would lead to the creation of the Rails Foundation and the launch of Rails World, a new conference about Rails that 100%-swear-to-god was not just about DHH having his own conference where he could keynote and never be banned.

In the following years, DHH would go on to explore and express the full spectrum of “down the alt-right pipeline” opinions.

Omarchy

You either log off a hero, or you see yourself create another Linux distribution. Having failed the first part, DHH has been pouring his energy into creating a new project, while letting everyone know how much he prefers that to going to therapy. Thus Omarchy was born: a set of copy-pasted window manager and Vim configs turned distro, and one of the two projects that Cloudflare will be proudly funding shortly. The only possible option for the compositor would be Hyprland, and even though it's Wayland (bad!), it's one of the good-non-woke ones. In a similar tone, the project website features the tight integration of Omarchy with SuperGrok.

Rubygems

On a parallel track, the entire Ruby community more or less collapsed in the last two months. Long story short: one of the major Ruby Central sponsors, Sidekiq, pulled its funding after DHH was invited to speak at RailsConf 2025. Shopify, where DHH sits on the board of directors, was quick to save the day and match the lost funding. Coincidentally, an (alleged) takeover of key parts of the Ruby infrastructure was carried out by Ruby Central and placed under the control of Shopify in the following weeks.

This story is ridiculous, and the entire Ruby community is imploding in its wake. There's an excellent write-up of the story so far here.

On a similar note, and at the same time, we also find DHH drooling over Off-brand Peter Thiel and calling for an Anduril takeover of the Nix community in order to purge all the wokes.

On Framework

At the same time, Framework had been promoting Omarchy on their social media accounts for a good while. And DHH, in turn, has been posting about how great Framework hardware is and how the Framework CEO is contributing to his Arch Linux reskin. On October 8th, Framework announced its sponsorship of the Hyprland project, following 37signals doing the same thing a couple of weeks earlier. On the same day, they made another post promoting Omarchy yet again. This caused a huge backlash and an overall PR nightmare, with the apex being a forum thread with over 1700 comments so far.

The first reply in the forum post comes from Nirav, Framework's CEO, with a very questionable choice of words:

We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individual’s or organization’s beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software.

I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

Mentioning a “big tent” twice as the official policy and response to complaints about supporting Fascist and Racist shitheads is nothing short of digging a hole for yourself so deep that it reemerges on another continent.

Later on, Nirav would mention that they were finalizing sponsorship of the GNOME Foundation (12k/year) and KDE e.V. (10k/year). On the linked page you can also find a listing for Rails World (DHH's personal conference) at a one-time payment of 24k dollars.

There has not been an update since, and at no point have they addressed their support of and collaboration with DHH. Can't lose the money cow and free Twitter clout, I guess.

While I personally would like to see the donation rejected, I am not involved in the ongoing discussion on the GNOME Foundation side, nor with the Foundation itself. What I can say is that I, along with others from the GNOME OS team, was involved in initial discussions with Framework about future collaborations and hardware support. GNOME OS, much like the GNOME Flatpak runtime, is very useful as a reference point for identifying whether a bug, in hardware or software, is distro-specific or not.

It’s been a month since the initial debacle with Framework. Regardless of what the GNOME Foundation plans on doing, the GNOME OS team certainly does not feel comfortable in further collaboration given how they have handled the situation so far. It’s sad because the people working there understand the issue, but this does not seem to be a trait shared by the management.

A software midlife crisis

During all this, DHH decided that his attention must be devoted to getting into a mouth-off with a Greek kid who called him a Nazi. Since this is not violence (see the “Words are not violence” essay), he decided to respond in kind, by calling for violence against me (see the “Words are violence” essay).

To anyone who knows a nerd or two over the age of 35, all of the above is unsurprising. This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun.

Here’s a dude who barely had any time to confront the world before falling into an infinite money glitch in the form of Ruby on Rails, Jeff Bezos throwing him crazy money, Apple bundling his software as a highlighted feature, becoming a “new work” celebrity and Silicon Valley “Guru”. Is it any surprise that such a person later would find the most minuscule kind of opposition as an all-out attack on his self-image?

DHH has never had the “best” opinions on a range of things, and they have been dutifully documented by others, but neither have many other developers who are also ignorant of topics outside of software. Being insecure about your hairline and masculine aesthetic to the point of adopting the Charles Manson haircut to cover your balding is one thing. However, it is entirely different to become a drop-shipped version of Elon, tweeting all day and stopping only to write opinion pieces that come off as exercises in proving others wrong rather than original thoughts.

Case in point: DHH recently wrote about “men who'd prefer to feel useful over being listened to”. The piece is unironically titled “Building competency is better than therapy”. It is an insane read, and I'll speculate that it feels as if someone whom DHH can't outright dismiss suggested he go to therapy. It's a very “I'll show you off in front of my audience” kind of text.

Add to that a three-year speedrun decrying the “theocracy of DEI” and the seemingly authoritarian powers of “the wokes”, all coincidentally starting after he could not get over his employees disagreeing with him on racial sensitivities.

How can someone suggest his workers read Ta-Nehisi Coates's "Between the World and Me" and Michelle Alexander's "The New Jim Crow" in the aftermath of George Floyd's killing and the BLM protests, while a couple of months later writing salivating blogposts about the EDL eugenics rally in England and giving the highest possible praise to Tommy Robinson?

Can these people be redeemed?

It is certainly not going to help that niche celebrities, like DHH, still hold clout and financial power and are able to spout the worst possible takes without any backlash because of their position.

A bunch of Ruby developers recently started a petition to get DHH distanced from the community, and it didn’t go far before getting brigaded by the worst people you didn’t need to know existed. This of course was amplified to oblivion by DHH and a bunch of sycophants chasing the clout provided by being retweeted by DHH. It would shortly be followed by yet another “I’m never wrong” piece.

Is there any chance for these people, shielded by their well-paying jobs, whose exclusively occupational media diet and surrounding stimuli all happen to reinforce the default worldview?

I think there is hope, but it demands that more voices in tech spaces speak up about how having empathy for others or valuing diversity is not some grand conspiracy, but rather an enrichment of our lives and spaces. This comes hand in hand with firmly shutting down concern trolling and ridiculous “extreme centrist” takes where someone is expected to find common ground with others advocating for their extermination.

One could argue that the true spirit of FLOSS, which attracted many of the current midlife-crisis developers in the first place, is about diversity and empathy for the varied circumstances and opinions that enrich our space.

Conclusion

I do not know if his heart is filled with hate or if he is incredibly lost, but it makes little difference since this is his output in the world.

David, when you read this I hope it will be a wake-up call. It’s not too late, you only need to go offline and let people help you. Stop the pathetic TemuElon speedrun and go take care of your kids. Drop the anti-woke culture wars and pick up a Ta-Nehisi Coates book again.

To everyone else: Push back against their vile and misanthropic rhetoric at every turn. Don’t let their poisonous roots fester into the ground. There is no place for their hate here. Don’t let them find comfort and spew their vomit in any public space.

Crush Fascism. Free Palestine.

Sebastian Wick: Flatpak Happenings

Planet GNOME - Tue, 04/11/2025 - 10:28 PM

Yesterday I released Flatpak 1.17.0. It is the first version of the unstable 1.17 series and the first release in six months. A few things didn't make it into this release, which is why I'm planning to do another unstable release rather soon, and then a stable release before the end of the year.

Back at LAS this year I talked about the Future of Flatpak, and I started with the grim situation the project found itself in: Flatpak was stagnant, the maintainers had left the project, and PRs didn't get reviewed.

Some good news: things are a bit better now. I have taken over maintenance, Alex Larsson and Owen Taylor managed to set aside enough time to make this happen, and Boudhayan Bhattacharya (bbhtt) and Adrian Vovk also got more involved. The backlog has been reduced considerably, and new PRs get reviewed in a reasonable time frame.

I also listed a number of improvements that we had planned, and we made progress on most of them:

  • It is now possible to define which Flatpak apps shall be pre-installed on a system, and Flatpak will automatically install and uninstall things accordingly. Our friends at Aurora and Bluefin already use this to ship core apps from Flathub on their bootc-based systems (shout-out to Jorge Castro).
  • The OCI support in Flatpak has been enhanced to support pre-installing from OCI images and remotes, which will be used in RHEL 10.
  • We merged the backwards-compatible permission system. This allows apps to use new, more restrictive permissions without breaking compatibility when the app runs on older systems. Specifically, access to input devices such as gamepads and access to the USB portal can now be granted in this way. It will also help us transition to PipeWire.
  • We have up-to-date docs for libflatpak again

Besides the changes directly in Flatpak, there are a lot of other things happening around the wider ecosystem:

  • bbhtt released a new version of flatpak-builder
  • Enhanced License Compliance Tools for Flathub
  • Adrian and I have made plans for a service which allows querying running app instances (systemd-appd). This provides a new way of authenticating Flatpak instances and is a prerequisite for nested sandboxing, PipeWire support, and getting rid of the D-Bus proxy. My previous blog post went into a few more details.
  • Our friends at KDE have started looking into the XDG Intents spec, which will hopefully allow us to implement deep-linking, thumbnailing in Flatpak apps, and other interesting features
  • Adrian made progress on the session save/restore Portal
  • Some rather big refactoring work in the Portals frontend, and GDBus and libdex integration work which will reduce the complexity of asynchronous D-Bus

Something I also talked about at my LAS talk is the idea of a Flatpak-Next project. People got excited about this, but I feel like I have to make something very clear:

If we redid Flatpak now, it would not be significantly better than the current Flatpak! You could still not do nested sandboxing, you would still need a D-Bus proxy, you would still have a complex permission system, and so on.

Those problems require work outside of Flatpak, but have to integrate with Flatpak and Flatpak-Next in the future. Some of the things we will be doing include:

  • Work on the systemd-appd concept
  • Make varlink a feasible alternative to D-Bus
  • D-Bus filtering in the D-Bus daemons
  • Network sandboxing via pasta
  • PipeWire policy for sandboxes
  • New Portals

So if you’re excited about Flatpak-Next, help us to improve the Flatpak ecosystem and make Flatpak-Next more feasible!

Rosanna Yuen: Farewell to these, but not adieu…

Planet GNOME - Tue, 04/11/2025 - 6:44 PM

– from Farewell to Malta
by Lord Byron

Friday was my last day at the GNOME Foundation. I was informed by the Board a couple of weeks ago that my position had been eliminated due to budgetary shortfalls. Obviously, I am sad that the Board felt this decision was necessary. That being said, I wanted to write a little note to say goodbye and share some good memories.

It has been almost exactly twenty years since I started helping out at the GNOME Foundation. (My history with the GNOME Project is even older; I had code in GNOME 0.13, released in March 1998.) Our first Executive Director had just left, and my husband was Board Treasurer at the time. He inherited a large pile of paperwork and an unhappy IRS. I volunteered to help him figure out how to put the pieces together and get our paperwork in order to get the Foundation back in good standing. After several months of this, the Board offered to pay me to keep it organized.

Early on, I used to joke that my title should have been “General Dogsbody” as I often needed to help cover all the little things that needed doing. Over time, my responsibilities within the Foundation grew, but the sentiment remained. I was often responsible for making sure everything that needed doing was done, while putting in place many of the processes and procedures the Foundation uses to keep running.

People often underestimate how much hard work it is to keep an international non-profit like the GNOME Foundation going. There is a ton of minutiae to be dealt with, from ever-changing regulations, requirements, and community needs. Even simple-sounding things like paying people are surprisingly hard the moment they cross borders. It requires dealing with different payment systems, bank rules, currencies, export regulations, and tax regimes. However, it is a necessary quagmire we have to navigate, as it is a crucial tool to further the Foundation's mission.

Working a GNOME booth

Over time, I have filled a multitude of different roles and positions (and had four different official titles doing so). I am proud of all the things I have done.

  • I have been the assistant to six different Executive Directors helping them onboard as they’ve started. I’ve been the bookkeeper, accounts receivable, and accounts payable — keeping our books in order, making sure people are paid, and tracking down funds. I’ve been Vice Treasurer helping put together our budgets, and created the financial slides for the Treasurer, Board, and AGM. I spent countless nights for almost a decade keeping our accounts updated in GnuCash. And every year for the past nineteen years I was responsible for making sure our taxes are done and 990 filed to keep our non-profit status secure.
    As someone who has always been deeply entrenched in GNOME’s finances, I have always been a responsible steward, looking for ways to spend money more prudently while enforcing budgets.
  • When the Foundation expanded after the Endless Grants, I had to help make the Foundation scale. I have done the jobs of Human Resources, Recruiter, and Benefits Coordinator, and managed the staff. I made sure the Board, Foundation, and staff are insured and take their legally required training. I have also had to make sure people and contractors are paid, with all the legal formalities taken care of in all the different countries we operate in, so they only have to concern themselves with supporting GNOME's mission.
  • I have had to be the travel coordinator buying tickets for people (and approving community travel). I have also done the jobs of Project Manager, Project Liaison to all our fiscally sponsored projects and subprojects, Shipping, and Receiving. I have been to countless conferences and tradeshows, giving talks and working booths. I have enjoyed meeting so many users and contributors at these events. I even spent many a weekend at the post-office filling out customs forms and shipping out mouse pads, mugs, and t-shirts to donors (back when we tried to do that in-house.) I tended the Foundation mailbox, logging all the checks we get from our donors and schlepping them to the bank.
  • I have served on five GNOME committees providing stability and continuity as volunteers came and went (Travel, Finance, Engagement, Executive, and Code of Conduct). I was on the team that created GNOME’s Code of Conduct, spending countless hours working with community members to help craft the final draft. I am particularly proud of this work, and I believe it has had a positive impact on our community.
  • Over the past year, I have also focused on providing what stability I could to the staff and Foundation, getting us through our second financial review, and started preparing for our first audit planned for next March.

This was all while doing my best to hold to GNOME’s principles, vision, and commitment to free software.

But it is the great people within this community that kept me loyally working with y'all year after year, and the appreciation of the amazing project y'all create that matters. I am grateful to the many community members who volunteer their time so selflessly through the years. Old-timers like Sri and Federico who have been on this journey with me since the very beginning. Other folks I met through the years, like Matthias, Christian, Meg, PTomato, and German. And Marina, whom we all still miss. So many newcomers who add enthusiasm to the community, like Deepesha, Michael, and Aaditya. So many Board members. There are so many more names I could mention; I apologize if your name isn't listed. Please know that I am grateful for what everyone has brought into the community. I have truly been blessed to know you all.

I am also grateful for the folks on staff that have made GNOME such a wonderful place to work through the years. Our former Executive Directors Stormy, Karen, Neil, Holly, and Richard, all of whom have taught me so much. Other staff members that have come and gone through the years, such as Andrea (who is still volunteering), Molly, Caroline, Emmanuele, and Melissa. And, of course, the current staff of Anisa, Bart, and Kristi, in whose hands I know the Foundation will keep thriving.

As I said, my job has always been to make sure things go as smoothly as possible. In my mind, what I do should quiet any waves so that the waves the Foundation makes go into providing the best programming we can — which is why a moment from GUADEC 2015 still pops up in my head.

Picture this: we are all in Gothenburg, Sweden, in line registering for GUADEC. We start chatting, as the line was long. I introduce myself to the person behind me and he sputters, "Oh! You're important!" That threw me for a loop. I had never seen myself that way. My intention has always been to make things work seamlessly for our community members behind the scenes, but it was always extremely gratifying to hear from folks who have been touched by my efforts.

GNOME things still to be transferred to the Board. Suitcase in front is full of items for staffing a GNOME Booth.

What’s next for me? I have not had the time to figure this out yet as I have been spending my time transferring what I can to the Board. First things first; I need to figure out how to write a resumé again. I would love to continue working in the nonprofit space, and obviously have a love of free software. But I am open to exploring new ideas. If anyone has any thoughts or opportunities, I would love to hear them!

This is not adieu; my heart will always be with GNOME. I still have my seat on the Code of Conduct committee and, while I plan on taking a month or so away to figure things out, do plan on returning to do my bit in keeping GNOME a safe place.

If you’d like to drop me a line, I’d love to hear from you. Unfortunately the Board has to keep my current GNOME email address for a few months for the transfer, but I can be reached at <rosanna at gnome> for my personal mail. (Thanks, Bart!)

Best of luck to the Foundation.

Andy Wingo: wastrel, a profligate implementation of webassembly

Planet GNOME - Thu, 30/10/2025 - 11:19 PM

Hey hey hey good evening! Tonight a quick note on wastrel, a new WebAssembly implementation.

a wasm-to-native compiler that goes through c

Wastrel compiles Wasm modules to standalone binaries. It does so by emitting C and then compiling that C.

Compiling Wasm to C isn’t new: Ben Smith wrote wasm2c back in the day and these days most people in this space use Bastien Müller‘s w2c2. These are great projects!

Wastrel has two or three minor differences from these projects. Let's lead with the most important one, despite the fact that it's as yet vaporware: Wastrel aims to support automatic memory management via WasmGC, by embedding the Whippet garbage collection library. (For the wingolog faithful, you can think of Wastrel as a Whiffle for Wasm.) This is the whole point! But let's come back to it.

The other differences are minor. Firstly, the CLI is more like wasmtime: instead of privileging the production of C, which you then incorporate into your project, Wastrel also compiles the C (by default), and even runs it, like wasmtime run.

Unlike wasm2c (but like w2c2), Wastrel implements WASI. Specifically, WASI 0.1, sometimes known as “WASI preview 1”. It’s nice to be able to take the wasi-sdk‘s C compiler, compile your program to a binary that uses WASI imports, and then run it directly.

In a past life, I once took a week-long sailing course on a 12-meter yacht. One thing that comes back to me often is the way the instructor would insist on taking in the bumpers immediately as we left port, that to sail with them was no muy marinero, not very seamanlike. Well one thing about Wastrel is that it emits nice C: nice in the sense that it avoids many useless temporaries. It does so with a lightweight effects analysis, in which as temporaries are produced, they record which bits of the world they depend on, in a coarse way: one bit for the contents of all global state (memories, tables, globals), and one bit for each local. When compiling an operation that writes to state, we flush all temporaries that read from that state (but only that state). It’s a small thing, and I am sure it has very little or zero impact after SROA turns locals into SSA values, but we are vessels of the divine, and it is important for vessels to be C worthy.
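To make that concrete, here is a toy sketch of the bookkeeping in C — illustrative only, with made-up names, not the actual Wastrel code:

#include <stdint.h>

/* One bit for all global state, one bit per local. */
#define READS_GLOBAL_STATE (UINT64_C(1) << 0)  /* memories, tables, globals */
#define READS_LOCAL(i)     (UINT64_C(1) << (1 + (i)))

struct temp {
    uint64_t reads;  /* which state this pending C expression depends on */
};

/* Before emitting an operation that writes the state in `written`,
   any overlapping temporary must be forced into a named C variable. */
static int must_flush(const struct temp *t, uint64_t written) {
    return (t->reads & written) != 0;
}

A store to memory would flush everything that reads global state; a local.set would flush only the temporaries whose READS_LOCAL bit matches.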

Finally, w2c2 at least is built in such a way that you can instantiate a module multiple times. Wastrel doesn’t do that: the Wasm instance is statically allocated, once. It’s a restriction, but that’s the use case I’m going for.

on performance

Oh buddy, who knows?!? What is real anyway? I would love to have proper perf tests, but in the meantime, I compiled coremark using my GCC on x86-64 (-O2, no other options), then also compiled it with the current wasi-sdk, and then ran it with w2c2, wastrel, and wasmtime. I am well aware of the many pitfalls of benchmarking, and so I should not say anything because it is irresponsible to make conclusions from useless microbenchmarks. However, we're all friends here, and I am a dude with hubris who also believes blogs are better out than in, and so I will give some small indications. Please obtain your own salt.

So on coremark, Wastrel is some 2-5% slower than native, and w2c2 is some 2-5% slower than that. Wasmtime is 30-40% slower than GCC. Voilà.

My conclusion is, Wastrel provides state-of-the-art performance. Like w2c2. It’s no wonder, these are simple translators that use industrial compilers underneath. But it’s neat to see that performance is close to native.

on wasi

OK this is going to sound incredibly arrogant but here it is: writing Wastrel was easy. I have worked on Wasm for a while, and on Firefox’s baseline compiler, and Wastrel is kinda like a baseline compiler in shape: it just has to avoid emitting boneheaded code, and can leave the serious work to someone else (Ion in the case of Firefox, GCC in the case of Wastrel). I just had to use the Wasm libraries I already had and make it emit some C for each instruction. It took 2 days.

WASI, though, took two and a half weeks of agony. Three reasons: One, you can be sloppy when implementing just wasm, but when you do WASI you have to implement an ABI using sticks and glue, but you have no glue, it’s all just i32. Truly excruciating, it makes you doubt everything, and I had to refactor Wastrel to use C’s meager type system to the max. (Basically, structs-as-values to avoid type confusion, but via inline functions to avoid overhead.)
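To illustrate the structs-as-values trick (my own sketch with invented names, not the actual Wastrel code): wrapping each kind of i32 in a distinct single-member struct gets the C compiler to catch mix-ups, and inline constructors keep it free at run time.

#include <stdint.h>

typedef struct { uint32_t addr; } guest_ptr;  /* an offset into guest memory */
typedef struct { uint32_t fd; }   host_fd;    /* a WASI file descriptor */

static inline guest_ptr make_guest_ptr(uint32_t v) { return (guest_ptr){ v }; }
static inline host_fd   make_host_fd(uint32_t v)   { return (host_fd){ v }; }

/* Passing a host_fd where a guest_ptr is expected is now a compile-time
   error, even though both are just an i32 on the wire. */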

Two, WASI is not huge but not tiny either. Implementing poll_oneoff is annoying. And so on. Wastrel’s WASI implementation is thin but it’s still a couple thousand lines of code.

Three, WASI is underspecified, and in practice what is “conforming” is a function of what the Rust and C toolchains produce. I used wasi-testsuite to burn down most of the issues, but it was a slog. I neglected email and important things but now things pass so it was worth it maybe? Maybe?

on wasi’s filesystem sandboxing

WASI preview 1 has this “rights” interface that associated capabilities with file descriptors. I think it was an attempt at replacing and expanding file permissions with a capabilities-oriented security approach to sandboxing, but it was only a veneer. In practice most WASI implementations effectively implement the sandbox via a permissions layer: for example the process has capabilities to access the parents of preopened directories via .., but the WASI implementation has to actively prevent this capability from leaking to the compiled module via run-time checks.

Wastrel takes a different approach, which is to use Linux’s filesystem namespaces to build a tree in which only the exposed files are accessible. No run-time checks are necessary; the system is secure by construction. He says. It’s very hard to be categorical in this domain but a true capabilities-based approach is the only way I can have any confidence in the results, and that’s what I did.

The upshot is that Wastrel is only for Linux. And honestly, if you are on MacOS or Windows, what are you doing with your life? I get that it’s important to meet users where they are but it’s just gross to build on a corporate-controlled platform.

The current versions of WASI keep a vestigial capabilities-based API, but given that the goal is to compile POSIX programs, I would prefer if wasi-filesystem leaned into the approach of WASI just having access to a filesystem instead of a small set of descriptors plus scoped openat, linkat, and so on APIs. The security properties would be the same, except with fewer bug possibilities and with a more conventional interface.

on wtf

So Wastrel is Wasm to native via C, but with an as-yet-unbuilt GC aim. Why?

This is hard to explain and I am still workshopping it.

Firstly I am annoyed at the WASI working group’s focus on shared-nothing architectures as a principle of composition. Yes, it works, but garbage collection also works; we could be building different, simpler systems if we leaned in to a more capable virtual machine. Many of the problems that WASI is currently addressing are ownership-related, and would be comprehensively avoided with automatic memory management. Nobody is really pushing for GC in this space and I would like for people to be able to build out counterfactuals to the shared-nothing orthodoxy.

Secondly, there are quite a number of languages targeting WasmGC these days, and it would be nice for them to have a good run-time outside the browser. I know that Wasmtime is working on GC, but it needs competition :)

Finally, and selfishly, I have a GC library! I would love to spend more time on it. One way that can happen is for it to prove itself useful, and maybe a Wasm implementation is a way to do that. Could Wastrel on wasm_of_ocaml output beat ocamlopt? I don’t know but it would be worth it to find out! And I would love to get Guile programs compiled to native, and perhaps with Hoot and Whippet and Wastrel that is a possibility.

Welp, there we go, blog out, dude to bed. Hack at y’all later and wonderful wasming to you all!

Thibault Martin: From VS Code to Helix

Planet GNOME - Wed, 29/10/2025 - 1:00 PM

I created the website you're reading with VS Code. Behind the scenes I use Astro, a static site generator that gets out of the way while providing nice conveniences.

Using VS Code was a no-brainer: everyone in the industry seems to at least be familiar with it, every project can be opened with it, and most projects can get enhancements and syntactic helpers in a few clicks. In short: VS Code is free, easy to use, and widely adopted.

A Rustacean colleague kept singing Helix's praises. I dismissed it because he's much smarter than I am, and I only ever use vim when I need to fiddle with files on a server. I like it when things "Just Work" and didn't want to bother learning how to use Helix, nor how to configure it.

Today it has become my daily driver. Why did I change my mind? What was preventing me from using it before? And how difficult was it to get there?

Automation is a double-edged sword

Automation and technology make work easier, this is why we produce technology in the first place. But it also means you grow more dependent on the tech you use. If the tech is produced transparently by an international team or a team you trust, it's fine. But if it's produced by a single large entity that can screw you over, it's dangerous.

VS Code might be open source, but in practice it's produced by Microsoft. Microsoft has a problematic relationship to consent and is shoving AI products down everyone's throat. I'd rather use tools that respect me and my decisions, and I'd rather not get my tools produced by already monopolistic organizations.

Microsoft is also based in the USA, and the political climate over there makes me want to depend as little as possible on American tools. I know that's a long, uphill battle, but we have to start somewhere.

I'm not advocating for a ban against American tech in general, but for more balance in our supply chain. I'm also not advocating for European tech either: I'd rather get open source tools from international teams competing in a race to the top, rather than from teams in a single jurisdiction. What is happening in the USA could happen in Europe too.

Why I feared using Helix

I've never found vim particularly pleasant to use, but it's everywhere, so I figured I might just get used to it. But one of the things I never liked about vim is the number of moving pieces. By default, vim and neovim are very bare-bones. They can be extended and completely modified with plugins, but I really don't like the idea of having extremely customized tools.

I'd rather have the same editor as everyone else, with a few knobs for minor preferences. I am subject to choice paralysis, so making me configure an editor before I've even started editing is the best way to tank my productivity.

When my colleague told me about Helix, two things struck me as improvements over vim.

  1. Helix's philosophy is that everything should work out of the box. There are a few configs and themes, but everything should work similarly from one Helix to another. All the language-specific logic is handled in Language Servers that implement the Language Server Protocol standard.
  2. In Helix, first you select text, and then you perform operations onto it. So you can visually tell what is going to be changed before you apply the change. It fits my mental model much better.

But there are major drawbacks to Helix too:

  1. After decades of vim, I was scared to re-learn everything. In practice this wasn't a problem at all because of the very visual way Helix works.
  2. VS Code "Just Works", and Helix sounded like more work than the few clicks from VS Code's extension store. This is true, but not as bad as I had anticipated.

After a single week of usage, Helix was already very comfortable to navigate. After a few weeks, most of the wrinkles have been ironed out and I use it as my primary editor. So how did I overcome those fears?

What Helped Just Do It

I tried Helix. It may sound silly, but the very first step to get into Helix was not to overthink it. I just installed it on my Mac with brew install helix and gave it a go. I was not too familiar with it, so I looked up the official documentation and noticed there was a tutorial.

This tutorial alone is what convinced me to try harder. It's an interactive and well-written way to learn how to move and perform basic operations in Helix. I quickly learned how to move around, select things, and surround them with braces or parentheses. I could see what I was about to do before doing it. This was an epiphany. Helix just worked the way I wanted.

Better: I could get things done faster than in VS Code after a few minutes of learning. Being a lazy person, I never bothered looking up VS Code shortcuts. Because the learning curve for Helix is slightly steeper, you have to learn those shortcuts that make moving around feel so easy.

Not only did I quickly get used to Helix key bindings: my vim muscle-memory didn't get in the way at all!

Better docs

The built-in tutorial is a very pragmatic way to get started. You get results fast, you learn hands-on, and it's not that long. But if you want to go further, you have to look for docs. Helix has official docs. They seem to be fairly complete, but they're also impenetrable as a new user. They focus on what the editor supports, not on what I will want to do with it.

After a bit of browsing online, I stumbled upon this third-party documentation website. The domain didn't inspire me a lot of confidence, but the docs are really good. They are clearly laid out and use-case oriented, and they make the most of Astro Starlight to provide a great reading experience. The author tried to upstream these docs, but that won't happen. It looks like they are upstreaming their docs to the current website. I hope this will improve the quality of the upstream docs eventually.

After learning the basics and finding my way through the docs, it was time to ensure Helix was set up to help me where I needed it most.

Getting the most of Markdown and Astro in Helix

In my free time, I mostly use my editor for three things:

  1. Write notes in markdown
  2. Tweak my website with Astro
  3. Edit YAML to faff around with my Kubernetes cluster

Helix is a "stupid" text editor. It doesn't know much about what you're typing. But it supports Language Servers that implement the Language Server Protocol. Language Servers understand the document you're editing. They explain to Helix what you're editing, whether you're in a TypeScript function, typing a markdown link, etc. With that information, Helix and the Language Server can provide code completion hints, errors & warnings, and easier navigation in your code.

In addition to Language Servers, Helix also supports plugging in code formatters. Those are pieces of software that read the document and ensure that it is consistently formatted. A formatter will check that all indentation uses spaces and not tabs, that there is a consistent number of spaces when indenting, that brackets are on the same line as the function, etc. In short: it will make the code pretty.
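As a concrete example, here is roughly what wiring a formatter into languages.toml looks like. I'm using prettier purely as an illustration; any command that reads the document on stdin and writes the formatted result to stdout works:

[[language]]
name = "markdown"
formatter = { command = "prettier", args = ["--parser", "markdown"] }
auto-format = true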

Markdown

Markdown is not really a programming language, so it might seem surprising to configure a Language Server for it. But if you remember what we said earlier, Language Servers can provide code completion, which is useful when creating links for example. Marksman does exactly that!

Since Helix is pre-configured to use marksman for markdown files, we only need to install marksman and make sure it's in our PATH. Installing it with Homebrew is enough.

$ brew install marksman

We can check that Helix is happy with it using the following command:

$ hx --health markdown
Configured language servers:
  ✓ marksman: /opt/homebrew/bin/marksman
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

But Language Servers can also help Helix display errors and warnings, along with "code suggestions" to help fix the issues. That means Language Servers are a perfect fit for... grammar checkers! Several grammar checkers exist. The most notable are:

  • LTEX+, a Language Server built on LanguageTool. It supports several languages but is quite resource hungry.
  • Harper, a grammar checker Language Server developed by Automattic, the people behind WordPress, Tumblr, WooCommerce, Beeper and more. Harper only supports English and its variants, but they intend to support more languages in the future.

I mostly write in English and want to keep a minimalistic setup. Automattic is well funded, and I'm confident they will keep working on Harper to improve it. Since grammar checker LSPs can easily be changed, I've decided to go with Harper for now.

To install it, Homebrew does the job as always:

$ brew install harper

Then I edited my ~/.config/helix/languages.toml to add Harper as a secondary Language Server in addition to marksman:

[language-server.harper-ls]
command = "harper-ls"
args = ["--stdio"]

[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls"]

Finally I can add a markdown linter to ensure my markdown is formatted properly. Several options exist, and markdownlint is one of the most popular. My colleagues recommended the new kid on the block, a Blazing Fast equivalent: rumdl.

Installing rumdl was pretty simple on my Mac. I only had to add the maintainer's repository and install rumdl from it.

$ brew tap rvben/rumdl
$ brew install rumdl

After that, I added a new language server to my ~/.config/helix/languages.toml and appended it to the list of language servers to use for the markdown language.

[language-server.rumdl]
command = "rumdl"
args = ["server"]

[...]

[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls", "rumdl"]
soft-wrap.enable = true
text-width = 80
soft-wrap.wrap-at-text-width = true

Since my website already contained a .markdownlint.yaml, I could import it into the rumdl format with:

$ rumdl import .markdownlint.yaml
Converted markdownlint config from '.markdownlint.yaml' to '.rumdl.toml'
You can now use: rumdl check --config .rumdl.toml .

You might have noticed that I've added a little quality of life improvement: soft-wrap at 80 characters.

Now if you add this to your own languages.toml, you will notice that the text is completely left-aligned. This is not a problem on small screens, but it rapidly gets annoying on wider ones.

Helix doesn't support centering the editor. There is a PR tackling the problem, but it has been stale for most of the year. The maintainers are overwhelmed by the number of PRs coming their way, and it's not clear if or when this PR will be merged.

In the meantime, a workaround exists, with a few caveats. It is possible to add spaces to the left gutter (the column with the line numbers) so it pushes the content towards the center of the screen.

To figure out how many spaces are needed, you need to get your terminal width with stty:

$ stty size
82 243

In my case, when in full screen, my terminal is 243 characters wide. I need to subtract the content column width from it, and divide everything by 2 to get the space needed on each side. For a 243-character-wide terminal with a text width of 80 characters:

(243 - 80) / 2 ≈ 81

As is, I would add 81 spaces to my left gutter to push the rest of the gutter and the content to the right. But the gutter itself is already a few characters wide, and that width has to be subtracted from the total, which leaves me with 76 characters to add.
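If you would rather not do this arithmetic by hand, a throwaway one-liner can compute the value from the current terminal size. This is just a convenience sketch, assuming the 80-character text width used above and a gutter that already takes about 5 columns:

$ stty size | awk '{ print int(($2 - 80) / 2) - 5 }'
76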

I can open my ~/.config/helix/config.toml to add a new key binding that will automatically add or remove those spaces from the left gutter when needed, to shift the content towards the center.

[keys.normal.space.t]
z = ":toggle gutters.line-numbers.min-width 76 3"

Now, when in normal mode, pressing Space, then t, then z will add/remove the spaces. Of course, this workaround only works when the terminal runs in full screen mode.

Astro

Astro works like a charm in VS Code. The team behind it provides a Language Server and a TypeScript plugin to enable code completion and syntax highlighting.

I only had to install those globally with

$ pnpm install -g @astrojs/language-server typescript @astrojs/ts-plugin

Now we need to add a few lines to our ~/.config/helix/languages.toml to tell Helix how to use the language server:

[language-server.astro-ls]
command = "astro-ls"
args = ["--stdio"]
config = { typescript = { tsdk = "/Users/thibaultmartin/Library/pnpm/global/5/node_modules/typescript/lib" }}

[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]

We can check that the Astro Language Server can be used by Helix with:

$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

I also like to have a formatter automatically make my code consistent and pretty when I save a file. One of the most popular code formatters out there is Prettier, but I've decided to go with the fast and easy formatter dprint instead.

I installed it with

$ brew install dprint

Then in the projects I want to use dprint in, I do

$ dprint init

I might edit the dprint.json file to my liking. Finally, I configure Helix to use dprint globally for all Astro projects by appending a few lines in my ~/.config/helix/languages.toml.

[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]
formatter = { command = "dprint", args = ["fmt", "--stdin", "astro"]}
auto-format = true

One final check, and I can see that Helix is ready to use the formatter as well

$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter: ✓ /opt/homebrew/bin/dprint
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

YAML

For YAML, it's simple and straightforward: Helix is preconfigured to use yaml-language-server as soon as it's in the PATH. I just need to install it with:

$ brew install yaml-language-server

Is it worth it?

Helix really grew on me. I find it particularly easy and fast to edit code with it. It takes a tiny bit more work to get the language support than it does in VS Code, but it's nothing insurmountable. There is a slightly steeper learning curve than for VS Code, but I consider it to be a good thing. It forced me to learn how to move around and edit efficiently, because there is no way to do it inefficiently. Helix remains intuitive once you've learned the basics.

I am a GNOME enthusiast, and I adhere to the same principles: I like it when my apps work out of the box, and when I have little to do to configure them. This is a strong stance that often attracts vocal opposition. I like products that follow those principles better than those that don't.

With that said, Helix sometimes feels like it is maintained by one or two people who have a strong vision, but who struggle to onboard more maintainers. As of writing, Helix has more than 350 PRs open. Quite a few bring interesting features, but the maintainers don't have enough time to review them.

Those 350 PRs mean there is a lot of energy and goodwill around the project. People are willing to contribute. Right now, all that energy is gated, resulting in frustration both from the contributors, who feel like they're working in the void, and from the maintainers, who feel like they're at the receiving end of a fire hose.

A solution to make everyone happier without sacrificing the quality of the project would be to work on a Contributor Ladder. CHAOSS' Dr Dawn Foster published a blog post about it, listing interesting resources at the end.

Jakub Steiner: USB MIDI Controllers on the M8

Planet GNOME - Mar, 28/10/2025 - 12:04md

The M8 has extensive USB audio and MIDI capabilities, but it cannot be a USB MIDI host. So you can control other devices through USB MIDI, but you cannot send to it over USB.

Control Surface & Pots for M8

Controlling things via USB devices has to be done through the old TRS (A) jacks. There are two devices that can aid in that. I’ve used the RK06, which is very featureful, but comes in a very clumsy plastic case with a USB micro cable that has a splitter for the HOST part and USB power in. It also sometimes doesn’t reset properly when multiple USB devices are attached through a hub. That last bit (multiple USB devices through a hub) is why I even bother with this setup.

The Dirtywave M8 has amazing support for the Novation Launchpad Pro MK3. The majority of people hook it up directly to the M8 using the TRS MIDI cables. The Launchpad lacks any sort of pots or encoders though, hence the need to fuss with USB dongles. What you need is to use the Launchpad Pro as a USB controller and shun the reliable MIDI connection. The RK06 allows combining multiple USB devices attached through an unpowered USB hub. Because I am flabbergasted by how I got this working, here’s a schema that works.

If it doesn’t work, unplug the RK06 and turn the LPPro off and on in the M8. I hate this setup, but it is the only compact one that works (after some fiddling, which you absolutely hate when doing a gig).

Intech Knot

The Hungarians behind the Grid USB controllers (with first-class Linux support) have a USB-to-MIDI device called Knot. It has one great feature: a switch between TRS A/B for non-standard devices.

It is way less fiddly than the RK06, uses a nice aluminium housing, and is sturdier. However, it doesn’t seem to work with the Launchpad Pro via USB, and it seems to be completely confused by a USB hub, so it’s not useful for my use case of multiple USB controllers.

Non-compact but Reliable

Novation came out with the Launch Control XL, which sadly replaced the pots of the old one with encoders (absolute vs relative movement), but added MIDI in/out/thru, even with a MIDI mixer. That way you can avoid USB altogether and get a reliable setup with control surfaces, encoders, and sliders.

One day someone will come up with compact MIDI-capable pots to play along with the Launchpad Pro ;) This post has been brought to you by an old man who forgets things.

Colin Walters: Thoughts on agentic AI coding as of Oct 2025

Planet GNOME - Hën, 27/10/2025 - 10:08md
Sandboxed, reviewed parallel agents make sense

For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:

  • Agentic AI for coding needs strongly sandboxed, reproducible environments
  • It makes sense to run multiple agents at once
  • AI output definitely needs human review
Why human review is necessary

Prompt injection is a serious risk at scale

All AI is at risk of prompt injection to some degree, but it’s particularly dangerous with agentic coding. The state of the art today can, at best, mitigate it. I don’t think it’s a reason to avoid AI, but it’s one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.

OpenAI’s Codex documentation has a simple and good example of this.

Disabling the tests and claiming success

Beyond that, I’ve experienced multiple times that different models happily disable the tests or add a println!("TODO add testing here") and claim success. At least this one is easier to mitigate with a second agent doing code review before it gets to human review.

Sandboxing

The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.

My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.
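As an illustration of why: a single file checked into the repository pins the environment that both humans and agents build in, locally or in the cloud. Here is a minimal, hypothetical .devcontainer/devcontainer.json; the image and packages are placeholders, not a recommendation:

{
  // One pinned image for humans, CI, and coding agents alike (placeholder)
  "image": "registry.fedoraproject.org/fedora:42",
  // Install the project toolchain reproducibly instead of relying on host state
  "postCreateCommand": "sudo dnf install -y git make gcc",
  // Keep the container unprivileged
  "privileged": false
}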

I don’t agree with this statement from Anthropic’s blog:

without the overhead of spinning up and managing a container.

I don’t think this is overhead for most projects. Where it feels like it has overhead, we should be working to mitigate it.

Running code as separate login users

In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. Personally, the tasks I work on often involve building containers or launching local VMs, and isolating that works really well with a fully separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in, I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user’s context.
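Spelled out as commands, that experiment looks roughly like this; the ai user and the sudoers line are from above, while the useradd invocation is an assumption about your distribution’s defaults:

# Create an unprivileged login user for delegated agent tasks
$ sudo useradd --create-home ai
# Let wheel members open a shell as that user without a password
$ echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' | sudo tee /etc/sudoers.d/ai-login
# Jump from my regular account into the ai user's context
$ sudo machinectl shell ai@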

I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.

Parallel, IaaS-ready agents…with review

These days I’m often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).

It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.

In looking around in this space, there’s quite a lot of stuff. One of them is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.

GitHub Copilot can also do something similar, but what I don’t like about it is that it pushes a model where all of one’s interaction happens in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.

Credentials should be on demand and fine grained for tasks

To me, a huge flaw with Ona, and one shared with other things like Langchain Open-SWE, is basically this: a prompt asking me to grant the tool broad access to act as me.

Sorry, but there is no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code, or even do basic things like edit existing pull requests.

GitHub’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do. I may play with it more, but I also want to use agents outside of GitHub, and I’m not a fan of deepening dependence on a single proprietary forge either.

So I think a key thing agent frontends should help with here is granting fine-grained, ephemeral credentials for dedicated write access while an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond git forges, of course, to other issue trackers or data sources that may be in context.)

Conclusion

There’s so much out there on this, I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in others’ thoughts on this!

Sam Thursfield: Slow Fedora VMs

Planet GNOME - Hën, 27/10/2025 - 12:00md

Good morning!

I spent some time today figuring out why my build PC was running so slowly. Thanks to some help from my very smart colleagues, I came up with this test case in Nushell to measure CPU performance:

~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
~: time bzip2 test.in
0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
112inputs+20576outputs (0major+1706minor)pagefaults 0swap

We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.

But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don’t see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has already been fixed.

Have a fun day!

edit: The problem came back with the new kernel as well. I guess this is not going to be a fun day.

Cassidy James Blaede: I’ve Joined ROOST

Planet GNOME - Hën, 27/10/2025 - 1:00pd

A couple of months ago I shared that I was looking for what was next for me, and I’m thrilled to report that I’ve found it: I’m joining ROOST as OSS Community Manager!

What is ROOST?

I’ll let our website do most of the talking, but I can add some context based on my conversations with the rest of the incredible ROOSTers over the past few weeks. In a nutshell, ROOST is a relatively new nonprofit focused on building, distributing, and maintaining the open source building blocks for online trust and safety. It was founded by tech industry veterans who saw the need for truly open source tools in the space, and were sick of rebuilding the exact same internal tools across multiple orgs and teams.

The way I like to frame it: you wouldn’t roll your own encryption, so why would you roll your own trust and safety tooling? It turns out that currently every platform, service, and community has to reinvent all of the hard work to ensure people are safe and harmful content doesn’t spread. ROOST is teaming up with industry partners to release existing trust and safety tooling as open source and to build the missing pieces together, in the open. The result is that teams will no longer have to start from scratch and take on all of the effort (and risk!) of rolling their own trust and safety tools; instead, they can reach for the open source projects from ROOST to integrate into their own products and systems. And we know open source is the right approach for critical tooling: the tools themselves must be transparent and auditable, while organizations can customize and even help improve this suite of online safety tools to benefit everyone.

This Platformer article does a great job of digging into more detail; give it a read. :) Oh, and why the baby chick? ROOST has a habit of naming things after birds—and I’m a baby ROOSTer. :D

What is trust and safety?

I’ve used the term “trust and safety” a ton in this post; I’m no expert (I’m rapidly learning!), but think of it as protecting users from harm, including unwanted sexual content, misinformation, violent/extremist content, etc. It’s a field that’s much larger in scope and scale than most people probably realize, and it is becoming ever more important as it becomes easier to generate massive amounts of text and graphic content using LLMs and related generative “AI” technologies. Add in that those generative technologies are largely trained using opaque data sources that can themselves include harmful content, and you can imagine how we’re at a flash point for trust and safety; robust open online safety tools like those that ROOST is helping to build and maintain are needed more than ever.

What I’ll be doing

My role is officially “OSS Community Manager,” but “community manager” can mean ten different things to ten different people (which is why people in the role often don’t survive long at a company…). At ROOST, I feel like the team knows exactly what they need me to do—and importantly, they have a nice onramp and initial roadmap for me to take on! My work will mostly focus on building and supporting an active and sustainable contributor community around our tools, as well as helping improve the discourse and understanding of open source in the trust and safety world. It’s an exciting challenge to take on with an amazing team from ROOST as well as partner organizations.

My work with GNOME

I’ll continue to serve on the GNOME Foundation board of directors and contribute to both GNOME and Flathub as much as I can; there may be a bit of a transition time as I get settled into this role, but my open source contributions—both to trust and safety and the desktop Linux world—are super important to me. I’ll see you around!

Aryan Kaushik: Balancing Work and Open Source

Planet GNOME - Dje, 26/10/2025 - 1:00pd
Work pressure + Burnout == Low contributions?

Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?

On the surface, it looks like a weird question. I really enjoy contributing and working with contributors, and when I was in college, I always thought... "Why do people ever step back? It is so fun!". It was the thing that brought a smile to my face and took away any "stress". But now that I have graduated, things have taken a turn.

Now, when work pressure mounts, I use the little time I get not to write code, but to pursue some kind of hobby, learn something new, or spend time with family. Or just endlessly scroll videos and sleep.

This has led to my lowest contribution streak, with no time to work on all those cool things I imagined, like reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues on the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.

It’s reached a point where I’m genuinely unsure how to balance things anymore. Hence, I wanted to give everyone I might not have replied to, or who hasn't seen me around for a long time, an update: I'm still here, just in a dilemma about how to return.

I believe I'm not the only one who faces this. After guiding my juniors for a long while on how to contribute and study at the same time and still manage time for other things, I'm now at the same crossroads myself. So, if anyone has insights on how they manage their time, keep up the motivation, and juggle between tasks, do let me know (akaushik [at] gnome [dot] org), I'd really appreciate it :)

One of them would probably be to take fewer things on my plate?

Perhaps this is just a new phase of learning? Not about code, but about balance.

Flathub Blog: Enhanced License Compliance Tools for Flathub

Planet GNOME - Pre, 24/10/2025 - 2:00pd

tl;dr: Flathub has improved tooling to make license compliance easier for developers. Distros should rebuild OS images with updated runtimes from Flathub; app developers should ensure they're using up-to-date runtimes and verify that licenses and copyright notices are properly included.

In early August, a concerned community member brought to our attention that copyright notices and license files were being omitted when software was bundled as Flatpaks and distributed via Flathub. This was a genuine oversight across multiple projects, and we're glad we've been able to take the opportunity to correct and improve this for runtimes and apps across the Flatpak ecosystem.

Over the past few months, we've been working to enhance our tooling and infrastructure to better support license compliance. With the support of the Flatpak, freedesktop-sdk, GNOME, and KDE teams, we've developed and deployed significant improvements that make it easier than ever for developers to ensure their applications properly include license and copyright notices.

What's New

In coordination with maintainers of the freedesktop-sdk, GNOME, and KDE runtimes, we've implemented enhanced license handling that automatically includes license and copyright notice files in the runtimes themselves, deduplicated to be as space-efficient as possible. This improvement has been applied to all supported freedesktop-sdk, GNOME, and KDE runtimes, plus backported to freedesktop-sdk 22.08 and newer, GNOME 45 and newer, KDE 5.15-22.08 and newer, and KDE 6.6 and newer. These updated runtimes cover over 90% of apps on Flathub and have already rolled out to users as regular Flatpak updates.

We've also worked with the Flatpak developers to add new functionality to flatpak-builder 1.4.5 that automatically recognizes and includes common license files. This enhancement, now deployed to the Flathub build service, helps ensure apps' own licenses as well as the licenses of any bundled libraries are retained and shipped to users along with the app itself.

These improvements represent an important milestone in the maturity of the Flatpak ecosystem, making license compliance easier and more automatic for the entire community.

Recommended Actions App Developers

We encourage you to rebuild your apps with flatpak-builder 1.4.5 or newer to take advantage of the new automatic license detection. You can verify that license and copyright notices are properly included in your Flatpak's /app/share/licenses, both for your app and any included dependencies. In most cases, simply rebuilding your app will automatically include the necessary licenses, but you can also fine-tune which license files are included using the license-files key in your app's Flatpak manifest if needed.
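One quick way to do that verification is to list the licenses directory inside the app sandbox after a rebuild. A sketch, with org.example.App standing in for your real app ID:

$ flatpak run --command=ls org.example.App /app/share/licenses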

For apps with binary sources (e.g. debs or rpms), we encourage app maintainers to explicitly include relevant license files in the Flatpak itself for consistency and auditability.

End-of-life runtime transition: To focus our resources on maintaining high-quality, up-to-date runtimes, we'll be completing the removal of several end-of-life runtimes in January 2026. Apps using runtimes older than freedesktop-sdk 22.08, GNOME 45, KDE 5.15-22.08 or KDE 6.6 will be marked as EOL shortly. Once these older runtimes are removed, the apps will need to be updated to use a supported runtime to remain available on Flathub. While this won't affect existing app installations, after this date, new users will be unable to install these apps from Flathub until they're rebuilt against a current runtime. Flatpak manifests of any affected apps will remain on the Flathub GitHub organization to enable developers to update them at any time.

If your app currently targets an end-of-life runtime that did receive the backported license improvements, we still strongly encourage you to upgrade to a newer, supported runtime to benefit from ongoing security updates and platform improvements.

Distributors

If you redistribute binaries from Flathub, such as pre-installed runtimes or apps, you should rebuild your distributed images (ISOs, containers, etc.) with the updated runtimes and apps from Flathub. You can verify that appropriate licenses are included with the Flatpaks in the runtime filesystem at /usr/share/licenses inside each runtime.
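The same kind of spot-check works for runtimes, which flatpak can enter directly. A sketch, assuming the runtime is installed locally (the runtime name and branch are examples):

$ flatpak run --command=ls org.freedesktop.Platform//24.08 /usr/share/licenses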

Get in Touch

App developers, distributors, and community members are encouraged to connect with the team and other members of the community in our Discourse forum and Matrix chat room. If you are an app developer or distributor and have any questions or concerns, you may also reach out to us at admins@flathub.org.

Thank You!

We are grateful to Jef Spaleta from Fedora for his care and confidentiality in bringing this to our attention and working with us collaboratively throughout the process. Special thanks to Boudhayan Bhattacharya (bbhtt) for his tireless work across Flathub, Flatpak and freedesktop-sdk, on this as well as many other important areas. And thank you to Abderrahim Kitouni (akitouni), Adrian Vovk (AdrianVovk), Aleix Pol Gonzalez (apol), Bart Piotrowski (barthalion), Ben Cooksley (bcooksley), Javier Jardón (jjardon), Jordan Petridis (alatiera), Matthias Clasen (matthiasc), Rob McQueen (ramcq), Sebastian Wick (swick), Timothée Ravier (travier), and any others behind the scenes for their hard work and timely collaboration across multiple projects to deliver these improvements.

Our Linux app ecosystem is truly strongest when individuals from across companies and projects come together to collaborate and work towards shared goals. We look forward to continuing to work together to ensure app developers can easily ship their apps to users across all Linux distributions and desktop environments. ♥

Matthias Clasen: SVG in GTK

Planet GNOME - Enj, 23/10/2025 - 12:34md

GTK has been using SVG for symbolic icons since essentially forever. It hasn’t been a perfect relationship, though.

Pre-History

For the longest time (all through the GTK 3 era, and until recently), we’ve used librsvg indirectly, through gdk-pixbuf, to obtain rendered icons, and then we used some pixel tricks to recolor the resulting image according to the theme.

This works, but it gives up on the defining feature of SVG: its scalability.

Once you’ve rasterized your icon at a given size, all you’re left with is pixels. In the GTK 3 era, this wasn’t a problem, but in GTK 4, we have a scene graph and fractional scaling, so we could do *much* better if we don’t rasterize early.

Unfortunately, librsvg’s API isn’t set up to let us easily translate SVG into our own render nodes. And its rust nature makes for an inconvenient dependency, so we held off on depending on it for a long time.

History

Early this year, I grew tired of this situation, and decided to improve our story for icons, and symbolic ones in particular.

So I set out to see how hard it would be to parse the very limited subset of SVG used in symbolic icons myself. It turned out to be relatively easy. I quickly managed to parse 99% of the Adwaita symbolic icons, so I decided to merge this work for GTK 4.20.

There were some detours and complications along the way. Since my simple parser couldn’t handle 100% of Adwaita (let alone all of the SVGs out there), a fallback to a proper SVG parser was needed. So we added a librsvg dependency after all. Since our new Android backend has an even more difficult time with rust than our other backends, we needed to arrange for a non-rust librsvg branch to be used when necessary.

One thing that this hand-rolled SVG parser improved upon is that it allows stroking, in addition to filling. I documented the format for symbolic icons here.

Starting over

A bit later, I was inspired by Apple’s SF Symbols work to look into how hard it would be to extend my SVG parser with a few custom attributes to enable dynamic strokes.

It turned out to be easy again. With a handful of attributes, I could create plausible-looking animations and transitions. And it was fun to play with. When I showed this work to Jakub and Lapo at GUADEC, they were intrigued, so I decided to keep pushing this forward, and it landed in early GTK 4.21, as GtkPathPaintable.

https://blogs.gnome.org/gtk/files/2025/10/path-anim5.webm

To make experimenting with this easier, I made a quick editor.  It was invaluable to have Jakub as an early adopter play with the editor while I was improving the implementation. Some very good ideas came out of this rapid feedback cycle, for example dynamic stroke width.

You can get some impression of the new stroke-based icons Jakub has been working on here.

Recent happenings

As summer was turning to fall, I felt that I should attempt to support SVG more completely, including grouping and animations. GTK’s rendering infrastructure has most of the pieces that are required for SVG after all: transforms, filters, clips, paths, gradients are all supported.

This was *not* easy.

But eventually, things started to fall into place. And this week, I’ve replaced GtkPathPaintable with GtkSvg, which is a GdkPaintable that supports SVG. At least, the subset of SVG that is most relevant for icons. And that includes animations.

https://blogs.gnome.org/gtk/files/2025/10/Screencast-From-2025-10-22-18-13-02.webm

 

This is still a subset of full SVG, but converting a few random Lottie files to SVG animations gave me a decent success rate for getting things to display mostly OK.

The details are spelled out here.

Summary

GTK 4.22 will natively support SVG, including SVG animations.

If you’d like to help improve this further, here are some suggestions.

If you would like to support the GNOME Foundation, whose infrastructure and hosting GTK relies on, please donate.

Jonathan Blandford: Crosswords 0.3.16: 2025 Internship Results

Planet GNOME - Enj, 23/10/2025 - 8:00pd

Time for another GNOME Crosswords release! This one highlights the features our interns did this past summer. We had three fabulous interns — two through GSoC and one through Outreachy. While this release really only has three big features — one from each — they were all fantastic.

Thanks go to my fellow GSoC mentors Federico and Tanmay. In addition, Tilda and the folks at Outreachy were extremely helpful. Mentorship is a lot of work, but it’s also super-rewarding. If you’re interested in participating as a mentor in the future and have any questions about the process, let me know. I’ll be happy to speak with you about them.

Dictionary pipeline improvements

First, our Outreachy intern Nancy spent the summer improving the build pipeline that generates the internal dictionaries we use. These dictionaries provide autofill capabilities and definitions in the Editor, as well as near-instant completions for both the Editor and Player. The old pipeline was buggy and hard to maintain. Once we had cleaned it up, Nancy was able to use it to effortlessly produce a dictionary in her native tongue: Swahili.

A grid in Swahili

We have no distribution story yet, but it's exciting that creating dictionaries in other languages is now so much easier. It opens the door to the Editor being more universally useful (and fulfills a core GNOME tenet).

You can read about it more details in Nancy’s final report.

Word List

Victor did a ton of research for Crosswords, almost acting like a Product Manager. He installed every crossword editor he could find and did a competitive analysis, noting possible areas for improvement. One of the areas he flagged was the word list in our editor. This list suggests words that could be used in a given spot in the grid. We started with a simplistic implementation that listed every possible word in our dictionary that could fit. This approach — while fast — suggested a lot of dead words that would make the grid unsolvable. So he set about trying to narrow down that list.

New Word List showing possible options

It turns out that there are a lot of tradeoffs to be made here (see Victor’s post). It’s possible to find a really good set of words, at the cost of a lot of computational power. A much simpler list is quick to compute but has dead words. In the end, we found a happy medium that gets us results fast and keeps the list stable across a clue. He’ll be blogging about this shortly.

Victor also cleaned up our development docs and researched SAT-solving algorithms for the grid. He’s working on a lovely doc on the AC-3 algorithm, and we can use it to add additional functionality to the editor in the future.

Printing

Toluwaleke implemented printing support for GNOME Crosswords.

This was a tour de force, and a phenomenal addition to the Crosswords codebase. When I proposed it as a GSoC project, I had no idea how much work it would involve. We already had code to produce an SVG of the grid — I thought that we could just quickly add support for the clues and call it a day. Instead, we ended up going on a wild ride, resulting in a significantly stronger feature and codebase than we had going in.

His blog has more detail and it’s really quite cool (go read it!). But from my perspective, we ended up with a flexible and fast rendering system that can be used in a lot more places. Take a look:

https://blogs.gnome.org/jrb/files/2025/10/output_video.webm

The resulting PDFs are really high quality — they seem to look better than some of the newspaper puzzles I’ve seen. We’ll keep tweaking them as there are still a lot of improvements we’d like to add, such as taking the High Contrast / Large Text A11Y options into account. But it’s a tremendous basis for future work.

Increased Polish

There were a few other small things that happened:

  • I hooked Crosswords up to Damned Lies. This led to an increase in our translation quality and count
  • This included a Polish translation, which came with a new downloader!
  • I ported all the dialogs to AdwDialog, and moved on from (most of) the deprecated Gtk4 widgets
  • A lot of code cleanups and small fixes

Now that these big changes have landed, it’s time to go back to working on the rest of the changes proposed for GNOME Circle.

Until next time, happy puzzling!

Toluwaleke Ogundipe: GSoC Final Report: Printing in GNOME Crosswords

Planet GNOME - Enj, 23/10/2025 - 12:50pd

A few months ago, I introduced my GSoC project: Adding Printing Support to GNOME Crosswords. Since June, I’ve been working hard on it, and I’m happy to share that printing puzzles is finally possible!

The Result

GNOME Crosswords now includes a Print option in its menu, which opens the system’s print dialog. After adjusting printer settings and page setup, the user is shown a preview dialog with a few crossword-specific options, such as ink-saving mode and whether (and how) to include the solution. The options are intentionally minimal, keeping the focus on a clean and straightforward printing experience.

Below is a short clip showing the feature in action:

The resulting file: output.pdf

Crosswords now also ships with a standalone command-line tool, ipuz2pdf, which converts any IPUZ puzzle file into a print-ready PDF. It offers a similarly minimal set of layout and crossword-specific options.

The Process
  • Studied and profiled the existing code and came up with an overall approach for the project.
  • Built a new grid rendering framework, resulting in a 10× speedup in rendering. Dealt with a ton of details around text placement and rendering, colouring, shapes, and more.
  • Designed and implemented a print layout engine with a templating system, adjusted to work with different puzzle kinds, grid sizes, and paper sizes.
  • Integrated the layout engine with the print dialog and added a live print preview.
  • Bonus: Created ipuz2pdf, a standalone command-line utility (originally for testing) that converts an IPUZ file into a printable PDF.
The Challenges

Working on a feature of this scale came with plenty of challenges. Getting familiar with a large codebase took patience, and understanding how everything fit together often meant careful study and experimentation. Balancing ideas with the project timeline and navigating code reviews pushed me to grow both technically and collaboratively.

On the technical side, rendering and layout had their own hurdles. Handling text metrics, scaling, and coordinate transformations required a mix of technical knowledge, critical thinking, and experimentation. Even small visual glitches could lead to hours of debugging. One notably difficult part was implementing the box layout system that powers the dynamic print layout engine.

The Lessons

This project taught me a lot about patience, focus, and iteration. I learned to approach large problems by breaking them into small, testable pieces, and to value clarity and simplicity in both code and design. Code reviews taught me to communicate ideas better, accept feedback gracefully, and appreciate different perspectives on problem-solving.

On the technical side, working with rendering and layout systems deepened my understanding of graphics programming. I also learned how small design choices can ripple through an entire codebase, and how careful abstraction and modularity can make complex systems easier to evolve.

Above all, I learned the value of collaboration, and that progress in open source often comes from many small, consistent improvements rather than big leaps.

The Conclusion

In the end, I achieved all the goals set out for the project, and even more. It was a long and taxing journey, but absolutely worth it.

The Gratitude

I’m deeply grateful to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance, patience, and support throughout this project. I’ve learned so much from working with them. I’m also grateful to the GNOME community and Google Summer of Code for making this opportunity possible and for creating such a welcoming environment for new contributors.

What Comes After

No project is ever truly finished, and this one is no exception. There’s still plenty to be done, and some already have tracking issues. I plan to keep improving the printing system and related features in GNOME Crosswords.

I also hope to stay involved in the GNOME ecosystem and open-source development in general. I’m especially interested in projects that combine design, performance, and system-level programming. More importantly, I’m a recent CS graduate looking for a full-time role in the field of interest stated earlier. If you have or know of any opportunities, please reach out at feyidab01@gmail.com.

Finally, I plan to write a couple of follow-up posts diving into interesting parts of the process in more detail. Stay tuned!

Thank you!
