I created the website you're reading with VS Code. Behind the scenes I use Astro, a static site generator that gets out of the way while providing nice conveniences.
Using VS Code was a no-brainer: everyone in the industry seems to at least be familiar with it, every project can be opened with it, and most projects can get enhancements and syntactic helpers in a few clicks. In short: VS Code is free, easy to use, and widely adopted.
A Rustacean colleague kept singing Helix's praises. I dismissed it: he's much smarter than I am, and I only ever use vim when I need to fiddle with files on a server. I like when things "Just Work" and I didn't want to bother learning how to use Helix, nor how to configure it.
Today it has become my daily driver. Why did I change my mind? What was preventing me from using it before? And how difficult was it to get there?
## Automation is a double-edged sword

Automation and technology make work easier; that's why we produce technology in the first place. But it also means you grow more dependent on the tech you use. If the tech is produced transparently by an international team, or by a team you trust, that's fine. But if it's produced by a single large entity that can screw you over, it's dangerous.
VS Code might be open source, but in practice it's produced by Microsoft. Microsoft has a problematic relationship to consent and is shoving AI products down everyone's throat. I'd rather use tools that respect me and my decisions, and I'd rather not get my tools produced by already monopolistic organizations.
Microsoft is also based in the USA, and the political climate over there makes me want to depend as little as possible on American tools. I know that's a long, uphill battle, but we have to start somewhere.
I'm not advocating for a ban against American tech in general, but for more balance in our supply chain. I'm also not advocating for European tech either: I'd rather get open source tools from international teams competing in a race to the top, rather than from teams in a single jurisdiction. What is happening in the USA could happen in Europe too.
## Why I feared using Helix

I've never found vim particularly pleasant to use, but it's everywhere, so I figured I might just get used to it. One of the things I never liked about vim, though, is the number of moving pieces. By default, vim and neovim are very bare-bones. They can be extended and completely modified with plugins, but I really don't like the idea of having extremely customized tools.
I'd rather have the same editor as everyone else, with a few knobs for minor preferences. I am subject to choice paralysis, so making me configure an editor before I've even started editing is the best way to tank my productivity.
When my colleague told me about Helix, two things struck me as improvements over vim: the selection-first editing model, where you see what you're about to act on before you do it, and the batteries-included approach, with Language Server and syntax support built in rather than bolted on through plugins.
But there are major drawbacks to Helix too.
After a single week of usage, Helix was already very comfortable to navigate. After a few weeks, most of the wrinkles had been ironed out, and I now use it as my primary editor. So how did I overcome those fears?
## What Helped

### Just Do It

I tried Helix. It may sound silly, but the very first step to getting into Helix was not to overthink it. I just installed it on my Mac with brew install helix and gave it a go. I was not too familiar with it, so I looked up the official documentation and noticed there was a tutorial.
This tutorial alone is what convinced me to try harder. It's an interactive and well-written way to learn how to move and perform basic operations in Helix. I quickly learned how to move around, select things, and surround them with braces or parentheses. I could see what I was about to do before doing it. This was an epiphany. Helix just worked the way I wanted.
Better: after only a few minutes of learning, I could get things done faster than in VS Code. Being a lazy person, I never bothered looking up VS Code shortcuts. Because the learning curve for Helix is slightly steeper, you have to learn the shortcuts, and those are what make moving around feel so easy.
Not only did I quickly get used to Helix key bindings: my vim muscle-memory didn't get in the way at all!
### Better docs

The built-in tutorial is a very pragmatic way to get started. You get results fast, you learn hands-on, and it's not that long. But if you want to go further, you have to look for docs. Helix has official docs. They seem fairly complete, but they're also impenetrable as a new user: they focus on what the editor supports, not on what I want to do with it.
After a bit of browsing online, I stumbled upon this third-party documentation website. The domain didn't inspire me a lot of confidence, but the docs are really good: clearly laid out, use-case oriented, and they make the most of Astro Starlight to provide a great reading experience. The author tried to upstream these docs, but that won't happen; it looks like they are instead maintaining them on the current website. I hope this will improve the quality of the upstream docs eventually.
After learning the basics and finding my way through the docs, it was time to ensure Helix was set up to help me where I needed it most.
## Getting the most out of Markdown and Astro in Helix

In my free time, I mostly use my editor for three things: writing blog posts in markdown, hacking on my Astro website, and editing the occasional YAML file.
Helix is a "stupid" text editor. It doesn't know much about what you're typing. But it supports Language Servers that implement the Language Server Protocol. Language Servers understand the document you're editing. They explain to Helix what you're editing, whether you're in a TypeScript function, typing a markdown link, etc. With that information, Helix and the Language Server can provide code completion hints, errors & warnings, and easier navigation in your code.
In addition to Language Servers, Helix also supports plugging in code formatters. Those are pieces of software that read the document and ensure it is consistently formatted: they check that all indentation uses spaces and not tabs, that there is a consistent number of spaces per indent level, that brackets are on the same line as the function, etc. In short: they make the code pretty.
### Markdown

Markdown is not really a programming language, so it might seem surprising to configure a Language Server for it. But if you remember what we said earlier, Language Servers can provide code completion, which is useful when creating links for example. Marksman does exactly that!
Since Helix is pre-configured to use marksman for markdown files, we only need to install marksman and make sure it's in our PATH. Installing it with homebrew is enough:
```
$ brew install marksman
```

We can check that Helix is happy with it with the following command:
```
$ hx --health markdown
Configured language servers:
  ✓ marksman: /opt/homebrew/bin/marksman
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘
```

But Language Servers can also help Helix display errors and warnings, along with "code suggestions" to help fix the issues. That makes Language Servers a perfect fit for... grammar checkers! Several grammar checkers exist as Language Servers.
I mostly write in English and want to keep a minimalistic setup. Harper, backed by Automattic, fits the bill: Automattic is well funded, and I'm confident they will keep working on Harper to improve it. Since grammar checker LSPs can easily be changed, I've decided to go with Harper for now.
To install it, homebrew does the job as always:
```
$ brew install harper
```

Then I edited my ~/.config/helix/languages.toml to add Harper as a secondary Language Server in addition to marksman:
```toml
[language-server.harper-ls]
command = "harper-ls"
args = ["--stdio"]

[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls"]
```

Finally, I can add a markdown linter to ensure my markdown is formatted properly. Several options exist, and markdownlint is one of the most popular. My colleagues recommended the new kid on the block, a Blazing Fast equivalent: rumdl.
Installing rumdl was pretty simple on my Mac. I only had to add the repository of the maintainer, and install rumdl from it:
```
$ brew tap rvben/rumdl
$ brew install rumdl
```

After that, I added a new language server to my ~/.config/helix/languages.toml and added it to the language servers used for the markdown language:
```toml
[language-server.rumdl]
command = "rumdl"
args = ["server"]

[...]

[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls", "rumdl"]
soft-wrap.enable = true
text-width = 80
soft-wrap.wrap-at-text-width = true
```

Since my website already contained a .markdownlint.yaml, I could import it into the rumdl format with:
```
$ rumdl import .markdownlint.yaml
Converted markdownlint config from '.markdownlint.yaml' to '.rumdl.toml'
You can now use: rumdl check --config .rumdl.toml .
```

You might have noticed that I've added a little quality-of-life improvement: soft-wrap at 80 characters.
Now if you add this to your own languages.toml, you will notice that the text is completely left-aligned. This is not a problem on small screens, but it rapidly gets annoying on wider ones.
Helix doesn't support centering the editor. There is a PR tackling the problem, but it has been stale for most of the year. The maintainers are overwhelmed by the number of PRs coming their way, and it's not clear if or when this PR will be merged.
In the meantime, a workaround exists, with a few caveats. It is possible to add spaces to the left gutter (the column with the line numbers) so it pushes the content towards the center of the screen.
To figure out how many spaces are needed, you need to get your terminal width with stty:
```
$ stty size
82 243
```

In my case, when in full screen, my terminal is 243 characters wide. I need to subtract the content column width from it, and divide the result by 2 to get the space needed on each side. For a 243-character-wide terminal with a text width of 80 characters:
```
(243 - 80) / 2 ≈ 81
```

As is, I would add 81 spaces to my left gutter to push the rest of the gutter and the content to the right. But the gutter itself is 4 characters wide, which I also need to subtract from the total, leaving me with about 76 characters to add.
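If you don't want to do the arithmetic by hand, a small shell one-liner can compute the padding (a sketch assuming the 80-character text width and 4-column gutter used above):

```
# terminal columns, minus the text width, halved, minus the gutter width
$ echo $(( ($(tput cols) - 80) / 2 - 4 ))
```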
I can open my ~/.config/helix/config.toml and add a new key binding that automatically adds or removes those spaces from the left gutter when needed, shifting the content towards the center:
```toml
[keys.normal.space.t]
z = ":toggle gutters.line-numbers.min-width 76 3"
```

Now, in normal mode, pressing <kbd>Space</kbd> then <kbd>t</kbd> then <kbd>z</kbd> will add/remove the spaces. Of course, this workaround only works when the terminal runs in full screen mode.
### Astro

Astro works like a charm in VS Code. The team behind it provides a Language Server and a TypeScript plugin to enable code completion and syntax highlighting.
I only had to install those globally with:
```
$ pnpm install -g @astrojs/language-server typescript @astrojs/ts-plugin
```

Now we need to add a few lines to our ~/.config/helix/languages.toml to tell it how to use the language server:
```toml
[language-server.astro-ls]
command = "astro-ls"
args = ["--stdio"]
config = { typescript = { tsdk = "/Users/thibaultmartin/Library/pnpm/global/5/node_modules/typescript/lib" }}

[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]
```

We can check that the Astro Language Server can be used by Helix with:
```
$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘
```

I also like to have a formatter automatically make my code consistent and pretty when I save a file. One of the most popular code formatters out there is Prettier, but I've decided to go with the fast and easy formatter dprint instead.
I installed it with:
```
$ brew install dprint
```

Then, in the projects I want to use dprint in, I do:
```
$ dprint init
```

I might edit the dprint.json file to my liking. Finally, I configure Helix to use dprint globally for all Astro projects by appending a few lines to my ~/.config/helix/languages.toml:
```toml
[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]
formatter = { command = "dprint", args = ["fmt", "--stdin", "astro"]}
auto-format = true
```

One final check, and I can see that Helix is ready to use the formatter as well:
```
$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter: ✓ /opt/homebrew/bin/dprint
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘
```

### YAML

For yaml, it's simple and straightforward: Helix is preconfigured to use yaml-language-server as soon as it's in the PATH. I just need to install it:

```
$ brew install yaml-language-server
```
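As with the other languages, a quick health check confirms it was picked up; the output follows the same format as the markdown and astro checks above:

```
$ hx --health yaml
```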
## Is it worth it?

Helix really grew on me. I find it particularly easy and fast to edit code with. It takes a tiny bit more work to set up language support than in VS Code, but it's nothing insurmountable. There is a slightly steeper learning curve than for VS Code, but I consider that a good thing: it forced me to learn how to move around and edit efficiently, because there is no way to do it inefficiently. Helix remains intuitive once you've learned the basics.
I am a GNOME enthusiast, and I adhere to the same principles: I like my apps to work out of the box, with little configuration needed. This is a strong stance that often attracts vocal opposition, but I like products that follow those principles better than those that don't.
With that said, Helix sometimes feels like it is maintained by one or two people who have a strong vision, but who struggle to onboard more maintainers. As of writing, Helix has more than 350 PRs open. Quite a few bring interesting features, but the maintainers don't have enough time to review them.
Those 350 PRs mean there is a lot of energy and goodwill around the project. People are willing to contribute. Right now, all that energy is gated, resulting in frustration both from the contributors, who feel like they're working into the void, and the maintainers, who feel like they're at the receiving end of a fire hose.
A solution to make everyone happier without sacrificing the quality of the project would be to work on a Contributor Ladder. CHAOSS' Dr Dawn Foster published a blog post about it, listing interesting resources at the end.
For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:
All AI is at risk of prompt injection to some degree, but it's particularly dangerous with agentic coding. The best the state of the art can do today is mitigate it. I don't think it's a reason to avoid AI, but it's one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.
OpenAI’s Codex documentation has a simple and good example of this.
## Disabling the tests and claiming success

Beyond that, I've seen different models, multiple times, happily disable the tests or add a println!("TODO add testing here") and claim success. At least this one is easier to mitigate with a second agent doing code review before things get to human review.
## Sandboxing

The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.
My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.
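To give an idea of the overhead involved: a minimal devcontainer definition is a few lines of JSON. This is a sketch using the stock devcontainers base image; a real project would swap in an image matching its toolchain:

```json
// .devcontainer/devcontainer.json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
```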
I don’t agree with this statement from Anthropic’s blog:

> without the overhead of spinning up and managing a container.
I don’t think this is overhead for most projects. Where it feels like it has overhead, we should be working to mitigate it.
## Running code as separate login users

In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. The tasks I work on often involve building containers or launching local VMs, and isolating that works really well with a fully separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in, I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user’s context.
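For the curious, a minimal sketch of that setup, assuming a systemd-based distribution with machinectl available and your human user in the wheel group:

```
# create an unprivileged "ai" user with its own home directory
$ sudo useradd --create-home ai

# let wheel members open a shell as "ai" without a password
$ echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' | sudo tee /etc/sudoers.d/ai-login

# get a login shell in the ai user's context
$ sudo machinectl shell ai@
```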
I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.
## Parallel, IaaS-ready agents…with review

These days I’m often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).
It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.
Looking around this space, there’s quite a lot of stuff. One offering is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.
GitHub Copilot can also do something similar, but what I don’t like about it is that it pushes a model where all of one’s interaction happens in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.
## Credentials should be on-demand and fine-grained for tasks

To me, a huge flaw with Ona, shared with other things like Langchain Open-SWE, is basically this:

[screenshot: an authorization prompt asking me to grant the agent broad access to act as my GitHub account]
Sorry but: no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code or even do basic things like edit existing pull requests.
GitHub’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do, and I may play with it more, but I want to use agents outside of GitHub, and I’m not a fan of deepening dependence on a single proprietary forge either.
So I think a key thing agent frontends should help with here is granting fine-grained, ephemeral credentials for dedicated write access while an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond git forges, of course: the same applies to other issue trackers or data sources that may be in context.)
## Conclusion

There’s so much out there on this that I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in others’ thoughts on this!
A couple of months ago I shared that I was looking for what was next for me, and I’m thrilled to report that I’ve found it: I’m joining ROOST as OSS Community Manager!
## What is ROOST?

I’ll let our website do most of the talking, but I can add some context based on my conversations with the rest of the incredible ROOSTers over the past few weeks. In a nutshell, ROOST is a relatively new nonprofit focused on building, distributing, and maintaining the open source building blocks for online trust and safety. It was founded by tech industry veterans who saw the need for truly open source tools in the space, and were sick of rebuilding the exact same internal tools across multiple orgs and teams.
The way I like to frame it: you wouldn’t roll your own encryption, so why would you roll your own trust and safety tooling? It turns out that currently every platform, service, and community has to reinvent all of the hard work to ensure people are safe and harmful content doesn’t spread. ROOST is teaming up with industry partners to release existing trust and safety tooling as open source and to build the missing pieces together, in the open. The result is that teams will no longer have to start from scratch and take on all of the effort (and risk!) of rolling their own trust and safety tools; instead, they can reach for the open source projects from ROOST to integrate into their own products and systems. And we know open source is the right approach for critical tooling: the tools themselves must be transparent and auditable, while organizations can customize and even help improve this suite of online safety tools to benefit everyone.
This Platformer article does a great job of digging into more detail; give it a read. :) Oh, and why the baby chick? ROOST has a habit of naming things after birds—and I’m a baby ROOSTer. :D
## What is trust and safety?

I’ve used the term “trust and safety” a ton in this post; I’m no expert (I’m rapidly learning!), but think about protecting users from harm including unwanted sexual content, misinformation, violent/extremist content, etc. It’s a field that’s much larger in scope and scale than most people probably realize, and is becoming ever more important as it becomes easier to generate massive amounts of text and graphic content using LLMs and related generative “AI” technologies. Add in that those generative technologies are largely trained using opaque data sources that can themselves include harmful content, and you can imagine how we’re at a flash point for trust and safety; robust open online safety tools like those that ROOST is helping to build and maintain are needed more than ever.
## What I’ll be doing

My role is officially “OSS Community Manager,” but “community manager” can mean ten different things to ten different people (which is why people in the role often don’t survive long at a company…). At ROOST, I feel like the team knows exactly what they need me to do—and importantly, they have a nice onramp and initial roadmap for me to take on! My work will mostly focus on building and supporting an active and sustainable contributor community around our tools, as well as helping improve the discourse and understanding of open source in the trust and safety world. It’s an exciting challenge to take on with an amazing team from ROOST as well as partner organizations.
## My work with GNOME

I’ll continue to serve on the GNOME Foundation board of directors and contribute to both GNOME and Flathub as much as I can; there may be a bit of a transition time as I get settled into this role, but my open source contributions—both to trust and safety and the desktop Linux world—are super important to me. I’ll see you around!
Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?
On the surface, it looks like a weird question. I really enjoy contributing and working with contributors, and when I was in college, I always thought, "Why do people ever step back? It is so fun!" It was the thing that brought a smile to my face and took away any "stress". But now that I have graduated, things have taken a turn.

Now, when work pressure mounts, I use the little time I get not to write code, but to pursue some hobby, learn something new, or spend time with family. Or just endlessly scroll videos and sleep.

This has led me to my lowest contribution streak, not able to work on all those cool things I imagined, like reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues on the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.

It's reached a point where I'm genuinely unsure how to balance things anymore. So I wanted to give an update to everyone I might not have replied to, or who hasn't seen me around in a long time: I'm still here, just in a dilemma about how to return.

I believe I'm not the only one who faces this. After a long while of guiding my juniors on how to contribute and study at the same time and still manage time for other things, I now find myself in the same situation. So if anyone has any insights on how they manage their time, keep up the motivation, and juggle tasks, do let me know (akaushik [at] gnome [dot] org), I'd really appreciate it :)
One of them would probably be to take fewer things on my plate?
Perhaps this is just a new phase of learning? Not about code, but about balance.
tl;dr: Flathub has improved tooling to make license compliance easier for developers. Distros should rebuild OS images with updated runtimes from Flathub; app developers should ensure they're using up-to-date runtimes and verify that licenses and copyright notices are properly included.
In early August, a concerned community member brought to our attention that copyright notices and license files were being omitted when software was bundled as Flatpaks and distributed via Flathub. This was a genuine oversight across multiple projects, and we're glad we've been able to take the opportunity to correct and improve this for runtimes and apps across the Flatpak ecosystem.
Over the past few months, we've been working to enhance our tooling and infrastructure to better support license compliance. With the support of the Flatpak, freedesktop-sdk, GNOME, and KDE teams, we've developed and deployed significant improvements that make it easier than ever for developers to ensure their applications properly include license and copyright notices.
## What's New

In coordination with maintainers of the freedesktop-sdk, GNOME, and KDE runtimes, we've implemented enhanced license handling that automatically includes license and copyright notice files in the runtimes themselves, deduplicated to be as space-efficient as possible. This improvement has been applied to all supported freedesktop-sdk, GNOME, and KDE runtimes, plus backported to freedesktop-sdk 22.08 and newer, GNOME 45 and newer, KDE 5.15-22.08 and newer, and KDE 6.6 and newer. These updated runtimes cover over 90% of apps on Flathub and have already rolled out to users as regular Flatpak updates.
We've also worked with the Flatpak developers to add new functionality to flatpak-builder 1.4.5 that automatically recognizes and includes common license files. This enhancement, now deployed to the Flathub build service, helps ensure apps' own licenses as well as the licenses of any bundled libraries are retained and shipped to users along with the app itself.
These improvements represent an important milestone in the maturity of the Flatpak ecosystem, making license compliance easier and more automatic for the entire community.
## Recommended Actions

### App Developers

We encourage you to rebuild your apps with flatpak-builder 1.4.5 or newer to take advantage of the new automatic license detection. You can verify that license and copyright notices are properly included in your Flatpak's /app/share/licenses, both for your app and any included dependencies. In most cases, simply rebuilding your app will automatically include the necessary licenses, but you can also fine-tune which license files are included using the license-files key in your app's Flatpak manifest if needed.
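One quick way to check is to open a shell inside the app's sandbox and list that directory (a sketch; org.example.App is a placeholder for your app's actual ID):

```
$ flatpak run --command=sh org.example.App
# then, inside the sandbox:
$ ls /app/share/licenses
```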
For apps with binary sources (e.g. debs or rpms), we encourage app maintainers to explicitly include relevant license files in the Flatpak itself for consistency and auditability.
End-of-life runtime transition: To focus our resources on maintaining high-quality, up-to-date runtimes, we'll be completing the removal of several end-of-life runtimes in January 2026. Apps using runtimes older than freedesktop-sdk 22.08, GNOME 45, KDE 5.15-22.08 or KDE 6.6 will be marked as EOL shortly. Once these older runtimes are removed, the apps will need to be updated to use a supported runtime to remain available on Flathub. While this won't affect existing app installations, after this date, new users will be unable to install these apps from Flathub until they're rebuilt against a current runtime. Flatpak manifests of any affected apps will remain on the Flathub GitHub organization to enable developers to update them at any time.
If your app currently targets an end-of-life runtime that did receive the backported license improvements, we still strongly encourage you to upgrade to a newer, supported runtime to benefit from ongoing security updates and platform improvements.
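If you're unsure which runtime an installed app targets, flatpak can tell you (again with org.example.App as a placeholder ID):

```
$ flatpak info --show-runtime org.example.App
```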
### Distributors

If you redistribute binaries from Flathub, such as pre-installed runtimes or apps, you should rebuild your distributed images (ISOs, containers, etc.) with the updated runtimes and apps from Flathub. You can verify that appropriate licenses are included with the Flatpaks in the runtime filesystem at /usr/share/licenses inside each runtime.
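A runtime can be spot-checked the same way as an app; for example, with the 24.08 freedesktop runtime (adjust the ref to whatever you ship):

```
$ flatpak run --command=sh org.freedesktop.Platform//24.08
# then, inside the sandbox:
$ ls /usr/share/licenses
```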
## Get in Touch

App developers, distributors, and community members are encouraged to connect with the team and other members of the community in our Discourse forum and Matrix chat room. If you are an app developer or distributor and have any questions or concerns, you may also reach out to us at admins@flathub.org.
## Thank You!

We are grateful to Jef Spaleta from Fedora for his care and confidentiality in bringing this to our attention and working with us collaboratively throughout the process. Special thanks to Boudhayan Bhattacharya (bbhtt) for his tireless work across Flathub, Flatpak, and freedesktop-sdk, on this as well as many other important areas. And thank you to Abderrahim Kitouni (akitouni), Adrian Vovk (AdrianVovk), Aleix Pol Gonzalez (apol), Bart Piotrowski (barthalion), Ben Cooksley (bcooksley), Javier Jardón (jjardon), Jordan Petridis (alatiera), Matthias Clasen (matthiasc), Rob McQueen (ramcq), Sebastian Wick (swick), Timothée Ravier (travier), and any others behind the scenes for their hard work and timely collaboration across multiple projects to deliver these improvements.
Our Linux app ecosystem is truly strongest when individuals from across companies and projects come together to collaborate and work towards shared goals. We look forward to continuing to work together to ensure app developers can easily ship their apps to users across all Linux distributions and desktop environments. ♥
GTK has been using SVG for symbolic icons since essentially forever. It hasn’t been a perfect relationship, though.
## Pre-History

For the longest time (all through the GTK 3 era, and until recently), we’ve used librsvg indirectly, through gdk-pixbuf, to obtain rendered icons, and then we used some pixel tricks to recolor the resulting image according to the theme.
This works, but it gives up on the defining feature of SVG: its scalability.
Once you’ve rasterized your icon at a given size, all you’re left with is pixels. In the GTK 3 era, this wasn’t a problem, but in GTK 4, we have a scene graph and fractional scaling, so we could do *much* better if we don’t rasterize early.
Unfortunately, librsvg’s API isn’t set up to let us easily translate SVG into our own render nodes. And its rust nature makes for an inconvenient dependency, so we held off on depending on it for a long time.
## History

Early this year, I grew tired of this situation, and decided to improve our story for icons, and symbolic ones in particular.
So I set out to see how hard it would be to parse the very limited subset of SVG used in symbolic icons myself. It turned out to be relatively easy. I quickly managed to parse 99% of the Adwaita symbolic icons, so I decided to merge this work for GTK 4.20.
There were some detours and complications along the way. Since my simple parser couldn’t handle 100% of Adwaita (let alone all of the SVGs out there), a fallback to a proper SVG parser was needed. So we added a librsvg dependency after all. Since our new Android backend has an even more difficult time with rust than our other backends, we needed to arrange for a non-rust librsvg branch to be used when necessary.
One thing that this hand-rolled SVG parser improved upon is that it allows stroking, in addition to filling. I documented the format for symbolic icons here.
## Starting over

A bit later, I was inspired by Apple’s SF Symbols work to look into how hard it would be to extend my SVG parser with a few custom attributes to enable dynamic strokes.
It turned out to be easy again. With a handful of attributes, I could create plausible-looking animations and transitions. And it was fun to play with. When I showed this work to Jakub and Lapo at GUADEC, they were intrigued, so I decided to keep pushing this forward, and it landed in early GTK 4.21, as GtkPathPaintable.
https://blogs.gnome.org/gtk/files/2025/10/path-anim5.webm

To make experimenting with this easier, I made a quick editor. It was invaluable to have Jakub as an early adopter play with the editor while I was improving the implementation. Some very good ideas came out of this rapid feedback cycle, for example dynamic stroke width.
You can get some impression of the new stroke-based icons Jakub has been working on here.
## Recent happenings

As summer was turning to fall, I felt that I should attempt to support SVG more completely, including grouping and animations. GTK’s rendering infrastructure has most of the pieces that are required for SVG after all: transforms, filters, clips, paths, gradients are all supported.
This was *not* easy.
But eventually, things started to fall into place. And this week, I’ve replaced GtkPathPaintable with GtkSvg, which is a GdkPaintable that supports SVG. At least, the subset of SVG that is most relevant for icons. And that includes animations.
https://blogs.gnome.org/gtk/files/2025/10/Screencast-From-2025-10-22-18-13-02.webm
This is still a subset of full SVG, but converting a few random lottie files to SVG animations gave me a decent success rate for getting things to display mostly ok.
The details are spelled out here.
## Summary

GTK 4.22 will natively support SVG, including SVG animations.
If you’d like to help improve this further, here are some suggestions.
If you would like to support the GNOME Foundation, whose infrastructure and hosting GTK relies on, please donate.
Time for another GNOME Crosswords release! This one highlights the features our interns built this past summer. We had three fabulous interns — two through GSoC and one through Outreachy. While this release really only has three big features — one from each — they were all fantastic.
Thanks go to my fellow GSoC mentors Federico and Tanmay. In addition, Tilda and the folks at Outreachy were extremely helpful. Mentorship is a lot of work, but it’s also super-rewarding. If you’re interested in participating as a mentor in the future and have any questions about the process, let me know. I’ll be happy to speak with you about them.
## Dictionary pipeline improvements

First, our Outreachy intern Nancy spent the summer improving the build pipeline that generates the internal dictionaries we use. These dictionaries provide autofill capabilities and definitions in the Editor, as well as near-instant completions for both the Editor and Player. The old pipeline was buggy and hard to maintain. Once we had cleaned it up, Nancy was able to use it to effortlessly produce a dictionary in her native tongue: Swahili.
A grid in Swahili

We have no distribution story yet, but it’s exciting that creating dictionaries in other languages is now so much easier. It opens the door to the Editor being more universally useful (and fulfills a core GNOME tenet).
You can read more details in Nancy’s final report.
## Word List

Victor did a ton of research for Crosswords, almost acting like a Product Manager. He installed every crossword editor he could find and did a competitive analysis, noting possible areas for improvement. One of the areas he flagged was the word list in our editor. This list suggests words that could be used in a given spot in the grid. We started with a simplistic implementation that listed every possible word in our dictionary that could fit. This approach — while fast — suggested a lot of dead words that would make the grid unsolvable. So he set about narrowing down that list.
New Word List showing possible options

It turns out there are a lot of tradeoffs to be made here (see Victor’s post). It’s possible to find a really good set of words, at the cost of a lot of computational power. A much simpler list is quick, but has dead words. In the end, we found a happy medium that lets us get results fast with a stable list across a clue. He’ll be blogging about this shortly.
Victor also cleaned up our development docs, and researched SAT-solver algorithms for the grid. He’s working on a lovely doc on the AC-3 algorithm, and we can use it to add additional functionality to the editor in the future.
## Printing

Toluwaleke implemented printing support for GNOME Crosswords.
This was a tour de force, and a phenomenal addition to the Crosswords codebase. When I proposed it as a GSoC project, I had no idea how much work it would involve. We already had code to produce an SVG of the grid — I thought we could just quickly add support for the clues and call it a day. Instead, we ended up going on a wild ride, resulting in a significantly stronger feature and code base than we had going in.
His blog has more detail and it’s really quite cool (go read it!). But from my perspective, we ended up with a flexible and fast rendering system that can be used in a lot more places. Take a look:
https://blogs.gnome.org/jrb/files/2025/10/output_video.webm

The resulting PDFs are really high quality — they seem to look better than some of the newspaper puzzles I’ve seen. We’ll keep tweaking them, as there are still a lot of improvements we’d like to add, such as taking the High Contrast / Large Text A11Y options into account. But it’s a tremendous basis for future work.
## Increased Polish

There were a few other small things that happened.
Now that these big changes have landed, it’s time to go back to working on the rest of the changes proposed for GNOME Circle.
Until next time, happy puzzling!
A few months ago, I introduced my GSoC project: Adding Printing Support to GNOME Crosswords. Since June, I’ve been working hard on it, and I’m happy to share that printing puzzles is finally possible!
## The Result

GNOME Crosswords now includes a Print option in its menu, which opens the system’s print dialog. After adjusting printer settings and page setup, the user is shown a preview dialog with a few crossword-specific options, such as ink-saving mode and whether (and how) to include the solution. The options are intentionally minimal, keeping the focus on a clean and straightforward printing experience.
Below is a short clip showing the feature in action:
The resulting file: output.pdf
Crosswords now also ships with a standalone command-line tool, ipuz2pdf, which converts any IPUZ puzzle file into a print-ready PDF. It offers a similarly minimal set of layout and crossword-specific options.
## The Process

Working on a feature of this scale came with plenty of challenges. Getting familiar with a large codebase took patience, and understanding how everything fit together often meant careful study and experimentation. Balancing ideas with the project timeline and navigating code reviews pushed me to grow both technically and collaboratively.
On the technical side, rendering and layout had their own hurdles. Handling text metrics, scaling, and coordinate transformations required a mix of technical knowledge, critical thinking, and experimentation. Even small visual glitches could lead to hours of debugging. One notably difficult part was implementing the box layout system that powers the dynamic print layout engine.
## The Lessons

This project taught me a lot about patience, focus, and iteration. I learned to approach large problems by breaking them into small, testable pieces, and to value clarity and simplicity in both code and design. Code reviews taught me to communicate ideas better, accept feedback gracefully, and appreciate different perspectives on problem-solving.
On the technical side, working with rendering and layout systems deepened my understanding of graphics programming. I also learned how small design choices can ripple through an entire codebase, and how careful abstraction and modularity can make complex systems easier to evolve.
Above all, I learned the value of collaboration, and that progress in open source often comes from many small, consistent improvements rather than big leaps.
## The Conclusion

In the end, I achieved all the goals set out for the project, and even more. It was a long and taxing journey, but absolutely worth it.
## The Gratitude

I’m deeply grateful to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance, patience, and support throughout this project. I’ve learned so much from working with them. I’m also grateful to the GNOME community and Google Summer of Code for making this opportunity possible and for creating such a welcoming environment for new contributors.
## What Comes After

No project is ever truly finished, and this one is no exception. There’s still plenty to be done, and some of it already has tracking issues. I plan to keep improving the printing system and related features in GNOME Crosswords.
I also hope to stay involved in the GNOME ecosystem and open-source development in general. I’m especially interested in projects that combine design, performance, and system-level programming. More importantly, I’m a recent CS graduate looking for a full-time role in those areas. If you have or know of any opportunities, please reach out to me.
Finally, I plan to write a couple of follow-up posts diving into interesting parts of the process in more detail. Stay tuned!
Thank you!

If you’re working with Laravel and using Laravel Mix to manage your CSS and JavaScript assets, you may have come across an error like this:
```
Spatie\LaravelIgnition\Exceptions\ViewException
Message: Unable to locate Mix file: /assets/vendor/css/rtl/core.css
```

Or in some cases:
```
Illuminate\Foundation\MixFileNotFoundException
Unable to locate Mix file: /assets/vendor/fonts/boxicons.css
```

This error can be frustrating, especially when your project works perfectly on one machine but fails on another. Let’s break down what’s happening and how to solve it.
## What Causes This Error?

Laravel Mix is a wrapper around Webpack, designed to compile your resources/ assets (CSS, JS, images) into the public/ directory. The mix() helper in Blade templates references these compiled assets using a special file: mix-manifest.json.
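For reference, a mix() call in a Blade template typically looks like this (a generic sketch; the path matches the error above rather than any particular theme):

```html
<!-- resolved at render time through public/mix-manifest.json -->
<link rel="stylesheet" href="{{ mix('assets/vendor/css/rtl/core.css') }}" />
```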
This error occurs when Laravel cannot find the compiled asset. Common reasons include:

- NPM dependencies were never installed on the machine
- The assets were never compiled, so the files don't exist in public/
- mix-manifest.json is missing or doesn't contain the requested path
- A Blade template references a file (such as an RTL stylesheet) that your build doesn't generate
- Stale view or config caches pointing at old asset paths
Here’s a step-by-step solution:
### 1. Install NPM dependencies

Make sure you have Node.js installed, then run:
```
npm install
```

### 2. Compile your assets

Run the Mix build script, as shown below; this will generate your CSS and JS files in the public folder and update mix-manifest.json.
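In a typical Laravel project, the Mix build scripts are defined in package.json (exact script names can vary slightly between Laravel versions):

```
npm run dev
# or, for minified production assets:
npm run production
```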
### 3. Check mix-manifest.json

Ensure the manifest contains the file Laravel is looking for:

```json
"/assets/vendor/css/rtl/core.css": "/assets/vendor/css/rtl/core.css"
```

### 4. Adjust Blade template paths

If you don’t use RTL, you can set:
```php
$configData['rtlSupport'] = '';
```

so the code doesn’t try to load /rtl/core.css unnecessarily.
### 5. Clear caches

Laravel may cache old views and configs. Clear them:
```
php artisan view:clear
php artisan config:clear
php artisan cache:clear
```

The “Unable to locate Mix file” error is not a bug in Laravel, but the result of missing compiled assets or misconfigured paths. Once you install the dependencies, compile your assets, verify the manifest, and clear stale caches, the error should be gone.