Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?
On the surface, it looks like a weird question. I really enjoy contributing and working with contributors, and when I was in college, I always thought... "Why do people ever step back? It is so fun!". It was the thing that brought a smile to my face and melted away any "stress". But now that I have graduated, things have taken a turn.
Now, when work pressure mounts, the little free time I get goes not into writing code but into some other hobby, learning something new, or spending time with family. Or just endless video scrolling and sleep.
This has left me on my lowest contribution streak, unable to work on all those cool things I imagined: reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues in the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.
It’s reached a point where I’m genuinely unsure how to balance things anymore. So, to everyone I may not have replied to, or who hasn't seen me around in a long time, here is an update: I'm still here, just stuck in a dilemma of how to return.
I believe I'm not the only one who faces this. After spending a long while guiding my juniors on how to contribute and study at the same time, and still make time for other things, I now find myself at the same crossroads. So, if anyone has any insights on how they manage their time, keep up their motivation, and juggle tasks, do let me know (akaushik [at] gnome [dot] org); I'd really appreciate any insights :)
One answer would probably be to take fewer things onto my plate?
Perhaps this is just a new phase of learning? Not about code, but about balance.
tl;dr: Flathub has improved tooling to make license compliance easier for developers. Distros should rebuild OS images with updated runtimes from Flathub; app developers should ensure they're using up-to-date runtimes and verify that licenses and copyright notices are properly included.
In early August, a concerned community member brought to our attention that copyright notices and license files were being omitted when software was bundled as Flatpaks and distributed via Flathub. This was a genuine oversight across multiple projects, and we're glad we've been able to take the opportunity to correct and improve this for runtimes and apps across the Flatpak ecosystem.
Over the past few months, we've been working to enhance our tooling and infrastructure to better support license compliance. With the support of the Flatpak, freedesktop-sdk, GNOME, and KDE teams, we've developed and deployed significant improvements that make it easier than ever for developers to ensure their applications properly include license and copyright notices.
What's New
In coordination with maintainers of the freedesktop-sdk, GNOME, and KDE runtimes, we've implemented enhanced license handling that automatically includes license and copyright notice files in the runtimes themselves, deduplicated to be as space-efficient as possible. This improvement has been applied to all supported freedesktop-sdk, GNOME, and KDE runtimes, plus backported to freedesktop-sdk 22.08 and newer, GNOME 45 and newer, KDE 5.15-22.08 and newer, and KDE 6.6 and newer. These updated runtimes cover over 90% of apps on Flathub and have already rolled out to users as regular Flatpak updates.
We've also worked with the Flatpak developers to add new functionality to flatpak-builder 1.4.5 that automatically recognizes and includes common license files. This enhancement, now deployed to the Flathub build service, helps ensure apps' own licenses as well as the licenses of any bundled libraries are retained and shipped to users along with the app itself.
These improvements represent an important milestone in the maturity of the Flatpak ecosystem, making license compliance easier and more automatic for the entire community.
Recommended Actions
App Developers
We encourage you to rebuild your apps with flatpak-builder 1.4.5 or newer to take advantage of the new automatic license detection. You can verify that license and copyright notices are properly included in your Flatpak's /app/share/licenses, both for your app and any included dependencies. In most cases, simply rebuilding your app will automatically include the necessary licenses, but you can also fine-tune which license files are included using the license-files key in your app's Flatpak manifest if needed.
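As a quick sanity check after rebuilding, you can list the license files that ended up inside an installed build of your app. This is only a sketch, and org.example.App is a placeholder for your own app ID:

# List the license and copyright notices bundled with an installed Flatpak app
# (org.example.App is a placeholder for your app ID)
flatpak run --command=ls org.example.App /app/share/licenses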
For apps with binary sources (e.g. debs or rpms), we encourage app maintainers to explicitly include relevant license files in the Flatpak itself for consistency and auditability.
End-of-life runtime transition: To focus our resources on maintaining high-quality, up-to-date runtimes, we'll be completing the removal of several end-of-life runtimes in January 2026. Apps using runtimes older than freedesktop-sdk 22.08, GNOME 45, KDE 5.15-22.08 or KDE 6.6 will be marked as EOL shortly. Once these older runtimes are removed, the apps will need to be updated to use a supported runtime to remain available on Flathub. While this won't affect existing app installations, after this date, new users will be unable to install these apps from Flathub until they're rebuilt against a current runtime. Flatpak manifests of any affected apps will remain on the Flathub GitHub organization to enable developers to update them at any time.
If your app currently targets an end-of-life runtime that did receive the backported license improvements, we still strongly encourage you to upgrade to a newer, supported runtime to benefit from ongoing security updates and platform improvements.
Distributors
If you redistribute binaries from Flathub, such as pre-installed runtimes or apps, you should rebuild your distributed images (ISOs, containers, etc.) with the updated runtimes and apps from Flathub. You can verify that appropriate licenses are included with the Flatpaks in the runtime filesystem at /usr/share/licenses inside each runtime.
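If you build images from a system-wide Flatpak installation, one way to spot-check a runtime is to look at its files on disk. The path below follows the standard system installation layout; the runtime ID, architecture, and branch are only examples:

# License files shipped inside an installed runtime (ID, arch, and branch are examples)
ls /var/lib/flatpak/runtime/org.gnome.Platform/x86_64/49/active/files/share/licenses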
Get in Touch
App developers, distributors, and community members are encouraged to connect with the team and other members of the community in our Discourse forum and Matrix chat room. If you are an app developer or distributor and have any questions or concerns, you may also reach out to us at admins@flathub.org.
Thank You!
We are grateful to Jef Spaleta from Fedora for his care and confidentiality in bringing this to our attention and working with us collaboratively throughout the process. Special thanks to Boudhayan Bhattacharya (bbhtt) for his tireless work across Flathub, Flatpak, and freedesktop-sdk, on this as well as many other important areas. And thank you to Abderrahim Kitouni (akitouni), Adrian Vovk (AdrianVovk), Aleix Pol Gonzalez (apol), Bart Piotrowski (barthalion), Ben Cooksley (bcooksley), Javier Jardón (jjardon), Jordan Petridis (alatiera), Matthias Clasen (matthiasc), Rob McQueen (ramcq), Sebastian Wick (swick), Timothée Ravier (travier), and any others behind the scenes for their hard work and timely collaboration across multiple projects to deliver these improvements.
Our Linux app ecosystem is truly strongest when individuals from across companies and projects come together to collaborate and work towards shared goals. We look forward to continuing to work together to ensure app developers can easily ship their apps to users across all Linux distributions and desktop environments. ♥
GTK has been using SVG for symbolic icons since essentially forever. It hasn’t been a perfect relationship, though.
Pre-History
For the longest time (all through the GTK 3 era, and until recently), we’ve used librsvg indirectly, through gdk-pixbuf, to obtain rendered icons, and then we used some pixel tricks to recolor the resulting image according to the theme.
This works, but it gives up on the defining feature of SVG: its scalability.
Once you’ve rasterized your icon at a given size, all you’re left with is pixels. In the GTK 3 era, this wasn’t a problem, but in GTK 4, we have a scene graph and fractional scaling, so we could do *much* better if we don’t rasterize early.
Unfortunately, librsvg’s API isn’t set up to let us easily translate SVG into our own render nodes. And its rust nature makes for an inconvenient dependency, so we held off on depending on it for a long time.
History
Early this year, I grew tired of this situation, and decided to improve our story for icons, and symbolic ones in particular.
So I set out to see how hard it would be to parse the very limited subset of SVG used in symbolic icons myself. It turned out to be relatively easy. I quickly managed to parse 99% of the Adwaita symbolic icons, so I decided to merge this work for GTK 4.20.
There were some detours and complications along the way. Since my simple parser couldn’t handle 100% of Adwaita (let alone all of the SVGs out there), a fallback to a proper SVG parser was needed. So we added a librsvg dependency after all. Since our new Android backend has an even more difficult time with rust than our other backends, we needed to arrange for a non-rust librsvg branch to be used when necessary.
One thing that this hand-rolled SVG parser improved upon is that it allows stroking, in addition to filling. I documented the format for symbolic icons here.
Starting over
A bit later, I was inspired by Apple’s SF Symbols work to look into how hard it would be to extend my SVG parser with a few custom attributes to enable dynamic strokes.
It turned out to be easy again. With a handful of attributes, I could create plausible-looking animations and transitions. And it was fun to play with. When I showed this work to Jakub and Lapo at GUADEC, they were intrigued, so I decided to keep pushing this forward, and it landed in early GTK 4.21, as GtkPathPaintable.
https://blogs.gnome.org/gtk/files/2025/10/path-anim5.webm
To make experimenting with this easier, I made a quick editor. It was invaluable to have Jakub as an early adopter play with the editor while I was improving the implementation. Some very good ideas came out of this rapid feedback cycle, for example dynamic stroke width.
You can get some impression of the new stroke-based icons Jakub has been working on here.
Recent happenings
As summer was turning to fall, I felt that I should attempt to support SVG more completely, including grouping and animations. GTK’s rendering infrastructure has most of the pieces that are required for SVG after all: transforms, filters, clips, paths, gradients are all supported.
This was *not* easy.
But eventually, things started to fall into place. And this week, I’ve replaced GtkPathPaintable with GtkSvg, which is a GdkPaintable that supports SVG. At least, the subset of SVG that is most relevant for icons. And that includes animations.
https://blogs.gnome.org/gtk/files/2025/10/Screencast-From-2025-10-22-18-13-02.webm
This is still a subset of full SVG, but converting a few random lottie files to SVG animations gave me a decent success rate for getting things to display mostly ok.
The details are spelled out here.
Summary
GTK 4.22 will natively support SVG, including SVG animations.
If you’d like to help improve this further, here are some suggestions.
If you would like to support the GNOME Foundation, whose infrastructure and hosting GTK relies on, please donate.
Time for another GNOME Crosswords release! This one highlights the features our interns did this past summer. We had three fabulous interns — two through GSoC and one through Outreachy. While this release really only has three big features — one from each — they were all fantastic.
Thanks go to my fellow GSoC mentors Federico and Tanmay. In addition, Tilda and the folks at Outreachy were extremely helpful. Mentorship is a lot of work, but it’s also super-rewarding. If you’re interested in participating as a mentor in the future and have any questions about the process, let me know. I’ll be happy to speak with you about them.
Dictionary pipeline improvements
First, our Outreachy intern Nancy spent the summer improving the build pipeline that generates the internal dictionaries we use. These dictionaries provide autofill capabilities and definitions in the Editor, as well as near-instant completions for both the Editor and the Player. The old pipeline was buggy and hard to maintain. Once we had cleaned it up, Nancy was able to use it to effortlessly produce a dictionary in her native tongue: Swahili.
A grid in Swahili
We have no distribution story yet, but it’s exciting to have made it so much easier to create dictionaries in other languages. It opens the door to the Editor being more universally useful (and fulfills a core GNOME tenet).
You can read more details in Nancy’s final report.
Word List
Victor did a ton of research for Crosswords, almost acting like a Product Manager. He installed every crossword editor he could find and did a competitive analysis, noting possible areas for improvement. One of the areas he flagged was the word list in our editor. This list suggests words that could be used in a given spot in the grid. We started with a simplistic implementation that listed every possible word in our dictionary that could fit. This approach — while fast — provided a lot of dead words that would make the grid unsolvable. So he set about trying to narrow down that list.
New Word List showing possible options
It turns out that there are a lot of tradeoffs to be made here (Victor’s post). It’s possible to find a really good set of words, at the cost of a lot of computational power. A much simpler list is quick but has dead words. In the end, we found a happy medium that let us get results fast and gave a stable list across a clue. He’ll be blogging about this shortly.
Victor also cleaned up our development docs and researched SAT-solving algorithms for the grid. He’s working on a lovely doc on the AC-3 algorithm, which we can use to add additional functionality to the editor in the future.
Printing
Toluwaleke implemented printing support for GNOME Crosswords.
This was a tour de force, and a phenomenal addition to the Crosswords codebase. When I proposed it for a GSoC project, I had no idea how much work this project could involve. We already had code to produce an svg of the grid — I thought that we could just quickly add support for the clues and call it a day. Instead, we ended up going on a wild ride resulting in a significantly stronger feature and code base than we had going in.
His blog has more detail and it’s really quite cool (go read it!). But from my perspective, we ended up with a flexible and fast rendering system that can be used in a lot more places. Take a look:
https://blogs.gnome.org/jrb/files/2025/10/output_video.webm
The resulting PDFs are really high quality — they seem to look better than some of the newspaper puzzles I’ve seen. We’ll keep tweaking them as there are still a lot of improvements we’d like to add, such as taking the High Contrast / Large Text A11Y options into account. But it’s a tremendous basis for future work.
Increased Polish
There were a few other small things that happened.
Now that these big changes have landed, it’s time to go back to working on the rest of the changes proposed for GNOME Circle.
Until next time, happy puzzling!
A few months ago, I introduced my GSoC project: Adding Printing Support to GNOME Crosswords. Since June, I’ve been working hard on it, and I’m happy to share that printing puzzles is finally possible!
The Result
GNOME Crosswords now includes a Print option in its menu, which opens the system’s print dialog. After adjusting printer settings and page setup, the user is shown a preview dialog with a few crossword-specific options, such as ink-saving mode and whether (and how) to include the solution. The options are intentionally minimal, keeping the focus on a clean and straightforward printing experience.
Below is a short clip showing the feature in action:
The resulting file: output.pdf
Crosswords now also ships with a standalone command-line tool, ipuz2pdf, which converts any IPUZ puzzle file into a print-ready PDF. It offers a similarly minimal set of layout and crossword-specific options.
The Process
Working on a feature of this scale came with plenty of challenges. Getting familiar with a large codebase took patience, and understanding how everything fit together often meant careful study and experimentation. Balancing ideas with the project timeline and navigating code reviews pushed me to grow both technically and collaboratively.
On the technical side, rendering and layout had their own hurdles. Handling text metrics, scaling, and coordinate transformations required a mix of technical knowledge, critical thinking, and experimentation. Even small visual glitches could lead to hours of debugging. One notably difficult part was implementing the box layout system that powers the dynamic print layout engine.
The Lessons
This project taught me a lot about patience, focus, and iteration. I learned to approach large problems by breaking them into small, testable pieces, and to value clarity and simplicity in both code and design. Code reviews taught me to communicate ideas better, accept feedback gracefully, and appreciate different perspectives on problem-solving.
On the technical side, working with rendering and layout systems deepened my understanding of graphics programming. I also learned how small design choices can ripple through an entire codebase, and how careful abstraction and modularity can make complex systems easier to evolve.
Above all, I learned the value of collaboration, and that progress in open source often comes from many small, consistent improvements rather than big leaps.
The Conclusion
In the end, I achieved all the goals set out for the project, and even more. It was a long and taxing journey, but absolutely worth it.
The Gratitude
I’m deeply grateful to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance, patience, and support throughout this project. I’ve learned so much from working with them. I’m also grateful to the GNOME community and Google Summer of Code for making this opportunity possible and for creating such a welcoming environment for new contributors.
What Comes After
No project is ever truly finished, and this one is no exception. There’s still plenty to be done, and some items already have tracking issues. I plan to keep improving the printing system and related features in GNOME Crosswords.
I also hope to stay involved in the GNOME ecosystem and open-source development in general. I’m especially interested in projects that combine design, performance, and system-level programming. More importantly, I’m a recent CS graduate looking for a full-time role in those areas. If you have or know of any opportunities, please reach out to me.
Finally, I plan to write a couple of follow-up posts diving into interesting parts of the process in more detail. Stay tuned!
Thank you!

If you’re working with Laravel and using Laravel Mix to manage your CSS and JavaScript assets, you may have come across an error like this:
Spatie\LaravelIgnition\Exceptions\ViewException
Message: Unable to locate Mix file: /assets/vendor/css/rtl/core.css
Or in some cases:
Illuminate\Foundation\MixFileNotFoundException
Unable to locate Mix file: /assets/vendor/fonts/boxicons.css
This error can be frustrating, especially when your project works perfectly on one machine but fails on another. Let’s break down what’s happening and how to solve it.
What Causes This Error?
Laravel Mix is a wrapper around Webpack, designed to compile your resources/ assets (CSS, JS, images) into the public/ directory. The mix() helper in Blade templates references these compiled assets using a special file: mix-manifest.json.
This error occurs when Laravel cannot find the compiled asset. Common reasons include assets that were never compiled (Node dependencies missing or the build never run), a mix-manifest.json that is missing the requested entry, a Blade template referencing a path that doesn’t exist in the manifest, or stale view and config caches.
Here’s a step-by-step solution:
1. Install NPM dependencies
Make sure you have Node.js installed, then run:
npm install

2. Compile your assets
This will generate your CSS and JS files in the public folder and update mix-manifest.json.
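In a standard Laravel Mix setup, the compile step is one of the npm scripts that ship with Laravel's default package.json (adjust the script names if your project changed them):

npm run dev          # compile assets for development
npm run production   # compile and minify assets for production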
3. Check mix-manifest.json
Ensure the manifest contains the file Laravel is looking for:
"/assets/vendor/css/rtl/core.css": "/assets/vendor/css/rtl/core.css" 4. Adjust Blade template pathsIf you don’t use RTL, you can set:
$configData['rtlSupport'] = '';
so the code doesn’t try to load /rtl/core.css unnecessarily.
5. Clear caches
Laravel may cache old views and configs. Clear them:
php artisan view:clear
php artisan config:clear
php artisan cache:clear

Pro Tips
The “Unable to locate Mix file” error is not a bug in Laravel, but a result of missing compiled assets or misconfigured paths. Once you install the dependencies, compile your assets, and clear any stale caches, the error should disappear.
As a follow-up to the Hackweek 24 project, I've continued working on the gnome-tour fork for openSUSE, with custom pages to replace the welcome application for openSUSE distributions.
GNOME Tour modifications
All the modifications are on top of upstream gnome-tour and stored in the openSUSE/gnome-tour repo.
The original opensuse-welcome is a Qt application used by all desktop environments, but it's more or less unmaintained and in need of a replacement, so we can use the gnome-tour fork as the default welcome app for every desktop without maintaining a custom app.
To build a minimal, desktop-agnostic opensuse-welcome application, I've modified gnome-tour to also generate a second binary containing just the last page.
The new opensuse-welcome rpm package is built as a subpackage of gnome-tour. This new application is minimal and doesn't have many requirements, but as it's a gtk4 application it requires gtk and libadwaita, and it also depends on gnome-tour-data for the app's resources.
To improve this welcome app we need to review the translations: I added three new pages to gnome-tour and those pages are not yet translated, so I should regenerate the .po files for all languages and upload them to openSUSE Weblate for translation.
Many security standards, such as CIS and STIG, require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage, and TPMs to enhance the protection of information confidentiality and/or integrity.
Traditionally, to satisfy such controls on portable devices such as laptops, one would use software-based full disk encryption: macOS FileVault, Windows BitLocker, or Linux cryptsetup LUKS2. In cases where FIPS cryptography is required, an additional burden is placed on these systems to run their kernels in FIPS mode.
The Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve the safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help protect secret keys and attest systems. For example, most software full disk encryption solutions can use a TCG TPM to store full disk encryption keys, providing passwordless, biometric, or PIN-based ways to unlock drives, as well as attesting that the system has not been modified or compromised whilst offline.
TCG Storage Security Subsystem Class: Opal is a set of specifications for features of data storage devices. The authors of and contributors to Opal are leading and well-trusted storage manufacturers such as Samsung, Western Digital, Seagate Technology, Dell, Google, Lenovo, IBM, and Kioxia, among others. One of the features the Opal specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect data, one can enter the UEFI firmware setup (BIOS) and set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key. If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions already package SEDutil.
Once a password is set, pre-boot authentication will ask for it on startup, prior to booting any operating system. This means the full disk is actually encrypted, including the UEFI ESP and all operating systems installed in case of dual- or multi-boot setups. It also prevents tampering with the ESP, UEFI bootloaders, and kernels, which with traditional software-based encryption often remain unencrypted and accessible. And it means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.
What about FIPS compliance? Well, the good news is that the majority of OPAL-compliant drives and/or security sub-chips do have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for module names containing "OPAL" or "NVMe", or the name of a hardware vendor, to locate FIPS certificates.
Are such drives widely available? Yes. For example, a common ThinkPad X1 Gen 11 ships with OPAL NVMe drives as standard, and they have FIPS certification too. Thus, it is likely these are already widespread in your hardware fleet. Use sedutil to check whether the MediaEncrypt and LockingSupported features are available.
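As a rough sketch (the device node is just an example, and the exact output varies by drive and sedutil version), the query subcommand prints the drive's locking feature set, including the MediaEncrypt and LockingSupported flags:

# Query OPAL/locking capabilities of an NVMe drive (device node is an example)
sudo sedutil-cli --query /dev/nvme0n1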
Well, this is great for laptops and physical servers, but you may ask: what about public or private cloud? Actually, more or less the same is already in place in both. On the CMVP website, all major clouds have their disk encryption hardware certified, and all of them always encrypt all virtual machines with FIPS-certified cryptography without an ability to opt out. One is, however, in full control of how the encryption keys are managed: cloud-provider-managed or self-managed (either with a cloud HSM or KMS, or bring your own / external). See the relevant encryption options and key management docs for GCP, Azure, and AWS. The key takeaway is that, without doing anything, VMs in public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.
What about private cloud? Most Linux-based private clouds ultimately use qemu, typically with qcow2 virtual disk images. Qemu supports user-space encryption of qcow2 disks; see this manpage. Such encryption covers the full virtual machine disk, including the bootloader and ESP, and it is handled entirely outside of the VM on the host, meaning the VM never has access to the disk encryption keys. Qemu implements this encryption entirely in userspace using gnutls, nettle, or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace, without a Linux kernel in FIPS mode. Higher-level APIs built on top of qemu also support qcow2 disk encryption, as in projects such as libvirt and OpenStack Cinder.
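For illustration only, a LUKS-encrypted qcow2 image can be created with qemu-img along these lines. The passphrase is passed inline just to keep the sketch short; real deployments should use a file- or fd-based secret, and libvirt or Cinder will manage this for you:

# Create a 20G qcow2 image whose contents qemu encrypts with LUKS on the host
qemu-img create -f qcow2 \
  --object secret,id=sec0,data=my-passphrase \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  encrypted.qcow2 20G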
If you read the docs carefully, you may notice that agent support is sometimes explicitly called out as unsupported, or not mentioned at all. Quite often, agents running inside the OS do not have enough observability to assess whether there is external encryption. It does mean that monitoring the above encryption options requires different approaches: for example, monitoring your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop/endpoint security agents, I do wish they would start gaining the capability to report OPAL SED availability and whether it is active.
What about using software encryption on top of the above solutions nonetheless? This is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need. If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the OPAL encryption of the ESP. For more targeted per-file or per-folder encryption, one can look into gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (an amazing tool, but one that has fallen behind in development and can lead to data loss).
All of the above mostly talks about cryptographic encryption, which only provides confidentiality, not data integrity. To protect integrity, one needs to choose how to maintain it. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using a filesystem without built-in integrity support, such as XFS or ext4, one can retrofit an integrity layer using dm-integrity (either standalone, or via LUKS2/cryptsetup's --integrity option).
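As a sketch of the dm-integrity options (the device node and mapping name are examples, and cryptsetup still marks authenticated encryption as experimental):

# LUKS2 authenticated encryption: dm-crypt with a dm-integrity layer underneath
sudo cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX
# Standalone dm-integrity without encryption, via integritysetup
sudo integritysetup format /dev/sdX
sudo integritysetup open /dev/sdX integrity0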
If one has a lot of estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular is probably the one from Thales, marketed as the CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM, vendor, hardware, and cloud specific or agnostic solutions.
I hope this crash-course guide piques your interest in learning about and discovering modern confidentiality and integrity solutions, and in re-affirming or changing your existing controls with respect to data protection at rest.
Full disk encryption, including the UEFI ESP /boot/efi, is now widely achievable by default both on bare-metal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.
The last two weeks I attended DebConf and DebCamp in Brest, France.
Usually, I like to do a more detailed write-up of DebConf, but I was already quite burnt out when I got here, so I’ll circle back to a few things that were important to me in later posts.
In the meantime, thanks to everyone who made this DebConf possible, whether you volunteered for one task or were part of the organisation team. Also a special thanks to the wonderful sponsors who made this entire event possible!
See you next year in Argentina!
Jellyfish, taken during a day trip to the aquarium.