Working on rebasing and finishing an "old" patch from Michael Gorse that implements accessibility in AbiWord. While the patch is a few years old, it rebases cleanly.
Pushed a lot of code modernisation to master, as well as fixes for various memory leaks and crashes on stable.
Released 3.0.7 (tag and flatpak build). 3.0.8 might come soon, as I'm backporting more crash fixes that already landed on master, at least until I have a 3.1.90 version ready for testing.
libopenraw
Finally I have Fujifilm X-Trans demosaicking, which needs more work as it is still a crude port of the dcraw C code. I also apply the white balance. Added a few new cameras and some benchmarks.
Finally I released 0.4.0-alpha.11.
Also I did update libopenraw-view which is my GUI for testing libopenraw. It now renders asynchronously.
Supporting cast
Some other various stuff.
glycin
The merge request to update libopenraw was merged. Thanks to Sophie!
gegl-rs
Updated gegl-rs to the new glib-rs.
lrcat-extractor
Released a new version after updating rusqlite.
Niepce
Ported to the latest gtk4-rs, hence the update to gegl-rs. Some small API changes in the object subclassing needed to be handled.
For the past two weeks, I’ve been debugging the JSON module. I hooked the JSON module into the codebase hierarchy by modifying valagsignalmodule.vala to extend the JSON module, which previously extended the GObject module. Running the test case called json.vala crashes the program.
In the beginning, I was having quite the difficulty trying to use gdb and coredumpctl to investigate the crash. I kept doing:
./autogen.sh --enable-debug
make

./configure --enable-debug
make

Then I’d run commands like:

coredumpctl gdb
coredumpctl info

It simply wasn’t working when I built it this way and then ran those coredumpctl commands. It wasn’t showing the debug symbols that I needed to be able to see the functions that were causing the program to crash. When I built Vala using GNOME Builder’s build button, it also didn’t work.
Lorenz, my mentor, helped me a lot with this issue. The way we fixed it was: first, I needed to build Vala by doing

./configure --enable-debug
make

Then I needed to run the test in the ‘build terminal’ in GNOME Builder:

compiler/valac --pkg json-glib-1.0 tests/annotations/json.vala
Then, in a regular terminal, I ran:
gdb compiler/.libs/lt-valac
(gdb) run --pkg json-glib-1.0 tests/annotations/json.vala
(gdb) bt

Once I ran these commands, I was finally able to see the functions causing the crash to happen.
#6  0x00007ffff7a1ef37 in vala_ccode_constant_construct_string (object_type=Python Exception <class 'gdb.error'>: value has been optimized out , _name=0x5555563ef1c0 "anything") at /home/alley/Desktop/vala/ccode/valaccodeconstant.vala:41
#7  0x00007ffff7a1f9f7 in vala_ccode_constant_new_string (_name=0x5555563ef1c0 "anything") at /home/alley/Desktop/vala/ccode/valaccodeconstant.vala:40
#8  0x00007ffff7a10918 in vala_json_module_json_builder (self=0x55555558c810) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:292
#9  0x00007ffff7a0f07d in vala_json_module_generate_class_to_json (self=0x55555558c810, cl=0x555557199120) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:191
#10 0x00007ffff7a127f4 in vala_json_module_real_generate_class_init (base=0x55555558c810, cl=0x555557199120) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:410

This snippet of the backtrace shows that the function vala_json_module_json_builder () on line 292 was the actual crash culprit.
After I got the debug symbols working, my git push decided not to work properly for a few days, so I was manually editing my changes on GitLab. My theory for git push not working is that KWalletManager had a problem, so the credentials stopped working, which hung the git push. Either way, I fixed it by switching my repo to SSH. I’ll investigate why the HTTP side of git stopped working, and I’ll fix it.
GUADEC
This was the first GUADEC I’ve ever watched. I watched it online, and I found the lightning talks to be my favourite part. They were all short, sweet, and to the point. It was also kind of comedic how fast people talked to fit everything they wanted to say into a short time span.
Some talks I particularly found interesting are:
The open source game Threadbare, by Endless Access. As a game programming graduate, it instantly caught my eye and held my attention. I’ll definitely be checking it out and trying to contribute to it.
Carlos Garnacho’s talk about GNOME on our TVs. The idea of GNOME expanding onto smart TVs opens up a whole new area of usability and user experience. It got me thinking about the specs of a regular TV set and how GNOME can adapt to and enhance that experience. The possibilities are exciting, and I’m curious to see how far this concept goes.
Overall, GUADEC made me feel more connected to the GNOME community even though I joined remotely. I’d love to have GUADEC hosted in Toronto :)
I have wanted to write this blog post for quite some time, but I have been unsure about the exact angle to take. I think I have found that angle now, where I root the post in a very tangible, concrete example.
So the reason I wanted to write this is that I feel there is a palpable skepticism and negativity towards AI in the Linux community, and I understand that there are societal implications that worry us all, like how deep fakes have the potential to upend a lot of things, from news dissemination to court proceedings, or how malign forces can use AI to drive narratives on social media, as if social media wasn't toxic enough as it is. But for open source developers like us in the Linux community there are also, I think, deep concerns about tooling that cuts so close to the heart of our community: writing code and being skilled at writing code. I hear and share all those concerns, but at the same time, having spent the last weeks using Claude.ai, I feel it is not something we can afford not to engage with. I know people have probably used a lot of different AI tools in the last year, some being more cute than useful, others being somewhat useful, and others being interesting improvements to your Google search, for instance. I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI engines are going to be capable of going forward.
So my initial test was writing a Python application for internal use at Red Hat, basically connecting to a variety of sources, pulling data, and putting together reports, typical management fare. I was impressed by how simple it was, though. I think most of us who have had to pull data from a new source know how painful it can be, with issues ranging from missing, outdated or hard-to-parse API documentation. I think a lot of us also then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me Python scripts that pulled that data right away. I still had to spend some time with it to fine-tune the data being pulled and ensure we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora's Bodhi; once I pointed it to the URL with the latest documentation, it figured out that it would be better to use the Bodhi client library to pull the data, and once it had that figured out it was clear sailing.
So coming off pretty impressed by that experience, I wanted to understand if Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. [Note: I should have checked the code better, but thanks to the people who pointed this out. I told the AI to use Vulkan, which it did, but not in the way I expected: I expected it to render the globe using Vulkan, but it instead decided to ensure GTK used its Vulkan backend. An important lesson in both prompt engineering and checking the code afterwards.] So I thought about what would be a good example of such an application, and I also figured it would be fun if I found something really old and asked Claude to help me bring it into the current age. Then I suddenly remembered xtraceroute, an old application originally written in GTK1 and OpenGL that shows your traceroute on a 3D globe.
Screenshot of the original Xtraceroute application
I went looking for it and found that, while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought, this is a great test case. I grabbed the code and fed it into Claude, asking Claude to give me a modern GTK4 version of this application using Vulkan. Ok, so how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature Claude has is that you can upload screenshots of your application and Claude will use them to help you debug. Thanks to that, I got a long list of screenshots showing how this application evolved over the course of the day I spent on it.
This screenshot shows Claude's first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, also adding a Meson build system. My prompt to create this was feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but OK as it is. The rendered globe leaves something to be desired though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.
Second attempt at updating this application
Anyway, I fed the screenshot of my first version back into Claude and said that the image was not a globe, it was missing the texture, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact we went from circle to square. This was also the point where I realized that I hadn't uploaded the textures to Claude, so I had to tell it to load earth.png from the local file repository.
Third attempt from Claude
Ok, so I fed my second screenshot back into Claude and pointed out that it was no globe; in fact it wasn't even a circle, and the texture was still missing. With me pointing out that it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said thank you for loading the texture, now do it on a globe.
This is the output of the 4th attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn't want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.
So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-into-four thing going on. Anyway, I could see it using Vulkan now and it was loading the texture, so I felt like we were making some decent forward progress. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude came back with Vulkan code that rendered it as a globe, though unfortunately I didn't end up screenshotting that.
So with the working globe now in place, I wanted to bring in the day/night cycle from the original application. So I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to the earth at the current time, so that it could overlay the texture in the right location. As you can see, Claude did a decent job of it, although the colors were broken.
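As an aside, the sun-position calculation behind that overlay is simple enough to sketch. This is my own rough approximation for illustration, not the code Claude generated; it ignores the equation of time, so it can be off by a couple of degrees:

import math
from datetime import datetime, timezone

def subsolar_point(now=None):
    """Return (latitude, longitude) in degrees where the sun is directly overhead."""
    now = now or datetime.now(timezone.utc)
    day_of_year = now.timetuple().tm_yday
    # Solar declination: roughly -23.44 degrees at the December solstice.
    declination = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Subsolar longitude: the sun crosses 15 degrees of longitude per hour,
    # sitting over the Greenwich meridian around 12:00 UTC.
    utc_hours = now.hour + now.minute / 60.0 + now.second / 3600.0
    longitude = 180.0 - 15.0 * utc_hours
    longitude = (longitude + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)
    return declination, longitude

lat, lon = subsolar_point()
print(f"Sun is overhead near {lat:.1f}, {lon:.1f}")

Feed that latitude/longitude pair into the day/night blend and the terminator lands in roughly the right place.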
So I kept fighting with the color for a bit. Claude could see it was rendering it brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things; for instance, I realized that when I tried to spin the globe it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also had a few iterations trying to get the scaling and mouse interaction behaving correctly.
At this point I had probably worked on this for 4-5 hours; the globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default, Claude had just put in code to render some small dots at the hop points, not to draw the lines. The old method for getting the geocoordinates also no longer worked, so I asked Claude to help me find some current services, which it did, and once I picked one it gave me, on the first try, code that was able to request the geolocation of the IP addresses it got back. To polish it up, I also asked Claude to make sure we drew the lines following the globe's curvature instead of just drawing straight lines.
Final version of the updated Xtraceroute application
It mostly works now, but I did realize why I always thought this was a fun idea that is less interesting in practice: you often don't get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt that I had proven that, with a day's work, Claude was able to help me bring this old GTK application into the modern world.
So I am not going to argue that Xtraceroute is an important application that deserved to be saved. In fact, while I feel the current version works and proves my point, I also lost the motivation to polish it up due to the limitations of tracerouting, but the code is available for anyone who finds it worthwhile.
But this wasn’t really about Xtraceroute, what I wanted to show here is how someone lacking C and Vulkan development skills can actually use a tool like Claude to put together a working application even one using more advanced stuff like Vulkan, which I know many more than me would feel daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file and so on. For the icons I ended up using Gemini as Claude do not do image generation at this point, although it was able to take a png file and create a SVG version of it (although not a perfect likeness to the original png).
Another thing I want to say is that, the way I think about this, it is not that it makes coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want, and that it does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude's initial thought was to bring in WebKit to do the rendering. That would have worked, but it would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler instead, something Claude agreed was a much better solution. The bigger the codebase, the harder it also becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want, in the programming language you want, and then work on incorporating that into your big application.
The other question here, of course, in terms of open source, is: how should contributors and projects deal with this? I know there are projects where AI-generated CVE reports or patches are drowning them, and that helps nobody. But if we see AI as a developer's tool, and accept that the developer using the tool is responsible for the code generated, then I think that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in, and by that I don't mean just verifying the functionality it provides, but that the code is clean and readable and follows the coding standards of said upstream project. Maintainers, on the other hand, can use AI to help them review and evaluate patches quicker, so this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they write, so this is another area where open source patch contributions can improve, by improving test coverage for the code.
I also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project, a constant challenge for extension developers has been keeping their extensions up to date, and I do believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So maybe having a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review of the update might help reduce the load on people doing code reviews on extensions and help flag problematic ones.
I know that for a lot of cases and situations, uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that it relies on having the model run locally on your own system, so you don't send your code or data off somewhere else. This of course has great advantages in terms of improving privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, are running on hardware with vast resources in terms of computing power and memory. Most of us do not have those kinds of capabilities available at home, so the model size and performance will be significantly lower.

So at the moment, if you are looking for a great open source tool to use with VS Code to do things like code completion, I recommend giving Granite.code a look. If you, on the other hand, want to do something like I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe them to be the best at it at the moment, but also because they are a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so the gap between what you can get with a local model on your laptop and the big cloud-hosted models should at least be smaller than it is today. There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a midsize model, and then you connect to that over your LAN. I know IBM is looking at that model for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.
Also, the AI space is moving blazingly fast; if you are reading this six months from now, I am sure the capabilities of online and local models will already have changed drastically.
So to all my friends in the Linux community: I ask you to take a look at AI and what it can do, and then let's work together on improving it, not just in terms of capabilities, but also by trying to figure out things like the societal challenges around it and the sustainability concerns that I know a lot of us have.
What's next for this code
As I mentioned, while I felt I got it to a point where I proved to myself that it worked, I am not planning on working on it any more. But I did make a cute little application for internal use that shows a spinning globe with all the global Red Hat offices showing up as little red lights, and which pulls in Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.
Red Hat Offices Globe and news.
I recently found, under the rain, next to a book swap box, a pile of 90's “software magazines”, which I spent my evenings cleaning, drying, and sorting in the days afterwards.
Magazine cover CDs with nary a magazine
Those magazines are a peculiar thing in France, using the mechanism of “Commission paritaire des publications et des agences de presse” or “Commission paritaire” for short. This structure exists to assess whether a magazine can benefit from state subsidies for the written press (whether on paper at the time, and also the internet nowadays), which include a reduced VAT charge (2.1% instead of 20%), reduced postal rates, and tax exemptions.
In the 90s, this was used by Diamond Editions[1] (a publisher related to tech shop Pearl, which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.
If you were to visit a French newsagent nowadays, you would be able to find other examples of this: magazines bundled with music CDs, DVDs or Blu-rays, or even toys or collectibles. Some publishers (including the infamous and now shuttered Éditions Atlas) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that “magazine” subscription-only, with each issue being increasingly more expensive (article from a consumer protection association).
Other publishers have followed suit.
I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as “written press”.
To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert, BestSellerGames and StratéJ. I remember me or my dad buying a few of those: an older but legit and complete version of ClarisWorks, CorelDraw, or a talkie version of a LucasArts point'n'click was certainly a more interesting proposition than a cut-down warez version full of viruses when budget was tight.
3 of the magazines I managed to rescue from the rain
You might also be interested in the UK “covertape wars”.
Don't stress the technique
This brings us back to today: while the magazines are still waiting to be scanned, I tried to get a wee bit organised and started digitising the CDs.
Some of them have printing that covers the whole of the CD, but a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source keeps moving with the sensor, and what you'll be scanning is the sensor's reflection on the CD.
My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.
I tried for a while to use GIMP perspective tools, and “Multimedia” Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing; I just had to have those reference points.
So I came up with a simple "deskew" template, which you can print yourself, although you could probably achieve similar results with grid paper.
My janky setup
The resulting picture
After opening your photo with Darktable and selecting the “darkroom” tab, go to the “rotate and perspective” tool, select the “manually defined rectangle” structure, and adjust the rectangle to match the centers of the 4 deskewing targets. Then click on “horizontal/vertical fit”. This will give you a squished CD; don't worry, just select the “specific” lens model and voilà.
Tools at the ready
Targets acquired
You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.
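If you would rather script that last GIMP step, here is a small Pillow sketch (my own, not part of the workflow described above) that masks out everything outside the disc, centre hole included, assuming the image is already deskewed and the disc is centred; the file names are just placeholders:

from PIL import Image, ImageDraw

def cut_out_disc(path_in, path_out, hole_ratio=0.125):
    """Keep only the disc, punching out the centre hole; everything else becomes transparent."""
    img = Image.open(path_in).convert("RGBA")
    w, h = img.size
    cx, cy, r = w / 2, h / 2, min(w, h) / 2
    mask = Image.new("L", (w, h), 0)
    draw = ImageDraw.Draw(mask)
    # Keep the disc...
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill=255)
    # ...and punch out the centre hole (15 mm hole on a 120 mm disc, so 0.125 of the radius).
    hole = r * hole_ratio
    draw.ellipse([cx - hole, cy - hole, cx + hole, cy + hole], fill=0)
    img.putalpha(mask)
    img.save(path_out)

cut_out_disc("cd-deskewed.png", "cd-cropped.png")

It does roughly what the ellipse selection tool does, minus the manual fiddling, though you still need GIMP for straightening the writing.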
And we're done!
[1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.
Greetings readers of the future from my favourite open technology event of the year. I am hanging out with the people who develop the GNOME platform talking about interesting stuff.
Being realistic, I won’t have time to make a readable writeup of the event. So I’m going to set myself a challenge: how much can I write up of the event so far, in 15 minutes?
Let’s go!
Conversations and knowledge
Conferences involve a series of talks, usually monologues on different topics, with slides and demos. A good talk leads to multi-way conversations.
One thing I love about open source is: it encourages you to understand how things work. Big tech companies want you to understand nothing about your devices beyond how to put in your credit card details and send them money. Sharing knowledge is cool, though. If you know how things work then you can fix them yourself.
Structures
Last year, I also attended the conference and was left with a big question for the GNOME project: “What is our story?” (Inspired by an excellent keynote from Ryan Sipes about the Thunderbird email app, and how it’s supported by donations).
We didn’t answer that directly, but I have some new thoughts.
Open source desktops are more popular than ever. Apparently we have like 5% of the desktop market share now. Big tech firms are nowadays run as huge piles of cash, whose story is that they need to make more cash, in order to give it to shareholders, so that one day you can, allegedly, have a pension. Their main goal isn’t to make computers do interesting things. The modern for-profit corporation is a super complex institution, with great power, which is often abused.
Open communities like GNOME are an antidote to that. With way fewer people, they nevertheless manage to produce better software in many cases, but in a way that’s demanding, fun, chaotic, mostly leaderless and which frequently burns out volunteers who contribute.
Is the GNOME project’s goal to make computers do interesting things? For me, the most interesting part of the conference so far was the focus on project structure. I think we learned some things about how independent, non-profit communities can work, and how they can fail, and how we can make things better.
In a world where political structures are being heavily tested and, in many cases, are crumbling, we would do well to talk more about structures, and to introspect a bit more on what works and what doesn’t. And to highlight the amazing work that the GNOME Foundation’s many volunteer directors have achieved over the last 30 years to create an institution that still functions today, and in many ways functions a lot better than organizations with significantly more resources.
Relevant talks
Emmanuele Bassi tried, in a friendly way, to set fire to long-standing structures around how the GNOME community agrees and disagrees on changes to the platform, based on ideas from other successful projects driven by independent, non-profit communities, such as the Rust and Python programming languages.
Part of this idea is to create well-defined teams of people who collaborate on different parts of the GNOME platform.
I’ve been contributing to GNOME in different ways for a loooong time, partly due to my day job, where I sometimes work with the technology stack, and partly because it’s a great group of people: we get to meet around the world once a year and make software that’s a little more independent from the excesses and the exploitation of modern capitalism, or technofeudalism.
And I think it’s going to be really helpful to organize my contributions according to a team structure with a defined form.
Search
I really hope we’ll have a search team.
I don’t have much news about search. GNOME’s indexer (localsearch) might start indexing the whole home directory soon. Carlos Garnacho continues to heroically make it work really well.
QA / Testing / Developer Experience
I did a talk at the conference (and half of another one with Martín Abente Lahaye) about end-to-end testing using openQA.
The talks were pretty successful; they led to some interesting conversations with new people. I hope we’ll continue to grow the Linux QA call, keep these conversations going, and try to share knowledge and create better structures so that paid QA engineers who are testing products built with GNOME can collaborate on testing upstream.
Freeform notes
I’m 8 minutes over time already, so the rest of this will be freeform notes from my notepad.
Live-coding streams aren’t something I watch or create. It’s an interesting way to share knowledge with the new generation of people who have grown up with internet videos as a primary knowledge source. I don’t have age stats for this blog, but I’m curious how many readers under 30 have read this far down. (Leave a comment if you read this and prove me wrong! :-)
systemd-sysexts for development are going to catch on.
There should be karaoke every year.
Fedora Silverblue isn’t actively developed at the moment. bootc is something to keep an eye on.
GNOME Shell Extensions are really popular and are a good “gateway drug” to get newcomers involved. Nobody figured out a good automated testing story for these yet. I wonder if there’s a QA project there? I wonder if there’s a low-cost way to allow extension developers to test extensions?
Legacy code is “code without tests”. I’m not sure I agree with that.
“Toolkits are transient, apps are forever”. That’s spot-on.
There is a spectrum between being a user and a developer. It’s not a black-and-white distinction.
BuildStream is still difficult to learn and the documentation isn’t a helpful getting started guide for newcomers.
We need more live demos of accessibility tools. I still don’t know how you use the screen reader. I’d like to have the computer read to me.
That’s it for now. It took 34 minutes to empty my brain into my blog, more than planned, but a necessary step. Hope some of it was interesting. See you soon!
I’m on the Octopus Agile electricity tariff, where the price changes every half hour based on wholesale costs. This is great for saving money and using less carbon intensive energy, provided you can shift your heavy usage to cheaper times. With a family that insists on eating at a normal hour, that mostly means scheduling the dishwasher and washing machine.
The snag was not having an easy way to see upcoming prices on my Linux laptop. To scratch that itch, I built a small GTK app: Octopus Agile Energy. You can use it yourself if you’re in the UK and have this electricity tariff. Either install it directly from Flathub or download the source code and ‘press play’ in GNOME Builder. The app is heavily inspired by the excellent Octopus Compare for mobile but I stripped the concept back to a single job: what’s the price now and for the next 24 hours? This felt right for a simple desktop utility and was achievable with a bit of JSON parsing and some hand waving.
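To give a flavour of the JSON parsing involved, here is a minimal sketch of the idea (my own illustration, not the app’s actual code) that fetches the upcoming half-hourly Agile rates from the public Octopus API and picks the cheapest slot. The product and tariff codes below are placeholders; the real ones depend on your region and tariff version:

import requests
from datetime import datetime, timezone

PRODUCT = "AGILE-24-10-01"          # example product code, check your own account
TARIFF = "E-1R-AGILE-24-10-01-C"    # example regional tariff code
URL = (f"https://api.octopus.energy/v1/products/{PRODUCT}"
       f"/electricity-tariffs/{TARIFF}/standard-unit-rates/")

def upcoming_rates():
    """Return (start_time, pence_per_kWh_inc_vat) for half-hour slots from now onwards."""
    results = requests.get(URL, timeout=10).json()["results"]
    now = datetime.now(timezone.utc)
    rates = []
    for slot in results:
        start = datetime.fromisoformat(slot["valid_from"].replace("Z", "+00:00"))
        if start >= now:
            rates.append((start, slot["value_inc_vat"]))
    return sorted(rates)

rates = upcoming_rates()
if rates:
    start, price = min(rates, key=lambda r: r[1])
    print(f"Cheapest upcoming slot starts at {start:%H:%M} UTC: {price:.2f}p/kWh")

The app itself obviously does more (caching, charting, and a proper GTK UI), but the data wrangling really is about this simple.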
I wrote a good chunk of the Python for it with the gemini-cli, which was a pleasant surprise. My workflow was running Gemini in a Toolbx container, building on my Silverblue desktop with GNOME Builder, and manually feeding back any errors. I kept myself in the loop, taking my own screenshots of visual issues rather than letting the model run completely free and using integrations like gnome-mcp-server to inspect itself.
It’s genuinely fun to make apps with GTK 4, libadwaita, and Python. The modern stack has a much lower barrier to entry than the GTK-based frameworks I’ve worked on in the past. And while I have my reservations about cloud-hosted AI, using this kind of technology feels like a step towards giving users more control over their computing, not less. Of course, the 25 years of experience I have in software development helped bridge the gap between a semi-working prototype, which only served one specific pricing configuration, didn’t cache anything and was constantly re-rendering, and an actual app. The AI isn’t quite there yet, but the potential is there, and a locally hosted system by and for the free software ecosystem would be super handy.
I hope the app is useful. Whilst I may well make some tweaks or changes, this does exactly what I want, and I’d encourage anyone interested to fork the code and build something that makes them happy.
As of July 10, 2025, all flavors of Ubuntu 24.10, including Ubuntu Studio 24.10, codenamed “Oracular Oriole”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.
If you have not already done so, please upgrade to Ubuntu Studio 25.04 via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.
No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.
Regular Ubuntu releases, meaning those between the Long-Term Support releases, are supported for 9 months; users are expected to upgrade after every release, with a 3-month buffer following each new release.
Long-Term Support releases are identified by an even-numbered year-of-release and a month-of-release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors (unlike Desktop and Server, which are supported for five years) are supported for three years, meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.
Another post in what is slowly becoming a series, after describing how to make a Discord bot with PHP; today we're looking at how to make a Discord activity the same way.
An activity is simpler than a bot; Discord activities are basically a web page which loads in an iframe, and can do what it likes in there. You're supposed to use them for games and the like, but I suspect that it might be useful to do quite a few bot-like tasks with activities instead; they take up more of your screen while you're using them, but it's much, much easier to create a user-friendly experience with an activity than it is with a bot. The user interface for bots tends to look a lot like the command line, which appeals to nerds, but having to type !mybot -opt 1 -opt 2 is incomprehensible gibberish to real people. Build a little web UI, you know it makes sense.
Anyway, I have not yet actually published one of these activities, and I suspect that there is a whole bunch of complexity around that which I'm not going to get into yet. So this will get you up and running with a Discord activity that you can test, yourself. Making it available to others is step 2: keep an eye out for a post on that.
There are lots of "frameworks" out there for building Discord activities, most of which are all about "use React!" and "have this complicated build environment!" and "deploy a node.js server!", when all you actually need is an SPA web page1, a JS library, a small PHP file, and that's it. No build step required, no deploying a node.js server, just host it in any web space that does PHP (i.e., all of them). Keep it simple, folks. Much nicer.
Step 1: set up a Discord app
To have an activity, it's gotta be tied to a Discord app. Get one of these as follows:
And this will then launch your activity in a window in your Discord app. It won't do anything yet because you haven't written it, but it's now loading.
Step 2: write an activity
You will see that in the middle of this, we call token.php to get an access token from the code that discordSdk.commands.authorize gives you. While the URL is /.proxy/token.php, that's just a token.php file right next to index.php; the .proxy stuff is because Discord puts all your requests through their proxy, which is OK. So you need this file to exist. Following the Discord instructions for authenticating users with OAuth, it should look something like this:
<?php
require_once("secrets.php");
$postdata = http_build_query(
    array(
        "client_id" => $clientid,
        "client_secret" => $clientsecret,
        "grant_type" => "authorization_code",
        "code" => $_GET["code"]
    )
);
$opts = array('http' => array(
    'method' => 'POST',
    'header' => [
        'Content-Type: application/x-www-form-urlencoded',
        'User-Agent: mybot/1.00'
    ],
    'content' => $postdata,
    'ignore_errors' => true
));
$context = stream_context_create($opts);
$result_json = file_get_contents('https://discord.com/api/oauth2/token', false, $context);
if ($result_json == FALSE) {
    echo json_encode(array("error"=>"no response"));
    die();
}
$result = json_decode($result_json, true);
if (!array_key_exists("access_token", $result)) {
    error_log("Got JSON response from /token without access_token $result_json");
    echo json_encode(array("error"=>"no token"));
    die();
}
$access_token = $result["access_token"];
echo json_encode(array("access_token" => $access_token));

And... that's all. At this point, if you Launch your activity from Discord, it should load, and should work out who the running user is (and which channel and server they're in) and that's pretty much all you need. Hopefully that's a relatively simple way to get started.
So, Jake Archibald wrote that we should "give footnotes the boot", and... I do not wholly agree. So, here are some arguments against, or at least perpendicular to. Whether this is in grateful thanks of or cold-eyed revenge about him making me drink a limoncello and Red Bull last week can remain a mystery.
Commentary about footnotes on the web tends to boil down into two categories: that they're foot, and that they're notes. Everybody1 agrees that being foot is a problem. Having a meaningless little symbol in some text which you then have to scroll down to the end of a document to understand is stupid. But, and here's the point, nobody does this. Unless a document on the web was straight up machine-converted from its prior life as a printed thing, any "footnotes" therein will have had some effort made to conceptually locate the content of the footnote inline with the text that it's footnoting. That might be a link which jumps you down to the bottom, or it might be placed at the side, or it might appear inline when clicked on, or it might appear in a popover, but the content of a "footnote" can be reached without your thread of attention being diverted from the point where you were previously2.
He's right about the numbers3 being meaningless, though, and that they're bad link text; the number "3" gives no indication of what's hidden behind it, and the analogy with "click here" as link text is a good one. We'll come back to this, but it is a correct objection.
What is a footnote, anyway?The issue with footnotes being set off this way (that is: that they're notes) isn't, though, that it's bad (which it is), it's that the alternatives are worse, at least in some situations. A footnote is an extra bit of information which is relevant to what you're reading, but not important enough that you need to read it right now. That might be because it's explanatory (that is: it expands and enlarges on the main point being made, but isn't directly required), or because it's a reference (a citation, or a link out to where this information was found so it can be looked up later and to prove that the author didn't just make this up), or because it's commentary (where you don't want to disrupt the text that's written with additions inline, maybe because you didn't write it). Or, and this is important, because it's funnier to set it off like this. A footnote used this way is like the voice of the narrator in The Perils of Penelope Pitstop being funny about the situation. Look, I'll choose a random book from my bookshelf4, Reaper Man by Terry Pratchett.
This is done because it's funny. Alternatives... would not be funny.5
If this read:
Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge. (It would say, for example, that you would shortly undergo a painful bowel movement.) She could have a revelation in a panful of frying bacon.

then it's too distracting, isn't it? That's giving the thing too much prominence; it derails the point and then you have to get back on board after reading it. Similarly with making it a long note via <details> or via making it <section role="aside">, and Jake does make the point that that's for longer notes.
Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge.
Note
It would say, for example, that you would shortly undergo a painful bowel movement.

She could have a revelation in a panful of frying bacon.
Now, admittedly, half the reason Pratchett's footnotes are funny is because they're imitating the academic use. But the other half is that there is a place for that "voice of the narrator" to make snarky asides, and we don't really have a better way to do it.
Sometimes the parenthesis is the best way to do it. Look at the explanations of "explanatory", "reference", and "commentary" in the paragraph above about what a footnote is. They needed to be inline; the definition of what I mean by "explanatory" should be read along with the word, and you need to understand my definition to understand why I think it's important. It's directly relevant. So it's inline; you must not proceed without having read it. It's not a footnote. But that's not always the case; sometimes you want to expand on what's been written without requiring the reader to read that expansion in order to proceed. It's a help; an addition; something relevant but not too relevant. (I think this is behind the convention that footnotes are in smaller text, personally; it's a typographic convention that this represents the niggling or snarky or helpful "voice in your head", annotating the ongoing conversation. But I haven't backed this up with research or anything.)
What's the alternative?
See, this is the point. Assume for the moment that I'm right6 and that there is some need for this type of annotation -- something which is important enough to be referenced here but not important enough that you must read it to proceed. How do we represent that in a document?
Jake's approaches are all reasonable in some situations. A note section (a "sidebar", I think newspaper people would call it?) works well for long clarifying paragraphs, little biographies of a person you've referenced, or whatever. If that content is less obviously relevant then hiding it behind a collapsed revealer triangle is even better. Short stuff which is that smidge more relevant gets promoted to be entirely inline and put in brackets. Stuff which is entirely reference material (citations, for example) doesn't really need to be in text in the document at all; don't footnote your point and then make a citation which links to the source, just link the text you wrote directly to the source. That certainly is a legacy of print media. There are annoying problems with most of the alternatives (a <details> can't go in a <p> even if inline, which is massively infuriating; sidenotes are great on wide screens but you still need to solve this problem on narrow, so they can't be the answer alone.) You can even put the footnote text in a tooltip as well, which helps people with mouse pointers or (maybe) keyboard navigation, and is what I do right here on this site.
But... if you've got a point which isn't important enough to be inline and isn't long enough to get its own box off to the side, then it's gotta go somewhere, and if that somewhere isn't "right there inline" then it's gotta be somewhere else, and... that's what a footnote is, right? Some text elsewhere that you link to.
We can certainly take advantage of being a digital document to display the annotation inline if the user chooses to (by clicking on it or similar), or to show a popover (which paper can't do). But if the text isn't displayed to you up front, then you have to click on something to show it, and that thing you click on must not itself be distracting. That means the thing you click on must be small, and not contentful. Numbers (or little symbols) are not too bad an approach, in that light. The technical issues here are dispensed with easily enough, as Lea Verou points out: yes, put a bigger hit target around your little undistracting numbers so they're not too hard to tap on, that's important.
But as Lea goes on to say, and Jake mentioned above... how do we deal with the idea that "3" needs to be both "small and undistracting" but also "give context so it's not just a meaningless number"? This is a real problem; pretty much by definition, if your "here is something which will show you extra information" marker gives you context about what that extra information is, then it's long enough that you actually have to read it to understand the context, and therefore it's distracting.7 This isn't really a circle that can be squared: these two requirements are in opposition, and so a compromise is needed.
Lea makes the same point with "How to provide context without increasing prominence? Wrapping part of the text with a link could be a better anchor, but then how to distinguish from actual links? Perhaps we need a convention." And I agree. I think we need a convention for this. But... I think we've already got a convention, no? A little superscript number or symbol means "this is a marker for additional information, which you need to interact with8 to get that additional information". Is it a perfect convention? No: the numbers are semantically meaningless. Is there a better convention? I'm not sure there is.
An end on't
So, Jake's right: a whole bunch of things that are currently presented on the web as "here's a little (maybe clickable) number, click it to jump to the end of the document to read a thing" could be presented much better with a little thought. We web authors could do better at this. But should footnotes go away? I don't think so. Once all the cases of things that should be done better are done better, there'll still be some left. I don't hate footnotes. I do hate limoncello and Red Bull, though.
So, I’ve been in the job market for a bit over a year. I was part of a layoff cycle at my last company, and finding a new gig has been difficult. I haven’t been able to find something as of yet, but it’s been a learning curve. The market is not what it was in the last couple of years. With AI in the mix, lots of roles have been eliminated, or have shifted towards places where human intervention is needed to interpret or verify the data AI is producing. Job hunting is a job in and of itself, and may even take up as much time as a 9-to-5 role. I know a lot of people who have gone through the same process as myself, and I wanted to share some of the insights and tips from what I’ve learned throughout the last year.
Leveraging your network
First, and I think most important, is to understand that there are a lot of great people around that you might have worked with. You can always ask for recommendations, touch base, or even have a small chat to see how things are going on their end. Conversations can be very refreshing, and can help you get a new perspective on how the industries are shifting, where you might want to learn new skills, or how to improve your positioning in the market. Folks can ask around and see if there are additional positions where you might be a good fit, and it’s always good to have a helping hand (or a few). At the end of the day, these folks are your own community. I’ve gotten roles in the past by being referred, and these connections have been critical to my understanding of how different businesses may approach the same problem, or even of how to solve internal conflicts. So, reach out to people you know!
Understanding the market
Like I mentioned in the opening paragraph, the market is constantly evolving. AI has taken on a very solid role nowadays, and lots of companies ask about how you’ve used AI recently. Part of understanding the market is understanding the bleeding-edge tools that are used to improve workflows and day-to-day efficiency. Research the tools that are coming up and that are shaping the market.
To give you an example: haven’t tried AI yet? Give it a spin, even for simple questions. Understand where it works, where it fails, and how you, as a human, can make it work for you. Get a sense of the pitfalls, and of where human intervention is needed to interpret or verify the data that comes out of it. Like one of my former managers said, “trust, but verify”. Or you might even get to the point of not trusting the data, and sharing that as a story!
Apply thoughtfully
Someone gave me the recommendation to apply to everything I see where I “could be a fit”. While this might have its upsides, you might also end up in situations where you are not actually a fit, or where you don’t know the company and what it does. Always take the time, at least a few minutes, to understand the company you’re applying to, research their values, and see how they align with yours. Read about the product they’re creating, selling, or offering, and see if it’s a product where you could contribute your skills. Then you can make the decision to apply. While doing this you may discover that you are applying to a position in a sector you’re not interested in, or where your skillset might not be used to its full potential, and that you might be missing out on other opportunities that are significantly more aligned with you.
Also take the time to fully review the job description. JDs are pretty descriptive, and you might stumble upon certain details that don’t align with you, such as the salary, hours, location, or certain expectations that you feel don’t fit within the role or that you are not ready for.
Prepare for your interviews
You landed an interview – congratulations! Make sure that you’ve researched the company before heading in. If you took a look at the company and the role before applying, take another look. You might find more interesting things, and it will demonstrate that you are actually preparing yourself for the interview. Also, interviewing is a two-way street. Make sure that you have some questions at the end. Double-check your interviewer’s role in the company, and ensure that you have questions tailored to their particular role. Think about what you want to get from the interview (other than the job!).
Job sourcing
There are many great job sources today – LinkedIn being the biggest of them all. Throughout my searches I’ve also found that weworkremotely.com and hnhiring.com are great sources. I strongly advise that you expand your search and find sources that are relevant to your particular role or industry. This has opened up a lot of opportunities for me!
Take some time for yourself
I know that having a job is important. However, it’s also important to take time for yourself; your mental health matters. You can use this time to develop some skills, play some games, take care of your garden, or even reorganize your home. Find a hobby and distract yourself every now and then. Take breaks, and ensure you’re not over-stressing yourself. Read a bit about burnout and take care of yourself, as burnout can also happen from job hunting. And if you need a breather, make sure you take one, but don’t overdo it! Time is valuable, so it’s all about finding the right balance.
Hopefully this is helpful for some folks that are going through my same situation. What other things have worked for you? Do you have any other tips you could share? I’d be happy to read about them! Share them with me on LinkedIn. I’m also happy to chat – you can always find me at jose@ubuntu.com.
This isn’t a tech-related post, so if you’re only here for the tech, feel free to skip over.
Any of y’all hate spiders? If you had asked me that last week, I would have said “no”. Turns out you just need to get in a fight with the wrong spider to change that. I’m in the central United States, so thankfully I don’t have to deal with the horror spiders places like Australia have. But even in my not-intrinsically-hostile-to-human-life area of the world, we have some horror spiders of our own turns out. The two most common ones (the Brown Recluse and Black Widow) are basically memes at this point because they get mentioned so often; I’ve been bitten by both so far. The Brown Recluse bite wasn’t really that dramatic before, during, or after treatment, so there’s not really a story to tell there. The Black Widow bite on the other hand… oh boy. Holy moly.
I woke up last Saturday since the alternative was to sleep for 24 hours straight and that sounded awful. There’s lots of good things to do with a Sabbath, why waste the day on sleep? Usually I spend (or at least am supposed to spend) this day with my family, generally doing Bible study and board games. Over the last few weeks though, I had been using the time to clean up various areas of the house that needed it, and this time I decided to clean up a room that had been flooded some time back. I entered the Room of Despair, with the Sword of Paper Towels in one hand and the Shield of Trash Bags in the other. In front of me stood the vast armies of UghYuck-Hai. (LotR fans will get the joke.1) Convinced that I was effectively invulnerable to anything the hoards could do to me, I entered the fray, and thus was the battle joined in the land of MyHome.
Fast forward two hours of sorting, scrubbing, and hauling. I had made a pretty decent dent in the mess. I was also pretty tired at that point, and our family’s dog needed me to take him outside, so I decided it was time to take a break. I put the leash on the dog, and headed into the great outdoors for a much-needed breath of fresh air.
It was at about that time I realized there was something that felt weird on my left hip. In my neck of the woods, we have to deal with pretty extreme concentrations of mosquitoes, so I figured I probably just had some of my blood repurposed by a flying mini-vampire. Upon closer inspection though, I didn’t see localized swelling indicating a mosquito bite (or any other bite for that matter). The troubled area was just far enough toward my back that I couldn’t see if it had a bite hole or not, and I didn’t notice any kind of discoloration to give me a heads-up either. All I knew is that there was a decent-sized patch of my left hip that HURT if I poked it lightly. I’d previously had random areas of my body hurt when poked (probably from minor bruises), so I just lumped this event in with the rest of the mystery injuries I’ve been through and went on with my day.
Upon coming back from helping the dog out, I still felt pretty winded. I chalked that up to doing strenuous work in an area with bad air for too long, and decided to spend some time in bed to recover. One hour in bed turned into two. Two turned into three. Regardless of how long I laid there, I still just felt exhausted. “Did I really work that hard?”, I wondered. It didn’t seem like I had done enough work to warrant this level of tiredness. Thankfully I did get to chat with my mom about Bible stuff for a good portion of that time, so I thought the day had been pretty successful nonetheless.
The sun went down. I was still unreasonably tired. Usually this was when me and my mom would play a board game together, but I just wasn’t up for it. I ended up needing to use the restroom, so I went to do that, and that’s when I noticed my now-even-sorer hip wasn’t the only thing that was wrong.
While in the restroom, I felt like my digestive system was starting to get sick. This too was pretty easily explainable; I had just worked in filth and probably got exposed to too much yuck for my system to handle. My temperature was a bit higher than normal. Whatever, not like I hadn’t had fevers before. My head felt sore and stuffed up, which again just felt like I was getting sick in general. My vision also wasn’t great, but for all I knew that could have just been because I was focusing more on feeling bad and less on the wall of the bathroom I was looking at. At this point, I didn’t think that the sore hip and the sudden onset fever might be related.
After coming out of the bathroom, I huddled in bed to try to help the minor fever burn out whatever crud I had gotten into. My mom came to help take care of me while I was sick. To my surprise, the fever didn’t stay minor for long - I suddenly started shivering like crazy even though I wasn’t even remotely cold. My temperature skyrocketed, getting to the point where I was worried it could be dangerously high. I started aching all over and my muscles felt like they got a lot weaker. My heart started pounding furiously, and I felt short of breath. We always keep colloidal silver in the house since it helps with immunity, so my mom gave me some sprays of it and had me hold it under my tongue. I noticed I was salivating a bunch for absolutely no reason while trying to hold the silver spray there as long as I could. Things weren’t really improving, and I noticed my hip was starting to hurt more. I mentioned the sore hip issue to my mom, and we chose to put some aloe vera lotion and colloidal silver on it, just in case I had been bitten by a spider of some sort.
That turned out to be a very good, very very VERY painful idea. After rubbing in the lotion, the bitten area started experiencing severe, relentless stabbing pains, gradually growing in intensity as time progressed. For the first few minutes, I was thinking “wow, this really hurts, what in the world bit me?”, but that pretty quickly gave way to “AAAAA! AAAAA! AAAAAAAAAAAAAA!” I kept most of the screaming in my mind, but after a while it got so bad I just rocked back and forth and groaned for what felt like forever. I’d never had pain like this just keep going and going, so I thought if I just toughed it out for long enough it would eventually go away. This thing didn’t seem to work like that though. After who-knows-how-long, I finally realized this wasn’t going to go away on its own, and so, for reasons only my pain-deranged mind could understand, I tried rolling over on my left side to see if squishing the area would get it to shut up. Beyond all logic, that actually seemed to work, so I just stayed there for quite some time.
At this point, my mom realized the sore hip and the rest of my sickness might be related (I never managed to put the two together). The symptoms I had originally looked like scarlet fever plus random weirdness, but they turned out to match extremely well with the symptoms of a black widow bite (I didn’t have the sweating yet but that ended up happening too). The bite area also started looking discolored, so something was definitely not right. At about this point my kidneys started hurting pretty badly, not as badly as the bite but not too far from it.
I’ll try to go over the rest of the mess relatively quickly. In summary:
I passed out and fell over while trying to walk back from the restroom at one point. From what I remember, I had started blacking out while in the restroom, realized I needed to get back to bed ASAP, managed to clumsily walk out of the bathroom and most of the way into the bed, then felt myself fall, bump into a lamp, and land on the bed back-first (which was weird, my back wasn’t facing the bed yet). My mom on the other hand, who was not virtually unconscious, reports that I came around the corner, proceeded to fall face first into the lamp with arms outstretched like a zombie, had a minor seizure, and she had to pull me off the lamp and flip me around. All I can think is my brain must have still been active but lost all sensory input and motor control.
I couldn’t get out of bed for over 48 hours straight thereafter. I’d start blacking out if I tried to stand up for very long.
A dime-sized area around the bite turned purple, then black. So, great, I guess I can now say a part of me is dead :P At this point we were also able to clearly see dual fang marks, confirming that this was indeed a spider bite.
I ended up drinking way more water than usual. I usually only drink three or four cups a day, but I drank more like nine or ten cups the day after the bite.
I had some muscle paralysis that made it difficult to urinate. Thankfully that went away after a day.
My vision got very, very blurry, and my eyes had tons of pus coming out of them for no apparent reason. This was more of an annoyance than anything; I was keeping my eyes shut most of the time anyway, but the crud kept drying and gluing my eyes shut! It was easy enough to just pick off when that happened, but it was one of those things that makes you go “come on, really?”
On the third day of recovery, my whole body broke out in a rash that looked like a bunch of purple freckles. They didn’t hurt, didn’t bump up, and hardly even itched, but they looked really weird. Patches of the rash proceeded to go away and come back every so often, which they’re still doing now.
I ended up missing three days of work while laid up.
We kept applying peppermint oil infused aloe vera lotion and colloidal silver to the bite, which helped reduce pain (well, except for the first time anyway :P) and seems to have helped keep the toxins from spreading too much.
A couple of questions come to mind at this point. For one, how do I know that it was a black widow that bit me? Unfortunately, I never saw or felt the spider, so I can’t know with absolute certainty that I was bitten by a black widow (some people report false widows can cause similar symptoms if they inject you with enough venom). But false widows don’t live anywhere even remotely close to where I live, while black widows are known to live here and we’ve seen them here before. The symptoms certainly aren’t anything remotely close to a brown recluse bite, and while I am not a medical professional, they seem to match the symptoms of black widow bites very, very well. So even if by some chance this wasn’t a black widow, whatever bit me had just as bad of an effect on me as a black widow would have.
For two, why didn’t I go to a hospital? Number one, everything I looked up said the most they could do is give you antivenom (which can cause anaphylaxis, no thank you), or painkillers like fentanyl (which I don’t want anywhere near me, I’d rather feel like I’m dying from a spider bite than take a narcotic painkiller, thanks anyway). Number two, last time a family member had to go to the hospital, the ambulance just about killed him trying to get him there in the first place. I lost most of my respect for my city’s medical facilities that day; if I’m not literally dying, I don’t need a hospital, and if I am dying, my hospitals will probably just kill me off quicker.
I’m currently on day 4 of recovery (including the day I was bitten). I’m still lightheaded, but I can stand without passing out finally. The kidney pain went away, as did the stabbing pain in the bite area (though it still aches a bit, and hurts if you poke it). The fever is mostly gone, my eyes are working normally again and aren’t constantly trying to superglue themselves closed, and my breathing is mostly fine again. I’m definitely still feeling the effects of the bite, but they aren’t crippling anymore. I’ll probably be able to work from home in the morning (I’d try to do household chores too but my mom would probably have a heart attack since I just about killed myself trying to get out of the bathroom).
Speaking of working from home, it’s half past midnight here, I should be going to bed. Thanks for reading!
1. The army of Saruman sent against the fortress of Helm’s Deep was made up of half-man, half-orc creatures known as Uruk-Hai. “Ugh, yuck!” and “Uruk” sounded humorously similar, so I just went with it.
The Internet has changed a lot in the last 40+ years. Fads have come and gone. Network protocols have been designed, deployed, adopted, and abandoned. Industries have come and gone. The types of people on the internet have changed a lot. The number of people on the internet has changed a lot, creating an information medium unlike anything ever seen before in human history. There’s a lot of good things about the Internet as of 2025, but there’s also an inescapable hole in what it used to be, for me.
I miss being able to throw a site up to send around to friends to play with without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my website, costing me thousands in network transfer for the privilege. I miss being able to put a lightly authenticated game server up and not worry too much at night – wondering if that process is now mining bitcoin. I miss being able to run a server in my home closet. Decades of cat and mouse games have rendered running a mail server nearly impossible. Those who are “brave” enough to try are met with weekslong stretches of delivery failures and countless hours yelling ineffectually into a pipe that leads from the cheerful lobby of some disinterested corporation directly into a void somewhere 4 layers below ground level.
I miss the spirit of curiosity, exploration, and trying new things. I miss building things for fun without having to worry about being too successful, after which “security” offices start demanding my supplier paperwork in triplicate as heartfelt thanks from their engineering teams. I miss communities that are run because they matter to the people running them, not for ad revenue. I miss community-operated spaces and having more than four websites that are all full of nothing except screenshots of each other.
Every other page I find myself on now has an AI generated click-bait title, shared for rage-clicks all brought-to-you-by-our-sponsors–completely covered wall-to-wall with popup modals, telling me how much they respect my privacy, with the real content hidden at the bottom bracketed by deceptive ads served by companies that definitely know which new coffee shop I went to last month.
This is wrong, and those who have seen what was know it.
I can’t keep doing it. I’m not doing it any more. I reject the notion that this is as it needs to be. It is wrong. The hole left in what the Internet used to be must be filled. I will fill it.
What comes before part b?
Throughout the 2000s, some of my favorite memories were from LAN parties at my friends’ places. Dragging your setup somewhere, long nights playing games, goofing off, even building software all night to get something working—being able to do something fiercely technical in the context of a uniquely social activity. It wasn’t really much about the games or the projects—it was an excuse to spend time together, just hanging out. A huge reason I learned so much in college was that campus was a non-stop LAN party – we could freely stand up servers, talk between dorms on the LAN, and hit my dorm room computer from the lab. Things could go from individual to social in a matter of seconds. The Internet used to work this way—my dorm had public IPs handed out by DHCP, and my workstation could serve traffic from anywhere on the internet. I haven’t been back to campus in a few years, but I’d be surprised if this were still the case.
In December of 2021, three of us got together and connected our houses in what we now call The Promised LAN. The idea is simple—fill the hole that we feel has been left in our lives. Build our own always-on 24/7 nonstop LAN party. Build a space that is intrinsically social, even though we’re doing technical things. We can freely host insecure game servers or one-off side projects without worrying about what someone will do with it.
Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters. Our mantra has become “old growth”, building each layer carefully. As of May 2025, the LAN is now 19 friends running around 25 network segments. Those 25 networks are connected to 3 backbone nodes, exchanging routes and IP traffic for the LAN. We refer to the set of backbone operators as “The Bureau of LAN Management”. Combined decades of operating critical infrastructure have driven The Bureau to make a set of well-understood, boring, predictable, interoperable and easily debuggable decisions to make this all happen. Nothing here is exotic or even technically interesting.
Applications of trusting trust
The hardest part, however, is rejecting the idea that anything outside our own LAN is untrustworthy—nearly irreversible damage inflicted on us by the Internet. We have solved this by not solving it. We strictly control membership—the absolute hard minimum for joining the LAN requires 10 years of friendship with at least one member of the Bureau, with another 10 years of friendship planned. Members of the LAN can veto new members even if all other criteria are met. Even with those strict rules, there’s no shortage of friends that meet the qualifications—but we are not equipped to take that many folks on. It’s hard to join—both socially and technically. Doing something malicious on the LAN requires a lot of highly technical effort upfront, and it would endanger a decade of friendship. We have relied on those human, social, interpersonal bonds to bring us all together. It’s worked for the last 4 years, and it should continue working until we think of something better.
We assume roommates, partners, kids, and visitors all have access to The Promised LAN. If a friend lets them onto their network, that trust extends transitively—I trust them to be on mine. This LAN is not for “security”; rather, the network border is a social one. Benign “hacking”—in the original sense of misusing systems to do fun and interesting things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition, an interpersonal—not technical—failure. We all trust every other network operator to run their segment in a way that aligns with our collective values and norms.
Over the last 4 years, we’ve grown our own culture and fads—around half of the people on the LAN have thermal receipt printers with open access, for printing out quips or jokes on each other’s counters. It’s incredible how much network transport and a trusting culture gets you—there’s a 3-node IRC network, exotic hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and even a SIP phone network of “redphones”.
DIY
We do not wish to, nor will we, rebuild the internet. We do not wish to, nor will we, scale this. We will never be friends with enough people, as hard as we may try. Participation hinges on us all having fun. As a result, membership will never be open, and we will never have enough connected LANs to deal with the technical and social problems that start to happen with scale. This is a feature, not a bug.
This is a call for you to do the same. Build your own LAN. Connect it with friends’ homes. Remember what is missing from your life, and fill it in. Use software you know how to operate and get it running. Build slowly. Build your community. Do it with joy. Remember how we got here. Rebuild a community space that doesn’t need to be mediated by faceless corporations and ad revenue. Build something sustainable that brings you joy. Rebuild something you use daily.
Bring back what we’re missing.
A gentleman by the name of Arif Ali reached out to me on LinkedIn. I won’t share the actual text of the message, but I’ll paraphrase:
“I hope everything is going well with you. I’m applying to be an Ubuntu ‘Per Package Uploader’ for the SOS package, and I was wondering if you could endorse my application.”
Arif, thank you! I have always appreciated our chats, and I truly believe you’re doing great work. I don’t want to interfere with anything by jumping on the wiki, but just know you have my full backing.
“So, who actually lets Arif upload new versions of SOS to Ubuntu, and what is it?”
Great question!
Firstly, I realized that I needed some more info on what SOS is, so I can explain it to you all. On a quick search, this was the first result.
Okay, so genuine question…
Why does the first DuckDuckGo result for “sosreport” point to an article for a release of Red Hat Enterprise Linux that is two versions old? In other words, hey DuckDuckGo, your grass is starting to get long. Or maybe Red Hat? Can’t tell, I give you both the benefit of the doubt, in good faith.
So, I clarified the search and found this. Canonical, you’ve done a great job. Red Hat, you could work on your SEO so I can actually find the RHEL 10 docs quicker, but hey… B+ for effort. ;)
Anyway, let me tell you about Arif. Just from my own experiences.
He’s incredible. He shows love to others, and whenever I would sponsor one of his packages during my time in Ubuntu, he was always incredibly receptive to feedback. I really appreciate the way he reached out to me, as well. That was really kind, and to be honest, I needed it.
As for character, he has my +1. In terms of the members of the DMB (aside from one person who I will not mention by name, who has caused me immense trouble elsewhere), here’s what I’d tell you if you asked me privately…
“It’s just PPU. Arif works on SOS as part of his job. Please, do still grill him. The test, and ensuring people know that they actually need to pass a test to get permissions, that’s pretty important.”
That being said, I think he deserves it.
Good luck, Arif. I wish you well in your meeting. I genuinely hope this helps. :)
And to my friends in Ubuntu, I miss you. Please reach out. I’d be happy to write you a public letter, too. Only if you want. :)
Theodore Roosevelt is someone I have admired for a long time. I especially appreciate what has been coined the Man in the Arena speech.
A specific excerpt comes to mind after reading world news over the last twelve hours:
“It is well if a large proportion of the leaders in any republic, in any democracy, are, as a matter of course, drawn from the classes represented in this audience to-day; but only provided that those classes possess the gifts of sympathy with plain people and of devotion to great ideals. You and those like you have received special advantages; you have all of you had the opportunity for mental training; many of you have had leisure; most of you have had a chance for enjoyment of life far greater than comes to the majority of your fellows. To you and your kind much has been given, and from you much should be expected. Yet there are certain failings against which it is especially incumbent that both men of trained and cultivated intellect, and men of inherited wealth and position should especially guard themselves, because to these failings they are especially liable; and if yielded to, their- your- chances of useful service are at an end. Let the man of learning, the man of lettered leisure, beware of that queer and cheap temptation to pose to himself and to others as a cynic, as the man who has outgrown emotions and beliefs, the man to whom good and evil are as one. The poorest way to face life is to face it with a sneer. There are many men who feel a kind of twisted pride in cynicism; there are many who confine themselves to criticism of the way others do what they themselves dare not even attempt. There is no more unhealthy being, no man less worthy of respect, than he who either really holds, or feigns to hold, an attitude of sneering disbelief toward all that is great and lofty, whether in achievement or in that noble effort which, even if it fails, comes second to achievement. A cynical habit of thought and speech, a readiness to criticise work which the critic himself never tries to perform, an intellectual aloofness which will not accept contact with life’s realities — all these are marks, not as the possessor would fain to think, of superiority but of weakness. They mark the men unfit to bear their part manfully in the stern strife of living, who seek, in the affectation of contempt for the achievements of others, to hide from others and from themselves their own weakness. The rôle is easy; there is none easier, save only the rôle of the man who sneers alike at both criticism and performance.”
The riots in LA are seriously concerning to me. If something doesn’t happen soon, this is going to get out of control.
If you are participating in these events, or know someone who is, tell them to calm down. Physical violence is never the answer, no matter your political party.
De-escalate immediately.
Be well. Show love to one another!
Bazaar is a distributed revision control system, originally developed by Canonical. It provides functionality similar to the now-dominant Git.
Bazaar code hosting is a Launchpad offering that provides both a Bazaar backend for hosting code and a web frontend for browsing it. The frontend is provided by the Loggerhead application on Launchpad.
Sunsetting Bazaar
Bazaar passed its peak a decade ago. Breezy is a fork of Bazaar that has kept a form of Bazaar alive, but the last release of Bazaar itself was in 2016. Since then its impact has declined, and there are modern replacements like Git.
Just keeping Bazaar running requires a non-trivial amount of development, operations time, and infrastructure resources – all of which could be better used elsewhere.
Launchpad will now begin the process of discontinuing support for Bazaar.
Timelines
We are aware that migrating repositories and updating workflows will take some time, which is why we have planned the sunsetting in two phases.
Phase 1
Loggerhead, the web frontend used to browse code in a web browser, will be shut down imminently. An analysis of the access logs showed that hardly any requests come from legitimate users; almost all of the traffic comes from scrapers and other abusers. Sunsetting Loggerhead will not affect the ability to pull, push, and merge changes.
Phase 2
From September 1st, 2025, we no longer intend to operate Bazaar, the code hosting backend. Users need to migrate all of their repositories from Bazaar to Git before this deadline.
Migration paths
The following blog post describes all the steps necessary to convert a Bazaar repository hosted on Launchpad to Git.
Migrate a Repository From Bazaar to Git

Call for action
Our users are extremely important to us. Ubuntu, for instance, has a long history of Bazaar usage, and we will need to work with the Ubuntu Engineering team to find ways to remove the reliance on Bazaar integration for the development of Ubuntu. If you are also using Bazaar and you have a special use case, or you do not see a clear way forward, please reach out to us to discuss your use case and how we can help you.
You can reach us in #launchpad:ubuntu.com on Matrix, or submit a question or send us an e-mail via feedback@launchpad.net.
It is also recommended to join the ongoing discussion at https://discourse.ubuntu.com/t/phasing-out-bazaar-code-hosting/62189.
It’s amazing how quickly public opinion changes. Or, how quickly people cave. Remember, just 10 years ago, how people felt about Google Glass? Or how they felt when they found out Target was analyzing their purchases and perhaps knew them better than themselves or their family?
People used to worry about being tracked by companies and government. Today, they feel insecure unless they are certain they are being tracked. For their own well-being, of course. If an online store does *not* send an email 10 hours after you’ve “left items in your cart, don’t miss out!”, and another the next day, you feel disappointed. I believe they’re now seen as sub-par.
As you may recall from previous posts and elsewhere, I have been busy writing a new solver for APT. Today I want to share some of the latest changes in how it approaches solving.
The idea for the solver was that manually installed packages are always protected from removals – in terms of SAT solving, they are facts. Automatically installed packages become optional unit clauses. Optional clauses are solved after the manual ones; they don’t take part in normal unit propagation.
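As a rough illustration of that mapping, here is a minimal sketch (hypothetical names, not APT’s actual data structures) of how the installed state could be turned into facts and optional unit clauses:

from collections import namedtuple

# Hypothetical package record: which literal it corresponds to, and how it was installed.
Package = namedtuple("Package", "literal manually_installed auto_installed")

def encode(packages):
    facts, optional = [], []
    for pkg in packages:
        if pkg.manually_installed:
            facts.append(pkg.literal)       # hard fact: must stay satisfied
        elif pkg.auto_installed:
            optional.append(pkg.literal)    # soft unit clause: keep it if possible
    return facts, optional

# Example: B was installed manually, libfoo was pulled in automatically.
print(encode([Package("B", True, False), Package("libfoo", False, True)]))
# (['B'], ['libfoo'])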
This worked fine. Say you had:

A # install request for A
B # manually installed, keep it
A depends on: conflicts-B | C

Installing A on a system with B installed resulted in C being installed, as the solver was not allowed to install the conflicts-B package while B was installed.
However, I also introduced a mode to allow removing manually installed packages, and that’s where it broke down: now, instead of B being a fact, our clauses looked like:
A # install request for A
A depends on: conflicts-B | C
Optional: B # try to keep B installed

As a result, we installed conflicts-B and removed B. Roughly, the steps the solver takes are: enqueue A, satisfy A’s dependency by picking the first alternative, conflicts-B, which forces the removal of B; the optional clause for B can then no longer be satisfied and is simply dropped.
This isn’t correct: just because we allow removing manually installed packages doesn’t mean we should remove them when we don’t need to.
Fixing this turns out to be surprisingly easy. In addition to adding our optional (soft) clauses, let’s first assume all of them!
But to explain how this works, we first need some terminology: a fact is a literal that must hold; assume() tentatively sets a literal as a decision that can later be undone; enqueue() records a literal as true so it can be propagated; propagate() performs unit propagation and reports conflicts; and a soft clause is one we would like to satisfy, but may leave unsatisfied.
To illustrate this in pseudo Python code:
We introduce all our facts, and if they conflict, we are unsat:
for fact in facts:
    enqueue(fact)
    if not propagate():
        return False

For each optional literal, we register a soft clause and assume it. If the assumption fails, we ignore it. If it succeeds but propagation fails, we undo the assumption.
for optionalLiteral in optionalLiterals:
    registerClause(SoftClause([optionalLiteral]))
    if assume(optionalLiteral) and not propagate():
        undo()

Finally we enter the main solver loop:
while True:
    if not propagate():
        if not backtrack():
            return False
    elif <all clauses are satisfied>:
        return True
    elif it := find("best unassigned literal satisfying a hard clause"):
        assume(it)
    elif it := find("best literal satisfying a soft clause"):
        assume(it)

The key point to note is that the main loop will undo the assumptions in order; so if you assume A, B, and C, and B turns out not to be possible, we will also have undone C. But since C is also registered as a soft clause, the soft-clause branch of the main loop will pick it up again later.
This is not (correct) MaxSAT, because we do not actually guarantee that we satisfy as many soft clauses as possible. Say you have the following clauses:
Optional: A
Optional: B
Optional: C
B Conflicts with A
C Conflicts with A

There are two possible results here: either A is installed, satisfying one soft clause, or B and C are installed, satisfying two. Because we assume the optional literals in order, we assume A first and end up with the first result, whereas a true MaxSAT solver would pick the second.
The question to ponder though is whether we actually need a global maximum or whether a local maximum is satisfactory in practice for a dependency solver. If you look at it, a naive MaxSAT solver needs to run the SAT solver 2**n times for n soft clauses, whereas our heuristic only needs n runs.
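To make the difference concrete, here is a small, self-contained sketch (a hypothetical toy, not the APT solver) that brute-forces the example above and compares the global MaxSAT optimum with what the ordered-assumption heuristic settles for:

from itertools import combinations

# Toy model of the example above: three soft unit clauses and two hard conflicts.
soft = ["A", "B", "C"]                    # optional literals, assumed in this order
conflicts = [("B", "A"), ("C", "A")]      # B conflicts with A, C conflicts with A

def consistent(selection):
    return not any(x in selection and y in selection for x, y in conflicts)

# Global optimum: the largest consistent subset of soft literals.
best = max((set(s) for r in range(len(soft) + 1)
            for s in combinations(soft, r) if consistent(set(s))), key=len)

# Ordered-assumption heuristic: keep each soft literal only if it stays consistent.
chosen = set()
for lit in soft:
    if consistent(chosen | {lit}):
        chosen.add(lit)

print("global maximum :", best)    # {'B', 'C'}: two soft clauses satisfied
print("heuristic picks:", chosen)  # {'A'}: a local maximum only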
For dependency solving, it seems we do not have a strong need for a global maximum: there are various other preferences between our literals, such as priorities; and empirically, from evaluating hundreds of regressions that occurred without the initial assumptions, I can say that the assumptions do fix those cases and the result is correct.
Further improvements exist, though, and we can look into them if they are needed, such as:
Use a better heuristic:
If we assume one clause, solve, and thereby cause two or more other clauses to become unsatisfiable, then that clause is a local minimum and can be skipped. This is a more common heuristic in MaxSAT solvers; a rough sketch follows after the next paragraph. It gives us a better local maximum, but still not a global one.
This is more or less what the Smart package manager did, except that in Smart, all packages were optional, and the entire solution was scored. It calculated a basic solution without optimization and then toggled each variable and saw if the score improved.
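Reusing the toy model from the previous sketch, this improved heuristic can be approximated as follows (again purely hypothetical, with “solve” reduced to a simple consistency check):

# Keep a soft literal only if assuming it does not rule out two or more of the
# other, still-satisfiable soft clauses.
def improved_choice(soft, conflicts):
    def consistent(selection):
        return not any(x in selection and y in selection for x, y in conflicts)

    chosen = set()
    for lit in soft:
        if not consistent(chosen | {lit}):
            continue                       # this literal is already impossible
        ruled_out = sum(1 for other in soft
                        if other != lit and consistent(chosen | {other})
                        and not consistent(chosen | {lit, other}))
        if ruled_out >= 2:
            continue                       # local minimum: skip this soft clause
        chosen.add(lit)
    return chosen

print(improved_choice(["A", "B", "C"], [("B", "A"), ("C", "A")]))  # {'B', 'C'}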
Implement an actual search for a global maximum:
This involves reading the literature. There are various versions of this, for example:
Find unsatisfiable cores and use those to guide relaxation of clauses.
A bounds-based search, where we translate sum(satisfied clauses) > k into SAT and then search for the best k, for example linearly from below or above, or by binary search.
Actually, we do not even need to encode such sum constraints into CNF, because we can just add a specialized new type of constraint to our code.
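To make that concrete, here is a minimal sketch (hypothetical names, not APT’s implementation) of a cardinality constraint kept as native code rather than encoded into CNF:

# Hypothetical sketch: an "at least k of these literals" constraint implemented
# natively instead of being translated into CNF clauses.
class AtLeastK:
    def __init__(self, literals, k):
        self.literals = literals
        self.k = k

    def check(self, assignment):
        """assignment maps literal -> True, False, or None (unassigned)."""
        satisfied = sum(1 for lit in self.literals if assignment.get(lit) is True)
        unassigned = sum(1 for lit in self.literals if assignment.get(lit) is None)
        if satisfied >= self.k:
            return "satisfied"
        if satisfied + unassigned < self.k:
            return "conflict"  # k can no longer be reached; the solver must backtrack
        return "undecided"

constraint = AtLeastK(["A", "B", "C"], k=2)
print(constraint.check({"A": True, "B": None, "C": False}))  # "undecided"

A bounds-based search could then tighten k and re-solve, backtracking whenever check() reports a conflict.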
Are you using Kubuntu 25.04 Plucky Puffin, our current stable release? Or are you already running our development builds of the upcoming 25.10 (Questing Quokka)?
We currently have Plasma 6.3.90 (Plasma 6.4 Beta1) available in our Beta PPA for Kubuntu 25.04 and for the 25.10 development series.
However this is a Beta release, and we should re-iterate the disclaimer:
DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.
6.4 Beta1 packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:
sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.
In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.
Please review the planned feature list, release announcement and changelog.
[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 6.3?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.
Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.
Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.
We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.
Thanks!
Please stop by the Kubuntu-devel Matrix channel [1] if you need clarification of any of the steps to follow.
[1] – https://matrix.to/#/#kubuntu-devel:ubuntu.com
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
We are pleased to announce that the Plasma 6.3.5 bugfix update is now available for Kubuntu 25.04 Plucky Puffin in our backports PPA.
As usual with our PPAs, there is the caveat that the PPA may receive additional updates and new releases of KDE Plasma, Gear (Apps), and Frameworks, plus other apps and required libraries. Users should always review proposed updates to decide whether they wish to receive them.
To upgrade:
Add the following repository to your software sources list:
ppa:kubuntu-ppa/backports
or if it is already added, the updates should become available via your preferred update method.
The PPA can be added manually in the Konsole terminal with the command:
sudo add-apt-repository ppa:kubuntu-ppa/backports
and packages then updated with
sudo apt full-upgrade
We hope you enjoy using Plasma 6.3.5!
Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], and/or file a bug against our PPA packages [3].
1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa