Feed aggregator

Cybersecurity and Digital Authority: The New Pillars of Online Trust

LinuxSecurity.com - Fri, 12/09/2025 - 11:00am
Cybersecurity is no longer just a technical concern; it has become a business survival priority. A single data breach doesn't just expose data, it can erase years of hard-earned trust. Studies show that 75% of consumers won't engage with companies that have experienced a security incident. That means reputation is now on the line just as much as revenue.

Gravitational Waves Finally Prove Stephen Hawking's Black Hole Theorem

Slashdot - Fri, 12/09/2025 - 9:00am
Physicists have confirmed Stephen Hawking's 1971 black hole area theorem with near-absolute certainty, thanks to gravitational waves from an exceptionally loud black hole collision detected by upgraded LIGO instruments. New Scientist reports: Hawking proposed his black hole area theorem in 1971, which states that when two black holes merge, the resulting black hole's event horizon -- the boundary beyond which not even light can escape the clutches of a black hole -- cannot have an area smaller than the sum of the two original black holes. The theorem echoes the second law of thermodynamics, which states that the entropy, or disorder within an object, never decreases. Black hole mergers warp the fabric of the universe, producing tiny fluctuations in space-time known as gravitational waves, which cross the universe at the speed of light. Five gravitational wave observatories on Earth hunt for waves 10,000 times smaller than the nucleus of an atom. They include the two US-based detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) plus the Virgo detector in Italy, KAGRA in Japan and GEO600 in Germany, operated by an international collaboration known as LIGO-Virgo-KAGRA (LVK). The recent collision, named GW250114, was almost identical to the one that created the first gravitational waves ever observed in 2015. Both involved black holes with masses between 30 and 40 times the mass of our sun and took place about 1.3 billion light years away. This time, the upgraded LIGO detectors had three times the sensitivity they had in 2015, so they were able to capture waves emanating from the collision in unprecedented detail. This allowed researchers to verify Hawking's theorem by calculating that the area of the event horizon was indeed larger after the merger. The findings have been published in the journal Physical Review Letters.
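
For reference, the theorem is compact enough to state in a line. A sketch in LaTeX, using the Schwarzschild horizon area for illustration (the real analysis must account for spin):

    A = 16\pi \left( \frac{G M}{c^{2}} \right)^{2}   % horizon area of a non-spinning black hole of mass M
    A_{\mathrm{final}} \geq A_{1} + A_{2}            % Hawking's area theorem for a merger

The analogy with the second law comes via the Bekenstein-Hawking entropy S = k_B c^3 A / (4 G \hbar), which is proportional to the horizon area.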


next-20250912: linux-next

Kernel Linux - Fri, 12/09/2025 - 7:15am
Version: next-20250912 (linux-next) Released: 2025-09-12

AI Use At Large Companies Is In Decline, Census Bureau Says

Slashdot - Fri, 12/09/2025 - 5:30am
An anonymous reader quotes a report from Gizmodo: [D]espite the AI industry's attempts to make itself seem omnipresent, a new report this week shows that adoption at large U.S. companies has declined. The report comes from the Census Bureau and shows that the rate of AI adoption by large companies -- that is, firms with over 250 employees -- has been declining slightly in recent weeks. The report is based on a biweekly survey, dubbed Business Trends and Outlook (or BTOS), of some 1.2 million U.S. firms. The survey, which asks businesses about their use of AI tools, such as machine learning and agents, found that -- between June and now -- the rate of adoption had declined from 14 to 12 percent. Futurism notes that this is the largest drop-off in the adoption rate since the survey began in 2023, although the survey also showed a slight increase in AI use among smaller companies. The moderate drop-off comes after the rate of adoption had climbed precipitously over the last few years. When the survey first began, in September of 2023, the AI adoption rate hovered around 3.7 percent (PDF), while the adoption rate in December 2024 was around 5.7 percent. In the second quarter of this year, the rate also rose significantly, climbing from 7.4 percent to 9.2 percent. The new drop-off in reported usage comes not long after another study, this one published by MIT, found that a vast majority of corporate AI pilot programs had failed to produce any material benefit to the companies involved.


Windows Developers Can Now Publish Apps To Microsoft's Store Without Fees

Slashdot - Fri, 12/09/2025 - 3:30am
Microsoft has eliminated the one-time fee for publishing apps on its Windows Store. According to The Verge, "Individual developers in nearly 200 countries can now sign up to publish apps on the Microsoft Store with just a personal Microsoft account, and no more one-time fees." From the report: Microsoft started cutting its $19 one-time fee to publish apps to its Windows store in June in certain markets, and it's now essentially removing this fee for all developers worldwide. Apple still charges an annual $99 fee to developers, and Google charges a one-time registration fee of $25. "Developers will no longer need a credit card to get started, removing a key point of friction that has affected many creators around the world," explains Chetna Das, senior product manager at Microsoft. "By eliminating these one-time fees, Microsoft is creating a more inclusive and accessible platform that empowers more developers to innovate, share and thrive on the Windows ecosystem." [...] The Microsoft Store is now used by more than 250 million monthly active users, according to Microsoft. Microsoft is now encouraging more developers to make use of the store, where they can publish a variety of Win32, UWP, PWA, .NET, MAUI, or Electron apps. Developers can even use their own in-app commerce system to keep 100 percent of their revenues on non-gaming apps.


6.16.7: stable

Kernel Linux - Thu, 11/09/2025 - 5:23pm
Version: 6.16.7 (stable) Released: 2025-09-11 Source: linux-6.16.7.tar.xz PGP Signature: linux-6.16.7.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.16.7
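
As an aside, a release like this can be verified against the listed PGP signature; a minimal sketch following kernel.org's documented procedure (key discovery via WKD assumed, and note the signature is made over the uncompressed tarball, hence the xz -d first):

    $ gpg --locate-keys gregkh@kernel.org
    $ xz -d linux-6.16.7.tar.xz
    $ gpg --verify linux-6.16.7.tar.sign linux-6.16.7.tar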

6.12.47: longterm

Kernel Linux - Thu, 11/09/2025 - 5:22pm
Version: 6.12.47 (longterm) Released: 2025-09-11 Source: linux-6.12.47.tar.xz PGP Signature: linux-6.12.47.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.12.47

6.6.106: longterm

Kernel Linux - Thu, 11/09/2025 - 5:20pm
Version: 6.6.106 (longterm) Released: 2025-09-11 Source: linux-6.6.106.tar.xz PGP Signature: linux-6.6.106.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.6.106

6.1.152: longterm

Kernel Linux - Thu, 11/09/2025 - 5:19pm
Version: 6.1.152 (longterm) Released: 2025-09-11 Source: linux-6.1.152.tar.xz PGP Signature: linux-6.1.152.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-6.1.152

5.15.193: longterm

Kernel Linux - Thu, 11/09/2025 - 5:18pm
Version: 5.15.193 (longterm) Released: 2025-09-11 Source: linux-5.15.193.tar.xz PGP Signature: linux-5.15.193.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-5.15.193

5.10.244: longterm

Kernel Linux - Thu, 11/09/2025 - 5:16pm
Version: 5.10.244 (longterm) Released: 2025-09-11 Source: linux-5.10.244.tar.xz PGP Signature: linux-5.10.244.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-5.10.244

Fedora 44 vs. Linux Kernel Exploits: Inside the Move to Strengthen Linux Security Settings

LinuxSecurity.com - Thu, 11/09/2025 - 1:08pm
If you're running Linux systems, you know that Linux kernel security is a constant, evolving challenge. New attack surfaces emerge, and keeping up with hardening techniques can feel like a never-ending sprint.

Thibault Martin: TIL that you can spot base64 encoded JSON, certificates, and private keys

Planet GNOME - Tue, 05/08/2025 - 3:00pm

I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit to a GitHub repository. The file looked like this:

{ "serial": 13, "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67", "meta": { "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0=" }, "encrypted_data": "ONXZsJhz37eJA[...]", "encryption_version": "v0" }

Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

I was incredulous but gave it a go, and it worked!!

$ echo "eyJzYW[...]" | base64 -d {"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}

I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equals signs for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

He replied,

Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

I did a few tests in my terminal, and he was right! You can spot base64 encoded JSON with your naked eye, and you don't need to decode it on the fly!

$ echo "{" | base64 ewo= $ echo "{\"" | base64 eyIK $ echo "{\"s" | base64 eyJzCg== $ echo "{\"a" | base64 eyJhCg== $ echo "{\"word\"" | base64 eyJ3b3JkIgo=

But it gets even better! As tyzbit reported on the fediverse, you can even spot base64 encoded certificates and private keys! They all start with LS, which is easy to remember via the LS in "TLS certificate."

$ echo -en "-----BEGIN CERTIFICATE-----" | base64 LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t

Warning: Errata

As pointed out by gnabgib and athorax on Hacker News, this actually detects the leading dashes of the PEM format, commonly used for certificates; a YAML file that starts with --- will yield the same result:

$ echo "---\n" | base64 LS0tXG4K

This is not a silver bullet!
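
Put together, the trick amounts to a prefix check. A little shell sketch (the blob variable, truncated here, and the messages are made up):

blob="eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0i..."
case "$blob" in
  ey*)   echo "likely base64-encoded JSON ('{\"...')" ;;
  LS0t*) echo "likely base64 of '---' (PEM cert/key, or YAML)" ;;
  *)     echo "no obvious signature; decode it to check" ;;
esac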

Thanks Davide and Denis for showing me this simple but pretty useful trick, and thanks tyzbit for completing it with certs and private keys!

Matthew Garrett: Cordoomceps - replacing an Amiga's brain with Doom

Planet GNOME - Tue, 05/08/2025 - 2:30am
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
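
To make that concrete, here is a hypothetical C sketch of what "hardware pretending to be RAM" looks like from the CPU's side; the register address is the chipset's COLOR00 as documented in the Amiga hardware reference, though on a real pistorm the store goes through the bridge rather than a plain pointer dereference:

    #include <stdint.h>

    int main(void) {
        /* COLOR00, the background colour register, sits at $DFF180 on the
           68000 bus; to the CPU it is just another address to write to. */
        volatile uint16_t *color00 = (volatile uint16_t *)0xDFF180;
        *color00 = 0x0F00;  /* 4:4:4 RGB: full red */
        return 0;
    }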

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
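
A copper list is just data in chip RAM: pairs of 16-bit words, where an even first word means "move the second word into this chipset register" and an odd first word means "wait for this beam position". A hypothetical sketch in C (register offsets per the hardware reference; the bitplane address is made up):

    #include <stdint.h>

    #define BPL1PTH  0x0E0         /* bitplane 1 pointer, high word */
    #define BPL1PTL  0x0E2         /* bitplane 1 pointer, low word  */
    #define BITPLANE 0x00010000u   /* hypothetical chip-RAM address */

    static uint16_t copperlist[] = {
        BPL1PTH, (BITPLANE >> 16) & 0xFFFF,  /* MOVE: reload pointer, high */
        BPL1PTL, BITPLANE & 0xFFFF,          /* MOVE: reload pointer, low  */
        0xFFFF, 0xFFFE,                      /* WAIT for an impossible beam
                                                position: end of the list  */
    };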

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
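
For a flavour of what those nested loops do, a sketch (dimensions and buffer layout assumed; this is not the post's actual code). Each byte of the chunky buffer holds a 6-bit palette index, and bit p of that index lands in bitplane p:

    #include <stdint.h>
    #include <string.h>

    #define WIDTH    320
    #define HEIGHT   200
    #define DEPTH    6              /* EHB: 6 bitplanes = 64 colours */
    #define ROWBYTES (WIDTH / 8)

    static void chunky_to_planar(const uint8_t *chunky, uint8_t *planes[DEPTH])
    {
        for (int p = 0; p < DEPTH; p++)
            memset(planes[p], 0, ROWBYTES * HEIGHT);

        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                uint8_t pix  = chunky[y * WIDTH + x];
                int     byte = y * ROWBYTES + (x >> 3);
                uint8_t bit  = 0x80 >> (x & 7);   /* leftmost pixel = MSB */
                for (int p = 0; p < DEPTH; p++)
                    if (pix & (1 << p))
                        planes[p][byte] |= bit;
            }
        }
    }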

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which is traditionally a long way away from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.
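
For the curious, the overlay bit lives in CIA-A's port A. A hypothetical sketch of the poke as seen from the 68000 bus (addresses per the hardware reference; on the pistorm these writes go through the bridge):

    #include <stdint.h>

    int main(void) {
        volatile uint8_t *ciaa_pra  = (volatile uint8_t *)0xBFE001; /* port A data      */
        volatile uint8_t *ciaa_ddra = (volatile uint8_t *)0xBFE201; /* port A direction */

        *ciaa_ddra |= 0x01;   /* drive the OVL pin as an output */
        *ciaa_pra  &= ~0x01;  /* clear OVL: chip RAM now appears at address 0 */
        return 0;
    }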

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM


Victor Ma: It's alive!

Planet GNOME - Tue, 05/08/2025 - 2:00am

In the last two weeks, I’ve been working on my lookahead-based word suggestion algorithm. And it’s finally functional! There’s still a lot more work to be done, but it’s great to see that the original problem I set out to solve is now solved by my new algorithm.

Without my changes

Here’s what the upstream Crosswords Editor looks like, with a problematic grid:

The editor suggests words like WORD and WORM for the 4-Across slot. But none of the suggestions are valid, because the grid is actually unfillable. This means that there are no possible word suggestions for the grid.

The words that the editor suggests do work for 4-Across. But they do not work for 4-Down. They all cause 4-Down to become a nonsensical word.

The problem here is that the current word suggestion algorithm only looks at the row and column where the cursor is. So it sees 4-Across and 1-Down—but it has no idea about 4-Down. If it could see 4-Down, then it would realize that no word that fits in 4-Across also fits in 4-Down—and it would return an empty word suggestion list.

With my changes

My algorithm fixes the problem by considering every intersecting slot of the current slot. In the example grid, the current slot is 4-Across. So my algorithm looks at 1-Down, 2-Down, 3-Down, and 4-Down. When it reaches 4-Down, it sees that no letter fits in the empty cell: every possible letter causes 4-Across, 4-Down, or both to contain an invalid word. So my algorithm correctly returns an empty list of word suggestions.
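
The core of the fix is an intersection filter: a letter survives only if every slot running through that cell can still be completed. A self-contained toy sketch in C (tiny made-up dictionary; not the GNOME Crosswords code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy dictionary; '?' in a pattern matches any letter. */
    static const char *DICT[] = {"WORD", "WORM", "WORN", "NO", "ON"};
    #define NDICT (sizeof DICT / sizeof *DICT)

    static bool fillable(const char *pattern) {
        for (size_t i = 0; i < NDICT; i++) {
            const char *w = DICT[i];
            if (strlen(w) != strlen(pattern)) continue;
            bool ok = true;
            for (size_t j = 0; pattern[j]; j++)
                if (pattern[j] != '?' && pattern[j] != w[j]) ok = false;
            if (ok) return true;
        }
        return false;
    }

    int main(void) {
        /* The empty cell is both the last letter of "WOR?" (across) and
           the first letter of "?O" (down); a letter is suggested only if
           it keeps BOTH slots fillable. */
        for (char c = 'A'; c <= 'Z'; c++) {
            char across[] = "WOR?", down[] = "?O";
            across[3] = c;
            down[0]   = c;
            if (fillable(across) && fillable(down))
                printf("%c survives the lookahead filter\n", c);
        }
        return 0;
    }

Here only N survives (WORN and NO); if no down word fit at all, the suggestion list would correctly come out empty.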

Julian Hofer: Git Forges Made Simple: gh & glab

Planet GNOME - Mon, 04/08/2025 - 2:00am

When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

  • Python virtual environments, and
  • Git together with GitHub.

Solving the former is nowadays my job, so let me write up my current workflow for the latter.

Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn’t know anything about these forges, which is why CLI tools exist to close that gap. It’s still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there’s gh and for GitLab there’s glab. Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don’t like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH.

Luckily, they also have mostly the same command line interface. First, you have to login with the command that corresponds to your git forge:
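
For GitHub and GitLab respectively, that's:

    gh auth login
    glab auth login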

In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH.

Working Solo

The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone the repository:

git clone <REPO>

From that point on, all you have to do is:

  • do some work
  • commit
  • push
  • repeat

On its own there aren't a lot of reasons to choose this approach over a file syncing service like Nextcloud. No, the main reason to do this is that you are either already familiar with the Git workflow, or want to get used to it.

Contributing

Git truly shines as soon as you start collaborating with others. On a high level this works like this:

  • You modify some files in a Git repository,
  • you propose your changes via the Git forge,
  • maintainers of the repository review your changes, and
  • as soon as they are happy with your changes, they will integrate your changes into the main branch of the repository.
Starting Out With a Fresh Branch

Let’s go over the exact commands.

  1. You will want to start out with the latest upstream changes in the default branch. You can find out its name by running the following command:

    git ls-remote --symref origin HEAD
  2. Chances are it displays either refs/heads/main or refs/heads/master. The last component is the branch, so the default branch will be called either main or master. Before you start a new branch, run the following two commands to make sure you start with the latest state of the repository:

    git switch <DEFAULT-BRANCH>
    git pull
  3. You switch and create a new branch with:

    git switch --create <BRANCH>

    That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

Open a Pull Request

The next step is to open a pull request on GitHub or merge request on GitLab. Even though they are named differently, they are exactly the same thing. Therefore, I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don’t necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their own forks and open pull requests based on these branches.

  1. If you don’t have push access to the repository, now it’s time to create your own fork.

  2. Then, you open the Pull Request; see the sketch below for both steps with the CLI tools.
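
With the CLI tools, both steps are a single command each; a sketch (flags and prompts vary by version):

    gh repo fork --remote   # GitHub: fork upstream and add your fork as a remote
    glab repo fork          # GitLab: fork upstream

    gh pr create            # GitHub: open the pull request from the current branch
    glab mr create          # GitLab: open the merge request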

Checking out Pull Requests

Often, you want to check out a pull request on your own machine to verify that it works as expected.
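
The CLI tools make this a one-liner, where the number is the pull/merge request's ID on the forge:

    gh pr checkout <NUMBER>     # GitHub
    glab mr checkout <NUMBER>   # GitLab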

Ubuntu Studio: Ubuntu Studio 24.10 Has Reached End-Of-Life (EOL)

Planet Ubuntu - Thu, 10/07/2025 - 2:00pm

As of July 10, 2025, all flavors of Ubuntu 24.10, including Ubuntu Studio 24.10, codenamed “Oracular Oriole”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 25.04 via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those between the Long-Term Support releases, are supported for 9 months; users are expected to upgrade after every release, with a 3-month buffer following each release.

Long-Term Support releases are identified by an even-numbered year of release and a month of release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five), meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

Stuart Langridge: Making a Discord activity with PHP

Planet Ubuntu - Tue, 08/07/2025 - 9:11am

Another post in what is slowly becoming a series, after describing how to make a Discord bot with PHP; today we're looking at how to make a Discord activity the same way.

An activity is simpler than a bot; Discord activities are basically a web page which loads in an iframe, and can do what it likes in there. You're supposed to use them for games and the like, but I suspect that it might be useful to do quite a few bot-like tasks with activities instead; they take up more of your screen while you're using them, but it's much, much easier to create a user-friendly experience with an activity than it is with a bot. The user interface for bots tends to look a lot like the command line, which appeals to nerds, but having to type !mybot -opt 1 -opt 2 is incomprehensible gibberish to real people. Build a little web UI, you know it makes sense.

Anyway, I have not yet actually published one of these activities, and I suspect that there is a whole bunch of complexity around that which I'm not going to get into yet. So this will get you up and running with a Discord activity that you can test, yourself. Making it available to others is step 2: keep an eye out for a post on that.

There are lots of "frameworks" out there for building Discord activities, most of which are all about "use React!" and "have this complicated build environment!" and "deploy a node.js server!", when all you actually need is an SPA web page[1], a JS library, a small PHP file, and that's it. No build step required, no deploying a node.js server, just host it in any web space that does PHP (i.e., all of them). Keep it simple, folks. Much nicer.

Step 1: set up a Discord app

To have an activity, it's gotta be tied to a Discord app. Get one of these as follows:

  • Create an application at discord.com/developers/applications. Call it whatever you want
  • Copy the "Application ID" from "General Information" and make a secrets.php file; add the application ID as $clientid = "whatever";
  • In "OAuth2", "Reset Secret" under Client Secret and store it in secrets.php as $clientsecret
  • In "OAuth2", "Add Redirect": this URL doesn't get used but there has to be one, so fill it in as some URL you like (http://127.0.0.1 works fine)
  • Get the URL of your activity web app (let's say it's https://myserver/myapp/). Under URL Mappings, add myserver/myapp (no https://) as the Root Mapping. This tells Discord where your activity is
  • Under Settings, tick Enable Activities. (Also tick "iOS" and "Android" if you want it to work in the phone app)
  • Under Installation > Install Link, copy the Discord Provided Link. Open it in a browser. This will switch to the Discord desktop app. Add this app to the server of your choice (not to everywhere), and choose the server you want to add it to
  • In the Discord desktop client, click the Activities button (it looks like a playstation controller, at the end of the message entry textbox). Your app should now be in "Apps in this Server". Choose it and say Launch. Confirm that you're happy to trust it because you're running it for the first time

And this will then launch your activity in a window in your Discord app. It won't do anything yet because you haven't written it, but it's now loading.

Step 2: write an activity
  • You'll need the Discord Embedded SDK JS library. Go off to jsdelivr and see the URL it wants you to use (at time of writing this is https://cdn.jsdelivr.net/npm/@discord/embedded-app-sdk@2.0.0/+esm but check). Download this URL to get a JS file, which you should call discordsdk.js. (Note: do not link to this directly. Discord activities can't download external resources without some semi-complex setup. Just download the JS file)
  • Now write the home page for your app -- index.php is likely to be ideal for this, because you need the client ID that you put in secrets.php. A very basic one, which works out who the user is, looks something like this:
<html>
<body>
I am an activity! You are <output id="username">...?</output>
<script type="module">
import {DiscordSDK} from './discordsdk.js';

const clientid = '<?php echo $clientid; ?>';

async function setup() {
    const discordSdk = new DiscordSDK(clientid);
    // Wait for READY payload from the discord client
    await discordSdk.ready();
    // Pop open the OAuth permission modal and request access to the scopes listed in the scope array below
    const {code} = await discordSdk.commands.authorize({
        client_id: clientid,
        response_type: 'code',
        state: '',
        prompt: 'none',
        scope: ['identify'],
    });
    const response = await fetch('/.proxy/token.php?code=' + code);
    const {access_token} = await response.json();
    const auth = await discordSdk.commands.authenticate({access_token});
    document.getElementById("username").textContent = auth.user.username;
    /* other properties you may find useful:
       server ID:  discordSdk.guildId
       user ID:    auth.user.id
       channel ID: discordSdk.channelId */
}
setup();
</script>
</body>
</html>

You will see that in the middle of this, we call token.php to get an access token from the code that discordSdk.commands.authorize gives you. While the URL is /.proxy/token.php, that's just a token.php file right next to index.php; the .proxy stuff is because Discord puts all your requests through their proxy, which is OK. So you need this file to exist. Following the Discord instructions for authenticating users with OAuth, it should look something like this:

<?php
require_once("secrets.php");
$postdata = http_build_query(
    array(
        "client_id" => $clientid,
        "client_secret" => $clientsecret,
        "grant_type" => "authorization_code",
        "code" => $_GET["code"]
    )
);
$opts = array('http' => array(
    'method' => 'POST',
    'header' => [
        'Content-Type: application/x-www-form-urlencoded',
        'User-Agent: mybot/1.00'
    ],
    'content' => $postdata,
    'ignore_errors' => true
));
$context = stream_context_create($opts);
$result_json = file_get_contents('https://discord.com/api/oauth2/token', false, $context);
if ($result_json == FALSE) {
    echo json_encode(array("error" => "no response"));
    die();
}
$result = json_decode($result_json, true);
if (!array_key_exists("access_token", $result)) {
    error_log("Got JSON response from /token without access_token $result_json");
    echo json_encode(array("error" => "no token"));
    die();
}
$access_token = $result["access_token"];
echo json_encode(array("access_token" => $access_token));

And... that's all. At this point, if you Launch your activity from Discord, it should load, and should work out who the running user is (and which channel and server they're in) and that's pretty much all you need. Hopefully that's a relatively simple way to get started.

  1. it's gotta be an SPA. Discord does not like it when the page navigates around

Stuart Langridge: A (limited) defence of footnotes

Planet Ubuntu - Thu, 03/07/2025 - 8:12am

So, Jake Archibald wrote that we should "give footnotes the boot", and... I do not wholly agree. So, here are some arguments against, or at least perpendicular to. Whether this is in grateful thanks of or cold-eyed revenge about him making me drink a limoncello and Red Bull last week can remain a mystery.

Commentary about footnotes on the web tends to boil down into two categories: that they're foot, and that they're notes. Everybody[1] agrees that being foot is a problem. Having a meaningless little symbol in some text which you then have to scroll down to the end of a document to understand is stupid. But, and here's the point, nobody does this. Unless a document on the web was straight up machine-converted from its prior life as a printed thing, any "footnotes" therein will have had some effort made to conceptually locate the content of the footnote inline with the text that it's footnoting. That might be a link which jumps you down to the bottom, or it might be placed at the side, or it might appear inline when clicked on, or it might appear in a popover, but the content of a "footnote" can be reached without your thread of attention being diverted from the point where you were previously[2].

He's right about the numbers[3] being meaningless, though, and that they're bad link text; the number "3" gives no indication of what's hidden behind it, and the analogy with "click here" as link text is a good one. We'll come back to this, but it is a correct objection.

What is a footnote, anyway?

The issue with footnotes being set off this way (that is: that they're notes) isn't, though, that it's bad (which it is), it's that the alternatives are worse, at least in some situations. A footnote is an extra bit of information which is relevant to what you're reading, but not important enough that you need to read it right now. That might be because it's explanatory (that is: it expands and enlarges on the main point being made, but isn't directly required), or because it's a reference (a citation, or a link out to where this information was found so it can be looked up later and to prove that the author didn't just make this up), or because it's commentary (where you don't want to disrupt the text that's written with additions inline, maybe because you didn't write it). Or, and this is important, because it's funnier to set it off like this. A footnote used this way is like the voice of the narrator in The Perils of Penelope Pitstop being funny about the situation. Look, I'll choose a random book from my bookshelf[4], Reaper Man by Terry Pratchett:

Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge.* She could have a revelation in a panful of frying bacon.

* It would say, for example, that you would shortly undergo a painful bowel movement.

This is done because it's funny. Alternatives... would not be funny.[5]

If this read:

Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge. (It would say, for example, that you would shortly undergo a painful bowel movement.) She could have a revelation in a panful of frying bacon.

then it's too distracting, isn't it? That's giving the thing too much prominence; it derails the point and then you have to get back on board after reading it. Similarly with making it a long note via <details> or making it a <section role="aside">, and Jake does make the point that those are for longer notes.

Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge.

Note: It would say, for example, that you would shortly undergo a painful bowel movement.

She could have a revelation in a panful of frying bacon.

Now, admittedly, half the reason Pratchett's footnotes are funny is because they're imitating the academic use. But the other half is that there is a place for that "voice of the narrator" to make snarky asides, and we don't really have a better way to do it.

Sometimes the parenthesis is the best way to do it. Look at the explanations of "explanatory", "reference", and "commentary" in the paragraph above about what a footnote is. They needed to be inline; the definition of what I mean by "explanatory" should be read along with the word, and you need to understand my definition to understand why I think it's important. It's directly relevant. So it's inline; you must not proceed without having read it. It's not a footnote. But that's not always the case; sometimes you want to expand on what's been written without requiring the reader to read that expansion in order to proceed. It's a help; an addition; something relevant but not too relevant. (I think this is behind the convention that footnotes are in smaller text, personally; it's a typographic convention that this represents the niggling or snarky or helpful "voice in your head", annotating the ongoing conversation. But I haven't backed this up with research or anything.)

What's the alternative?

See, this is the point. Assume for the moment that I'm right[6] and that there is some need for this type of annotation -- something which is important enough to be referenced here but not important enough that you must read it to proceed. How do we represent that in a document?

Jake's approaches are all reasonable in some situations. A note section (a "sidebar", I think newspaper people would call it?) works well for long clarifying paragraphs, little biographies of a person you've referenced, or whatever. If that content is less obviously relevant then hiding it behind a collapsed revealer triangle is even better. Short stuff which is that smidge more relevant gets promoted to be entirely inline and put in brackets. Stuff which is entirely reference material (citations, for example) doesn't really need to be in text in the document at all; don't footnote your point and then make a citation which links to the source, just link the text you wrote directly to the source. That certainly is a legacy of print media. There are annoying problems with most of the alternatives (a <details> can't go in a <p> even if inline, which is massively infuriating; sidenotes are great on wide screens but you still need to solve this problem on narrow, so they can't be the answer alone.) You can even put the footnote text in a tooltip as well, which helps people with mouse pointers or (maybe) keyboard navigation, and is what I do right here on this site.

But... if you've got a point which isn't important enough to be inline and isn't long enough to get its own box off to the side, then it's gotta go somewhere, and if that somewhere isn't "right there inline" then it's gotta be somewhere else, and... that's what a footnote is, right? Some text elsewhere that you link to.

We can certainly take advantage of being a digital document to display the annotation inline if the user chooses to (by clicking on it or similar), or to show a popover (which paper can't do). But if the text isn't displayed to you up front, then you have to click on something to show it, and that thing you click on must not itself be distracting. That means the thing you click on must be small, and not contentful. Numbers (or little symbols) are not too bad an approach, in that light. The technical issues here are dispensed with easily enough, as Lea Verou points out: yes, put a bigger hit target around your little undistracting numbers so they're not too hard to tap on, that's important.

But as Lea goes on to say, and Jake mentioned above... how do we deal with the idea that "3" needs to be both "small and undistracting" but also "give context so it's not just a meaningless number"? This is a real problem; pretty much by definition, if your "here is something which will show you extra information" marker gives you context about what that extra information is, then it's long enough that you actually have to read it to understand the context, and therefore it's distracting.[7] This isn't really a circle that can be squared: these two requirements are in opposition, and so a compromise is needed.

Lea makes the same point with "How to provide context without increasing prominence? Wrapping part of the text with a link could be a better anchor, but then how to distinguish from actual links? Perhaps we need a convention." And I agree. I think we need a convention for this. But... I think we've already got a convention, no? A little superscript number or symbol means "this is a marker for additional information, which you need to interact with[8] to get that additional information". Is it a perfect convention? No: the numbers are semantically meaningless. Is there a better convention? I'm not sure there is.

An end on't

So, Jake's right: a whole bunch of things that are currently presented on the web as "here's a little (maybe clickable) number, click it to jump to the end of the document to read a thing" could be presented much better with a little thought. We web authors could do better at this. But should footnotes go away? I don't think so. Once all the cases of things that should be done better are done better, there'll still be some left. I don't hate footnotes. I do hate limoncello and Red Bull, though.

  1. sensible
  2. for good implementations, anyway; if you make your footnotes a link down to the end of the document, and then don't provide a link back via either the footnote marker or by adding it to the end, then you are a bad web author and I condemn you to constantly find unpaired socks, forever
  3. or, ye gods and little fishes, a selection of mad typographic symbols which most people can't even type and need to be copied from the web search results for "that little squiggly section thingy"
  4. alright, I chose a random Terry Pratchett book to make the point, I admit; I'm not stupid. But it really was the closest one to hand; I didn't spend extra time looking for particularly good examples
  5. This is basically "explaining the joke", something which squashes all the humour out of it like grapes in a press. Sorry, PTerry.
  6. I always do
  7. I've seen people do footnote markers which are little words rather than numbers, and it's dead annoying. I get what they're trying to do, which is to solve this context problem, but it's worse
  8. you might 'interact' with this marker by clicking on it in a digital document, or by looking it up at the bottom of the page in a print doc, but it's all interaction

José Antonio Rey: 2025: Finding a job, and understanding the market

Planet Ubuntu - Wed, 25/06/2025 - 11:39pm

So, I've been in the job market for a bit over a year. I was part of a layoff cycle at my last company, and finding a new gig has been difficult. I haven't found something yet, but it's been a learning curve. The market is not what it was in the last couple of years. With AI in the mix, lots of roles have been eliminated, or have shifted toward places where human intervention is needed to interpret or verify the data AI produces. Job hunting is a job in and of itself, and may even take up a 9-to-5 schedule. I know a lot of people who have gone through the same process as myself, and I wanted to share some of the insights and tips from what I've learned throughout the last year.

Leveraging your network

First, and I think most important, is to understand that there are a lot of great people around that you might have worked with. You can always ask for recommendations, touch base, or even have a small chat to see how things are going on their end. Conversations can be very refreshing, and can help you get a new perspective on how the industries are shifting, where you might want to learn new skills, or how to improve your positioning in the market. Folks can ask around and see if there are additional positions where you might be a good fit, and it's always good to have a helping hand (or a few). At the end of the day, these folks are your own community. I've gotten roles in the past by being referred, and these connections have been critical for my understanding of how different businesses may approach the same problem, or even of how to solve internal conflicts. So, reach out to people you know!

Understanding the market

Like I mentioned in the opening paragraph, the market is evolving constantly. AI has taken a very solid role nowadays, and lots of companies ask about how you've used AI recently. Part of understanding the market is understanding the bleeding-edge tools that are used to improve workflows and day-to-day efficiency. Research the tools that are coming up and that are shaping the market.

To give you an example: haven't tried AI yet? Give it a spin, even for simple questions. Understand where it works, where it fails, and how you, as a human, can make it work for you. Get a sense of the pitfalls, and of where human intervention is needed to interpret or verify the data that's in there. Like one of my former managers said, "trust, but verify". Or you can even get to the point of not trusting the data, and share that as a story!

Apply thoughtfully

Someone gave me the recommendation to apply to everything where I "could be a fit". While this might have its upsides, you might also end up in situations where you are not actually a fit, or where you don't know the company and what it does. Always take the time, at least a few minutes, to understand the company you're applying to: research its values and how they align with yours. Read about the product it's creating, selling, or offering, and see if it's a product where you could contribute your skills. Then you can make the decision to apply. While doing this you may discover that you are applying to a position in a sector you're not interested in, or where your skillset would not be used to its full potential, and that you might be missing out on other opportunities that are significantly more aligned with you.

Also take the time to fully review the job description. JDs are pretty descriptive, and you might stumble upon certain details that don’t align with yourself, such as the salary, hours, location, or certain expectations that you might feel don’t fit within the role or that you are not ready for.

Prepare for your interviews

You landed an interview – congratulations! Make sure you've researched the company before heading in. If you took a look at the company and the role before applying, take another look. You might find more interesting things, and it will demonstrate that you are actually preparing yourself for the interview. Also, interviewing is a two-way street. Make sure you have some questions at the end. Double-check the role of your interviewer in the company, and ensure you have questions tailored to their particular role. Think about what you want to get from the interview (other than the job!).

Job sourcing

There are many great job sources today – LinkedIn being the biggest of them all. Throughout my searches I've also found that weworkremotely.com and hnhiring.com are great sources. I strongly advise that you expand your search and find sources relevant to your particular role or industry. This has opened up a lot of opportunities for me!

Take some time for yourself

I know that having a job is important. However, it’s also important to take time for yourself. Your mental health is important. You can use this time to develop some skills, play some games, take care of your garden, or even reorganize your home. Find a hobby and distract yourself every now and then. Take breaks, and ensure you’re not over-stressing yourself. Read a bit about burnout, and take care of yourself, as burnout can also happen from job hunting. And if you need a breather, make sure you take one, but don’t overdo it! Time is valuable, so it’s all about finding the right balance.

Hopefully this is helpful for some folks that are going through my same situation. What other things have worked for you? Do you have any other tips you could share? I’d be happy to read about them! Share them with me on LinkedIn. I’m also happy to chat – you can always find me at jose@ubuntu.com.
