Deploying web projects can seem intimidating at first, especially when working with a remote server like Linode. Recently, I decided to deploy a simple HTML project (index.html) on a Linode server using Nginx. Here’s a detailed account of the steps I took, the challenges I faced, and the solutions I applied.
Step 1: Accessing the Linode Server
The first step was to connect to my Linode server via SSH:
ssh root@<your-linode-ip>
Initially, I encountered a timeout issue, which reminded me to check network settings and ensure SSH access was enabled for my Linode instance. Once connected, I had access to the server terminal and could manage files and services.
Step 2: Preparing the Project
My project was simple—it only contained an index.html file. I uploaded it to the server under:
/var/www/hng13-stage0-devops
I verified the project folder structure with:
ls -l /var/www/hng13-stage0-devops
Since there was no public folder or PHP files, I knew I needed to adjust the Nginx configuration to serve directly from this folder.
Step 3: Setting Up Nginx
I opened the Nginx configuration for my site:
sudo nano /etc/nginx/sites-available/hng13
Initially, I mistakenly pointed root to a non-existent folder (public), which caused a 404 Not Found error. The correct configuration looked like this:
server {
    listen 80;
    server_name <your-linode-ip>;

    root /var/www/hng13-stage0-devops;  # points to the folder containing index.html
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
Step 4: Enabling the Site and Testing
After creating the configuration file, I enabled the site:
sudo ln -s /etc/nginx/sites-available/hng13 /etc/nginx/sites-enabled/
I also removed the default site to avoid conflicts:
sudo rm /etc/nginx/sites-enabled/default
Then I tested the configuration:
sudo nginx -t
If the syntax was OK, I reloaded Nginx:
sudo systemctl reload nginx
Step 5: Checking Permissions
Nginx must have access to the project files. I ensured the correct permissions:
sudo chown -R www-data:www-data /var/www/hng13-stage0-devops
sudo chmod -R 755 /var/www/hng13-stage0-devops
Step 6: Viewing the Site
Finally, I opened my browser and navigated to:
http://<your-linode-ip>
And there it was—my index.html page served perfectly via Nginx.
Challenges and Lessons Learned
Deploying my project was a learning experience. Even small mistakes like pointing to the wrong folder or placing directives in the wrong context can break the site—but step-by-step debugging and understanding the errors helped me fix everything quickly. This has kick-started my DevOps journey, and I truly loved the challenge.
Misc news about the gedit text editor, mid-October edition! (Some sections are a bit technical).
Rework of the file loading and saving (continued)
The refactoring continues in the libgedit-gtksourceview module, this time to tackle a big class that has too many responsibilities. A utility is in development that will allow delegating part of the work.
The utility is about character encoding conversion, with support for invalid bytes. It takes as input a single GBytes (the file content) and transforms it into a list of chunks. A chunk contains either valid (successfully converted) bytes or invalid bytes. The output format - the "list of chunks" - is subject to change to improve memory consumption and performance.
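To make the shape of that output more concrete, here is a rough sketch in GLib-style C. All names here are hypothetical illustrations, not the actual libgedit-gtksourceview API:

#include <glib.h>

/* Hypothetical sketch of the "list of chunks" output format. */
typedef struct
{
  GBytes   *bytes;    /* The chunk content. */
  gboolean  is_valid; /* TRUE: successfully converted bytes.
                       * FALSE: invalid input bytes, kept as-is. */
} Chunk;

/* Converts @input (the whole file content) from @from_charset to
 * @to_charset, returning a GSList of Chunk elements in document order. */
GSList *convert_to_chunks (GBytes      *input,
                           const gchar *from_charset,
                           const gchar *to_charset);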
Note that invalid bytes are allowed so that gedit can open truly any kind of file.
I must also note that this is quite sensitive work, at the heart of document loading for gedit. Hopefully all these refactorings and improvements will be worth it!
Progress in other modules
There has been some progress on other modules as well.
In addition to Liberapay, you can now support the work that I do on GitHub Sponsors. See the gedit donations page.
Thank you ❤️
Over the past few weeks, I’ve been working on improving some test code that I had written.
Refactoring time!
My first order of business was to refactor the test code. There was a lot of boilerplate, which made it difficult to add new tests and also created visual clutter.
For example, have a look at this test case:
static void
test_egg_ipuz (void)
{
  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 0)), ==, "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 1)), ==, "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 2)), ==, "EGGY");
}
That's an awful lot of code just to say: find the matches for the Across clue at index 2, and check that they are EGGS, EGGO, and EGGY.
And this was repeated in every test case, and would need to be repeated in every new test case I added. So, I knew that I had to refactor my code.
Fixtures and functions
My first step was to extract all of this setup code:
g_autoptr (WordList) word_list = NULL;
IpuzGrid *grid;
g_autofree IpuzClue *clue = NULL;
g_autoptr (WordArray) clue_matches = NULL;

word_list = get_broda_word_list ();
grid = create_grid (EGG_IPUZ_FILE_PATH);
clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
clue_matches = word_list_find_clue_matches (word_list, clue, grid);
To do this, I used a fixture:
typedef struct
{
  WordList *word_list;
  IpuzGrid *grid;
} Fixture;

static void
fixture_set_up (Fixture *fixture, gconstpointer user_data)
{
  const gchar *ipuz_file_path = (const gchar *) user_data;

  fixture->word_list = get_broda_word_list ();
  fixture->grid = create_grid (ipuz_file_path);
}

static void
fixture_tear_down (Fixture *fixture, gconstpointer user_data)
{
  g_object_unref (fixture->word_list);
}
My next step was to extract all of this assertion code:
g_assert_cmpint (word_array_len (clue_matches), ==, 3);
g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 0)), ==, "EGGS");
g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 1)), ==, "EGGO");
g_assert_cmpstr (word_list_get_indexed_word (word_list, word_array_index (clue_matches, 2)), ==, "EGGY");
To do this, I created a new function that runs word_list_find_clue_matches() and asserts that the result equals an expected_words parameter.
static void
test_clue_matches (WordList *word_list,
                   IpuzGrid *grid,
                   IpuzClueDirection clue_direction,
                   guint clue_index,
                   const gchar *expected_words[])
{
  const IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;
  g_autoptr (WordArray) expected_word_array = NULL;

  clue = get_clue (grid, clue_direction, clue_index);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);
  expected_word_array = str_array_to_word_array (expected_words, word_list);

  g_assert_true (word_array_equals (clue_matches, expected_word_array));
}
After all that, here's what my test case looked like:
static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  test_clue_matches (fixture->word_list,
                     fixture->grid,
                     IPUZ_CLUE_DIRECTION_ACROSS,
                     2,
                     (const gchar *[]) {"EGGS", "EGGO", "EGGY", NULL});
}
Much better!
Macro functions
But as great as that was, I knew that I could take it even further, with macro functions.
I created a macro function to simplify test case definitions:
#define ASSERT_CLUE_MATCHES(DIRECTION, INDEX, ...) \
  test_clue_matches (fixture->word_list, \
                     fixture->grid, \
                     DIRECTION, \
                     INDEX, \
                     (const gchar *[]) {__VA_ARGS__, NULL})
Now, test_egg_ipuz() looked like this:
static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  ASSERT_CLUE_MATCHES (IPUZ_CLUE_DIRECTION_ACROSS, 2, "EGGS", "EGGO", "EGGY");
}
I also made a macro function for the test case declarations:
#define ADD_IPUZ_TEST(test_name, file_name) \
  g_test_add ("/clue_matches/" #test_name, \
              Fixture, \
              "tests/clue-matches/" #file_name, \
              fixture_set_up, \
              test_name, \
              fixture_tear_down)
Which turned this:
g_test_add ("/clue_matches/test_egg_ipuz",
            Fixture,
            EGG_IPUZ,
            fixture_set_up,
            test_egg_ipuz,
            fixture_tear_down);
Into this:
ADD_IPUZ_TEST (test_egg_ipuz, egg.ipuz);
An unfortunate bug
So, picture this: You've just finished refactoring your test code. You add some finishing touches, do a final test run, look over the diff one last time…and everything seems good. So, you open up an MR and start working on other things.
But then, the unthinkable happens—the CI pipeline fails! And apparently, it’s due to a test failure? But you ran your tests locally, and everything worked just fine. (You run them again just to be sure, and yup, they still pass.) And what’s more, it’s only the Flatpak CI tests that failed. The native CI tests succeeded.
So…what, then? What could be the cause of this? I mean, how do you even begin debugging a test failure that only happens in a particular CI job and nowhere else? Well, let’s just try running the CI pipeline again and see what happens. Maybe the problem will go away. Hopefully, the problem goes away.
…
Nope. Still fails.
…
Rats.
Well, I'll spare you the gory details of what it took to finally figure this one out. But the cause of the bug was me accidentally freeing an object that I should never have freed.
This meant that the corresponding memory segment could be—but, importantly, did not necessarily have to be—filled with garbage data. And this is why only the Flatpak job’s test run failed…well, at first, anyway. By changing around some of the test cases, I was able to get the native CI tests and local tests to fail. And this is what eventually clued me into the true nature of this bug.
So, after spending the better part of two weeks, here is the fix I ended up with:
@@ -94,7 +94,7 @@ test_clue_matches (WordList *word_list,
                    guint clue_index,
                    const gchar *expected_words[])
 {
-  g_autofree IpuzClue *clue = NULL;
+  const IpuzClue *clue = NULL;
   g_autoptr (WordArray) clue_matches = NULL;
   g_autoptr (WordArray) expected_word_array = NULL;
Recently I got around to tackling a long-standing issue for good. There were multiple attempts in the past 6 years to cache flatpak-builder artifacts with GitLab, but none had worked so far.
On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using GitLab's built-in cache or artifacts mechanisms results in a plain zip archive, which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally, the hardlinks/symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.
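As a rough sketch of that workaround (the archive name here is made up for illustration, not the actual template code), the job would do something like:

# Before the cache is saved: pack the directory, preserving xattrs and links.
tar --xattrs -cf flatpak-builder-cache.tar .flatpak-builder/

# After the cache is restored: unpack it again, restoring the xattrs.
tar --xattrs -xf flatpak-builder-cache.tar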
On the infrastructure side of things, we stumble once again into GitLab. When a cache or artifact is created, it's uploaded to the GitLab instance's storage so it can later be reused/redownloaded on any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it's a public GitLab instance where anyone can make requests against repositories, it gets out of hand fast.
A couple of weeks ago Bart pointed me to Flathub's workaround for this same problem. It comes down to making it someone else's problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI wrapper and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.
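For a flavour of what this looks like (the registry and tag are invented for the example, not the actual configuration), pushing and pulling a cache archive with ORAS is roughly:

# Wrap the cache tarball in an OCI artifact and push it to a public registry.
oras push ghcr.io/example/flatpak-builder-cache:main flatpak-builder-cache.tar

# In a later pipeline: fetch the artifact back before building.
oras pull ghcr.io/example/flatpak-builder-cache:main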
Now, when a pipeline runs against your default branch (assuming it's protected), it creates a cache artifact and uploads it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts, and check how much of it is still valid.
From some quick tests and numbers, GNOME Builder went from a ~16-minute build to 6 minutes on our x86_64 runners, while on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.
Unlike BuildStream, there is no Content Addressable Server, and flatpak-builder itself isn't aware of the artifacts we publish, nor can it associate them with the cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but it works well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs, and build modules from moving branches as late as possible.
If you are curious about the details, take a look at the related Merge Request in the templates repository and the follow-up commits.
Free Palestine
It has been a couple of years since I started working on a Rust library called oo7 as a Secret Service client implementation. The library ended up also having support for the per-app keyring of sandboxed apps using the Secret portal, with a seamless API for end-users that makes usage from the application side straightforward.
The project, with time, grew support for various components: a CLI tool, a Secret portal implementation, and, most recently, the oo7-daemon server implementation.
The last component was kickstarted by Dhanuka Warusadura, as we already had the foundation for that in the client library, especially the file backend reimplementation of gnome-keyring. The project is slowly progressing, but it is almost there!
The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure the very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.
That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.
Coverage Report
One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.
In order to get the coverage of a Rust project, you can use a project like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:
cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Html \
  --output-dir coverage
Except in our use case, it is slightly more complicated. The client library supports switching between native Rust cryptographic primitive crates and OpenSSL. We must ensure that both are tested.
For that, we can export our report in LCOV format for native crypto and do the same for OpenSSL, then combine the results using a tool like grcov.
mkdir -p coverage-raw

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,openssl_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info
and then combine the results with
cat coverage-raw/*.info > coverage-raw/combined.info

grcov coverage-raw/combined.info \
  --binary-path target/debug/ \
  --source-dir . \
  --output-type html \
  --output-path coverage \
  --branch \
  --ignore-not-existing \
  --ignore "**/portal/*" \
  --ignore "**/cli/*" \
  --ignore "**/tests/*" \
  --ignore "**/examples/*" \
  --ignore "**/target/*"
To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.
With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.
Testing
For the client side, most of the tests are straightforward to write; you just need to have a Secret Service implementation running on the D-Bus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism used in the spec to define a way for the user to be prompted for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skipped those tests.
For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:
let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();

let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();
Thanks to the design of the client library, we keep the low-level APIs under oo7::dbus::api, which allowed me to straightforwardly write a bunch of server-side tests already.
There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.
Don't overdo it
The coverage report is not meant to be driven to 100%. It's not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.
Till then, may your coverage never reach 100%.
Not as much was done in September as I wanted.
libopenraw
Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.
Added Nikon ZR and EOS C50.
ExifTool
Submitted some metadata updates to ExifTool, because it's nice to have, and also because libopenraw uses some of this data autogenerated: I have a Perl script to generate Rust code from it (it used to generate C++).
Niepce
Finally merged the develop branch with all the import dialog work, after having requested that it be removed from Damned Lies to not strain the translators, as there is a long way to go before we can freeze the strings.
Supporting cast
Among the packages I maintain/update on Flathub, LightZone is a digital photo editing application written in Java¹. Updating to the latest runtime 25.08 caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command line tool gdk-scale to output the value. See gdk-scale on GitLab. And another patch in the wrapper script.
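For a rough idea of what such a tool involves (a sketch only; the actual gdk-scale may be implemented differently), querying the monitor scale factor with GTK 3 looks like this:

/* Sketch of a gdk-scale-like tool: print the primary monitor's scale factor. */
#include <gtk/gtk.h>

int
main (int argc, char *argv[])
{
  gtk_init (&argc, &argv);

  GdkDisplay *display = gdk_display_get_default ();
  GdkMonitor *monitor = gdk_display_get_primary_monitor (display);

  if (monitor == NULL)
    monitor = gdk_display_get_monitor (display, 0);

  /* Prints e.g. "2" on a HiDPI screen, a value suitable for GDK_SCALE. */
  g_print ("%d\n", gdk_monitor_get_scale_factor (monitor));
  return 0;
}

A wrapper script could then export GDK_SCALE=$(gdk-scale) before launching the application.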
HiDPI support remains a mess across the board. FLTK just recently gained support for it (it's used by a few audio plugins).
¹ Don't try this at home.
A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was - which sandbox it ran in, what application it represented, or what permissions it should have.
The situation has improved considerably since then.
cgroup xattrs
Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.
We can change flatpak (or snap, or any other container engine) to create a cgroup for application instances it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information the compositor or D-Bus service might need.
Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.
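To illustrate the service-side lookup, here is a hedged sketch in C. The xattr key user.app_id is a made-up example rather than a standardized name, and the pid is assumed to be obtained alongside the SO_PEERPIDFD pidfd (for instance via SO_PEERCRED):

#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Resolve the cgroup of the process behind @pidfd/@pid and read the
 * hypothetical user.app_id xattr attached to it. Returns 0 on success. */
static int
app_id_from_pidfd (int pidfd, pid_t pid, char *app_id, size_t len)
{
  char path[64], line[512], cgroup[600] = "";
  FILE *f;

  /* With cgroup v2, /proc/<pid>/cgroup contains a single "0::/<path>" line. */
  snprintf (path, sizeof path, "/proc/%d/cgroup", (int) pid);
  if ((f = fopen (path, "r")) == NULL)
    return -1;
  while (fgets (line, sizeof line, f) != NULL)
    if (strncmp (line, "0::", 3) == 0)
      {
        line[strcspn (line, "\n")] = '\0';
        snprintf (cgroup, sizeof cgroup, "/sys/fs/cgroup%s", line + 3);
      }
  fclose (f);

  /* Race check: a readable pidfd means the process has exited, so the pid
   * (and the cgroup we just read) may already have been recycled. */
  struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
  if (poll (&pfd, 1, 0) != 0 || cgroup[0] == '\0')
    return -1;

  ssize_t n = getxattr (cgroup, "user.app_id", app_id, len - 1);
  if (n < 0)
    return -1;
  app_id[n] = '\0';
  return 0;
}

On the sandbox-engine side, attaching the metadata is a single setxattr() call (or setfattr -n user.app_id -v org.example.App on the instance's cgroup directory).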
Standardized Authentication
Remember the complexity from the original post? Services had to implement different lookup mechanisms for different sandbox technologies.
All of this goes away. Now there's a single path: get a pidfd for the client, look up its cgroup, and read the metadata xattrs attached to it.
This works the same way regardless of which sandbox engine launched the application.
A Kernel Feature, Not a systemd One
It's worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.
To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.
Replacing Socket-Per-App
The old approach - creating dedicated Wayland, D-Bus, etc. sockets for each app instance and attaching metadata to the service which gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it's more flexible: metadata can be updated after process creation if needed.
For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.
Another edition of LinuxDays took place in Prague last weekend – the country's largest Linux event, drawing more than 1200 attendees – and as every year we had a Fedora booth there, this time also representing CentOS.
I was really glad that Tomáš Hrčka helped me staff the booth. I'm focused on the desktop part of Fedora and don't follow the rest of the project in such detail. As a member of FESCo and the Fedora infra team, he has a great overview of what is going on in the project, and our knowledge complemented each other very well when answering visitors' questions. I'd also like to thank Adellaide Mikova, who helped us tremendously despite not being a technical person.
This year I took our heavy 4K HDR display and showcased HDR support in Fedora Linux, whose implementation was a multi-year effort for our team. I played HDR videos in two different video players (one that supports HDR and one that doesn't), so that people could see a difference, and explained what needed to be implemented to make it work.
Another highlight of our booth was the laptops that run Fedora exceptionally well: Slimbook and especially the Framework Laptop. Visitors were checking them out, and we spoke about how the Fedora community works with the vendors to make sure Fedora Linux runs flawlessly on their laptops.
We also got a lot of questions about CentOS. We met quite a few people who were surprised that CentOS still exists. We explained to them that it lives on in the form of CentOS Stream and tried to dispel some of the common misconceptions surrounding it.
Exhausting as it is, I really enjoy going to LinuxDays; it's a great opportunity to explain things and get direct feedback from the community.