Planet GNOME
https://planet.gnome.org/

Sebastian Wick: On the Usefulness of SO_PEERPIDFD

Thu, 21/09/2023 - 6:11pm

Kernel 6.5 added a few new pidfd APIs: SCM_PIDFD and SO_PEERPIDFD. The idea behind them is the same as SCM_CREDENTIALS and SO_PEERCRED respectively. The only difference is that the pidfd variants return not a plain, numerical PID but a file descriptor instead.

A plain PID is a small number of type pid_t that is incremented for each new process and wraps around when too many processes have been created. This PID is usually used to look up information about the process via files in /proc/$PID. While a process is looking up this information, it is possible that the process the PID initially referred to has terminated and a new process with the same PID has been created. The looked-up information is now incorrect, possibly resulting in a security vulnerability.

The pidfd, on the other hand, always refers to one process and can be queried about the state of that process. This allows one to look up information from /proc/$PID without the race mentioned earlier. The SO_PEERPIDFD functionality in particular is interesting because it allows a service to query the pidfd of a connected client.
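
For the curious, here is a minimal sketch of that query on the server side of an AF_UNIX connection (my example, not from the post; it assumes Linux 6.5+, and the fallback constant is taken from the kernel's asm-generic/socket.h, since libc headers may not define it yet):

#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_PEERPIDFD
#define SO_PEERPIDFD 77 /* assumption: value from <asm-generic/socket.h> */
#endif

/* Ask the kernel for a pidfd of the peer on a connected AF_UNIX socket.
 * Unlike a numeric PID, the returned fd keeps referring to the same
 * process even if its PID is later reused. Caller must close() it. */
int get_peer_pidfd (int conn_fd)
{
    int pidfd = -1;
    socklen_t len = sizeof (pidfd);

    if (getsockopt (conn_fd, SOL_SOCKET, SO_PEERPIDFD, &pidfd, &len) < 0) {
        perror ("getsockopt (SO_PEERPIDFD)");
        return -1;
    }
    return pidfd;
}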

Or so it seems. In the flatpak case, wayland compositors and D-Bus services also want to authenticate their clients, but they do not rely on this functionality. Instead, the preferred approach (implemented in wayland as the security-context protocol, and still under discussion for D-Bus as the org.freedesktop.DBus.Containers1 interface) is to create a new wayland or D-Bus socket for each application instance and make sure that those sockets are the only way to connect to the services (specifically, the “normal” host sockets must not be made available to the application instances). Flatpak is responsible for creating those sockets, attaching some metadata, and then mounting them into the sandbox. Currently the metadata is a triple: the sandboxing engine (flatpak, snap, …), the application id and the application instance. There are plans to extend this with further metadata.

The question is: why add all of this complexity to authenticate a process when the pidfd approach is so much easier? It turns out that the hard part is knowing what to do with the PID, i.e. where to look up the information that you need. For flatpak, /proc/$PID/root/.flatpak-info is a file that cannot be changed from processes in the sandbox and contains, among other things, the flatpak instance-id, which can be used to look up more data from $XDG_RUNTIME_DIR/.flatpak/$instance-id. For snap the whole process is very different, and other technologies like firejail have basically no way to do this lookup (don’t take my word on it).

(Aside: xdg-desktop-portal does look up flatpak information from /proc/$PID, and I just said it doesn’t use SO_PEERPIDFD, so why is this not broken? Flatpak makes sure there is a process in the sandbox that stays alive the entire time and acts as a proxy between the app and the D-Bus broker to enforce access control. The PIDs of all connections from inside the sandbox are thus the PID of the proxy. This is still technically racy, but it becomes much harder to pull off an attack. Implementing SO_PEERPIDFD support to get rid of the race entirely in xdg-desktop-portal would be nice.)

Implementing all kinds of different, subtle mechanisms to look up information from different sandbox engines you might not even know exist doesn’t scale, and that makes pidfd with SO_PEERPIDFD much less useful than one would expect in a lot of cases.

Matthias Clasen: Paths in GTK, part 2

Thu, 21/09/2023 - 12:44pm

In the first part of this series, we introduced the concept of paths and looked at how to create a GskPath. But there’s more to paths than that.

Path Points

Many interesting properties of paths can change as you move along the trajectory of the path. To query such properties, we first need a way to pin down the point on the path that we are interested in.

GTK has the GskPathPoint struct for this purpose, and provides a number of functions to obtain them, such as gsk_path_get_closest_point(), which lets you find the point on the path that is closest to a given point.

Once you have a GskPathPoint, you can query the properties of the path at that point. The most basic property is the position, but you can also get the tangent, the curvature, or the distance from the beginning of the path.
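
As a rough sketch of what this looks like in code (based on the GskPath API in GTK 4.13; treat the exact signatures as approximate):

#include <math.h>
#include <gtk/gtk.h>

static void
query_path_near (GskPath *path, float x, float y)
{
  graphene_point_t query = GRAPHENE_POINT_INIT (x, y);
  GskPathPoint point;
  float distance;

  /* Find the point on the path closest to (x, y), with no limit
   * on how far away it may be. */
  if (gsk_path_get_closest_point (path, &query, INFINITY, &point, &distance))
    {
      graphene_point_t position;
      graphene_vec2_t tangent;

      gsk_path_point_get_position (&point, path, &position);
      gsk_path_point_get_tangent (&point, path, GSK_PATH_DIRECTION_TO_END, &tangent);
      g_print ("closest point: (%f, %f), distance: %f\n",
               position.x, position.y, distance);
    }
}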

Input

Another interesting question when using paths in a user interface is:

Is the mouse pointer hovering over the path?

You need the answer to this question if you want to highlight a path that the pointer is over, or if you want to react to the user clicking a path.

For a filled path, GTK provides the answer with the gsk_path_in_fill() method.

For a stroked path, it is much more complicated to provide a 100% accurate answer (in particular if the stroke is using a dash pattern), but we can provide an approximate answer that is often good enough: a point is inside the stroke if the distance to the closest point on the path is less than half the line width.
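
That test is easy to express with the closest-point query from the previous section; a hypothetical helper (my sketch, not GTK API) might look like this:

static gboolean
point_on_stroke (GskPath *path, const graphene_point_t *point, float line_width)
{
  GskPathPoint closest;
  float distance;

  /* Only points within line_width / 2 of the path are reported, so a
   * TRUE result means the point lies inside the (undashed) stroke. */
  return gsk_path_get_closest_point (path, point, line_width / 2,
                                     &closest, &distance);
}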

Outlook

The next part of this series will look at rendering with paths.

Jussi Pakkanen: Circles do not exist

Tue, 19/09/2023 - 2:57pm

Many logos, drawings and other graphical designs have the following shape in them. What is this shape?

If you thought: "Ah-ha! I'm smart and read the title of this blog post so I know that this is most definitely not a circle."

Well, it is. Specifically, it is a raster image of a circle that I created with the Gimp just for this use.

However, almost every "circle" you can see in printed media (and most purely digital ones) is not, in fact, a circle. Why is this?

Since roughly the mid 80s, all "high quality" print jobs have been done either in PostScript or, nowadays almost exclusively, in PDF. They use the same basic drawing model, which does not have a primitive for circles (or circular arcs). The only primitives they have are straight line segments, rectangles and Bézier curves. None of these can be used to express a circle accurately. You can only approximate a circle, and the approximation is always slightly eccentric. The only way to get a proper circle is to use a raster image like the one above.
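
To put a number on "slightly eccentric" (standard Bézier geometry, not from the post itself): the usual trick draws a quarter circle of radius r as a single cubic segment whose control points sit at distance kr along the end tangents, with

\[ k = \tfrac{4}{3}\tan\tfrac{\pi}{8} = \tfrac{4(\sqrt{2}-1)}{3} \approx 0.5523, \]

and the resulting curve deviates from the true circle by at most about 0.027% of the radius. Invisible in print, but never exactly round.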

Does this matter in the real world?

For printing, probably not. Almost nobody can tell the difference between a real circle and one that has been approximated with a Bézier curve with just four points. Furthermore, the human vision system is a bit weird and perfect circles look vertically elongated. You have to make them non-circular for people to consider them properly circular.

But suppose you want to use one of these things:

This is a laser cutter that takes its "print job" as a PDF file and uses its vector drawing commands to drive the cutting head. This means that it is impossible to use it to produce a truly circular wheel. You'd need to attach the output to a lathe and sand it down to be round so it actually functions as a wheel rather than as a vibration source.

Again, one might ask whether this has any practical impact. For this case, again, probably not. But did you know that one of the use cases PDF is being considered for (and, based on Internet rumors, already being used for) is as an interchange format for CAD drawings? Now it suddenly starts mattering. If you have any component where getting a really accurate circular shape is vital (like pistons and their holes), suddenly all your components are slightly misshapen. Which would not be fun.

Extra bonus information

Even though it is impossible to construct a path that is perfectly circular, PDF does provide a way to draw a filled circle. Here is the relevant snippet from the PDF 2.0 spec, subsection 8.5.3.2:

If a subpath is degenerate (consists of a single-point closed path or of two or more points at the same coordinates), the S operator shall paint it only if round line caps have been specified, producing a filled circle centred at the single point.
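
In other words, a degenerate stroked subpath with round caps paints a disc whose diameter is the line width. A content stream exploiting this could look like the following sketch (illustrative, not taken from any real document):

1 J            % round line caps
20 w           % line width, i.e. the circle's diameter
100 100 m      % start a subpath at (100, 100)
100 100 l      % degenerate segment back to the same point
S              % stroke: per the clause above, paints a filled circle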

Who is willing to put money on the line that every PDF rendering implementation actually uses circles here rather than doing the simple thing of approximating them with Béziers?

Matthias Clasen: Paths in GTK

Tue, 19/09/2023 - 4:18am

It is no secret that we want to get rid of cairo as the drawing API in GTK, so we can move more of our drawing onto the GPU.

While people have found creative ways to draw things with render nodes, they don’t provide a comprehensive drawing API like Skia or, yes, cairo. Not a very satisfying state of affairs.

A few years ago, we started to investigate how to change this, by making paths available as first-class objects in GTK. This effort is finally starting to come to fruition, and you can see the first results in GTK 4.13.0.

Paths

So, what is a path? A rough definition could be:

A sequence of line segments or curves that may or may not be connected at their endpoints.

When we say curves, we specifically mean quadratic or cubic Bézier curves. Going beyond cairo, we also support rational quadratic Béziers (or, as Skia calls them, conics), since they let us model circles and rounded rectangles precisely.
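
As an aside (standard conic geometry, not GTK specifics): a rational quadratic Bézier with control points P0, P1, P2 and middle weight w traces

\[ C(t) = \frac{(1-t)^2 P_0 + 2w\,t(1-t)\,P_1 + t^2 P_2}{(1-t)^2 + 2w\,t(1-t) + t^2}, \]

and choosing w = cos(θ), where 2θ is the angular extent of the arc and P1 is the intersection of the end tangents, yields an exact circular arc (w = √2/2 for a quarter circle), something no plain polynomial Bézier can achieve.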

This picture shows a typical path, consisting of 4 curves and 2 lines, some of which are connected. As you can see, paths can be closed (like the 4 curves here) or open (like the 2 lines), with a start and an end point.

And how are paths useful for drawing? First, you can use a path to define an area (the part that’s inside the path) and fill it with a color, a gradient or some more complex content.

Alternatively, you can stroke the path with various properties such as line width, color or dash pattern.

Paths in GTK

The object that we use for paths in GTK is GskPath. It is a compact, immutable representation that is optimized for rendering. To create a GskPath, you need to use a GskPathBuilder, which has many convenience methods to create paths, either from individual curves or from predefined shapes.

This example creates a path that is a closed triangle:

builder = gsk_path_builder_new ();
gsk_path_builder_move_to (builder, 0, 50);
gsk_path_builder_line_to (builder, 100, 50);
gsk_path_builder_line_to (builder, 50, 0);
gsk_path_builder_close (builder);
path = gsk_path_builder_free_to_path (builder);

And this one creates a circular path with the given center and radius:

builder = gsk_path_builder_new ();
gsk_path_builder_add_circle (builder, center, radius);
path = gsk_path_builder_free_to_path (builder);

Outlook

In the next post, we’ll look at properties of paths, and how to query them.

Georges Basile Stavracas Neto: Extending the month to infinity

Fri, 15/09/2023 - 10:30pm

Greetings! It’s been a long time since my last article.

I’d like to share some recent developments in GNOME Calendar that got some people really excited: the infinitely scrolling month view.

The Now

Before GNOME 45, Calendar offered two views – week and month – as well as a sidebar with a list of events.

The headerbar offers controls to navigate back and forth in time. The effect of these controls depends on the current view: if you’re in the week view, going forward in time moves you a week ahead; in the month view, it moves you a month.

Both views have evolved to be strictly about their respective time ranges. That means the month view, for example, strictly deals with the current month. Days from other months that sneak into the view are not interactive, and it doesn’t show events on them. Fixing this has been a long-standing feature request in GNOME Calendar.

GNOME Calendar 44 (screenshot by @TheEvilSkeleton)

The week view doesn’t really suffer from the same problem, even though it has the same constraint, since weeks are not variable in length like months.

While this approach has served us well enough for more than a decade, it had significant usability issues. One of the primary goals of a calendaring application is to facilitate planning ahead. The static month view harmed the usability of the application, in particular during the last days of the month, since you could not see ahead without changing the month entirely.

Another shortcoming of the static month view was scrolling. Because the view was bound to a particular month, there was no way to transition between months smoothly. I eventually added mouse & touchpad scroll support to it, but it has been a source of bugs and confusion since the view abruptly switches after an apparently random amount of scrolling.

Overall, despite the constant efforts to improve it, it felt like we were hitting the limitations of the current design. To the drawing board we needed to go.

New possibilities with GTK4?

GTK4 introduces a family of data-oriented widgets (GtkListView, GtkGridView, GtkColumnView) that are able to handle virtually endless datasets, so we had a promising start for rethinking the month view. We could just stuff weeks into a GtkListView, or days into a GtkGridView, and be done with it, right?

Well, not quite.

There is a fundamental difference between datasets and timelines, which very directly informs the architecture of these GTK widgets: timelines are infinite, datasets are bounded.

No matter how many entries you add to a dataset, or how large you make that dataset, it will still have a countable number of items. Timelines, on the other hand, extend infinitely towards both the past and the future. ¹

This, by itself, directly affects the user interface we can present, and prevents us from using GtkListView, or pretty much anything that uses a GtkAdjustment as the underlying controller of position. I originally assumed that the number of workarounds needed to make the new month view work with adjustments would be manageable, but after some weeks of experimentation it became abundantly clear that this approach could not work without a massive number of hacks, at the cost of maintainability and my own sanity.

Eventually I bowed to fate, and wrote a completely custom layout for the new month view.

Extending the month to infinity

As it happens so often in computing, we don’t necessarily have to implement material infinity to give people the impression of infinity.

The ultimate goal here is to make the month view scroll smoothly between weeks. In principle, the simplest way to create the illusion of infinity is to show the current weeks, plus one or more offscreen rows above and below.

Due to the way calendars are stored by evolution-data-server, where data transfers happen over D-Bus, we have to be particularly careful to prefetch more events than are visible onscreen, and keep them cached. Scrolling still changes the date range in which the month view operates – every time a week is moved up or down, we need to discard events of weeks that went out of range, and request events of weeks that are now within range.

All these range changes need to be carefully managed so that we don’t hammer evolution-data-server with them.

Fundamentally, the architecture of the new month view is based on a row recycling mechanism. Conceptually, the month view is just a list of rows, each row representing a week, laid out vertically in the available width. It’s as if the month view were a partial circular buffer of time. Moving the bottom row to the top, and vice versa, during scroll allows the month view to keep a static number of row widgets allocated at all times.

After experimenting with different ranges and caching a bit, I’ve arrived at the wonderful number of 10 weeks of wiggle room on each side – so in addition to the 5 visible week rows, there are 10 weeks above them, and 10 weeks below them. This completely arbitrary value seems to cache just enough events for the transitions not to be noticeable.
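
To illustrate the mechanism, here is a conceptual sketch in C; the type and names are invented for this example and are not GNOME Calendar’s actual code:

#include <gtk/gtk.h>

#define N_ROWS 25 /* 5 visible week rows + 10 above + 10 below */

typedef struct {
  GtkWidget *rows[N_ROWS]; /* the only row widgets ever allocated */
  int        first;        /* index into rows[] of the topmost week */
  GDateTime *anchor;       /* date of the topmost cached week */
} MonthViewSketch;

static void
scroll_by_weeks (MonthViewSketch *self, int delta)
{
  GDateTime *new_anchor = g_date_time_add_weeks (self->anchor, delta);

  g_date_time_unref (self->anchor);
  self->anchor = new_anchor;

  /* Rotate the ring buffer: rows that fall off one end are reused at
   * the other end, instead of being destroyed and recreated. */
  self->first = ((self->first + delta) % N_ROWS + N_ROWS) % N_ROWS;

  /* The rows that wrapped around now represent weeks that just entered
   * the range; their cached events must be dropped and re-requested
   * from evolution-data-server. */
}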

The new month view can now anchor itself on any particular week, and is no longer limited to a single month. On months that take precisely 5 rows, you’ll be able to see events from the previous and next months as well – something that’s been requested over and over for a long time now.

Most of the expected features – dragging events around, creating events, etc – continue to work exactly as they used to.

Touching the limits of infinity

Of course, being able to fetch, cache, lay out, and render 25 weeks’ worth of events pushed Calendar’s backend to its limits. We had to make things fast, real fast, in order to keep up with e.g. scrolling at 60 FPS.

This was a good opportunity to revisit how the month view calculates the layout and overflow of each week. The layout algorithm had to be reworked almost from scratch, and I eventually figured out that it was still violating GTK4 layout rules during the layout phase – it was changing the visibility of events during layout, which causes a relayout inside a relayout. These days GTK4 has some tolerance for that, but after increasing the number of events by a factor of 5, not even GTK4 could be so complacent with Calendar.

Now we pre-calculate the event layout while building up the list of events, and keep it cached in a size-agnostic way. During layout, we only resolve overflow, and we do so by carefully changing the visibility of the child widgetry.

This not only enables the month view to actually render its contents, it also means we don’t do any significant calculation at layout time, which makes Calendar smoother overall. Nice.

While developing this new month view, every single potential threading issue in Calendar showed up as a nasty crash as well. Most of them should be fixed now.

This, at last, allows us to have this nice, beautiful, smooth Calendar app:

The new month view in all its glory (credits to @TheEvilSkeleton again)

A story of ups and downs

It is not a secret that calendars aren’t the most exciting applications humankind has ever conceived. In an era where we have to look at hands to detect whether a picture was generated by AI; an era where we watch the dusk of both the eleventh and the 45-billion-dollar Xs; an era where Linux has become a mainstream gaming platform, with all the problems that come with that; who on actual Earth gives a damn about a niche-of-a-niche calendar app?!

I do.

During the development of GNOME 45, I finally could squeeze some dry drops of free time to dedicate some love to Calendar again. It took some serious effort to make the new month view a reality, but looking at it now, I think it was worth it.

After the original maintainer and the main designer left the project and the community, around 2016, it felt a tad bit lonely. Motivation to continue working on it was eroding at a slow but constant pace. Calendar’s issue tracker was out of control. The IRC and Matrix channels were dead. Attempts to get some funding for Calendar development led nowhere – who would be willing to pay people to work on such an uninteresting, unmarketable component of the desktop?! – and of course, there was no reason for my day job to give me work time to dedicate to Calendar. The project was, by all measurements, dying. This was not healthy.

But lurking in the darkest corners of Canada, silent, and always on the watch, a night elf with personal investment in the project noticed the situation. And acted. Over the last couple of months, Jeff single-handedly tamed the issue tracker again; triaged and labeled every single issue; closed almost 300 issues; made everything actionable again; helped build up a roadmap; and, most importantly to me, offered a friendly hand and brought back the fun of developing an open source app.

It made me realize that, contrary to what I believed for too many years, and despite not being exactly what’s advertised as free software culture, it really is all about people. The whole thing. Just being there, talking and discussing and having fun and eventually doing some contribution – that’s the sweet spot of free software culture to me. The reason I’m still involved with other GNOME projects? The people. The reason Calendar didn’t just die? The people.

I’m pretty attached to Calendar, as it was the first GNOME project I contributed code to, and it makes me happy to see that the project is slowly getting back on track, and that a community is gathering around it once again.

Join us in making this nice calendaring app on Matrix!

¹ – GNOME Calendar is reasonably far from dealing with the granularity of Planck time!

Alice Mikhaylenko: Libadwaita 1.4

Fri, 15/09/2023 - 7:45pm
A few apps using libadwaita 1.4

It’s that time of year again, so let’s look at what’s new.

New Adaptive Widgets

I’ve already talked about them in my last blog post, so I won’t go into details this time.

Breakpoints

Libadwaita 1.4 introduces a breakpoint system, allowing you to change the UI in arbitrary ways depending on the window size. Breakpoints can be used with AdwWindow, AdwApplicationWindow, or with AdwBreakpointBin if you need more control.

Breakpoints can be used in a fully declarative way from UI files, for example:

<object class="AdwBreakpoint">
  <condition>max-width: 500sp</condition>
  <setter object="split-view" property="collapsed">True</setter>
</object>

As a tradeoff, you have to manually specify the window’s or bin’s minimum size and ensure its contents actually fit, same as you do on a small screen.

To help with that, GtkButton, GtkMenuButton, AdwSplitButton and AdwButtonContent now all include a :can-shrink property to enable text ellipsizing, while widgets like AdwBanner automatically enable it for their buttons in order to not get uncontrollably wide.

For breakpoint conditions one can use pixels (px), points (pt) or the new sp unit (scalable pixels, a name lifted from Android), which is equivalent to pixels at the default text scale but scales with it: 1sp is equivalent to 1.25px with Large Text enabled, and so on. To accommodate different text scale factors better, it is recommended to use sp whenever feasible.
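
The same breakpoint can also be set up from code; here is a sketch using the libadwaita 1.4 C API, assuming split_view and window are the widgets in question:

AdwBreakpoint *breakpoint;

/* Equivalent to the UI file snippet above: collapse the split view
 * once the window gets narrower than 500sp. */
breakpoint = adw_breakpoint_new (
    adw_breakpoint_condition_parse ("max-width: 500sp"));
adw_breakpoint_add_setters (breakpoint,
                            G_OBJECT (split_view), "collapsed", TRUE,
                            NULL);
adw_window_add_breakpoint (ADW_WINDOW (window), breakpoint);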

Navigation View

AdwNavigationView is an integrated widget implementing the browsing pattern, replacing AdwLeaflet with can-unfold=false. It provides a navigation stack that can be populated statically (e.g. from a UI file) or dynamically, and automatically provides gestures and shortcuts.

It also provides the navigation.push and navigation.pop actions, which allow pushing pages directly from a UI file:

<object class="AdwActionRow">
  <property name="title" translatable="yes">_Details</property>
  <property name="use-underline">True</property>
  <property name="activatable">True</property>
  <property name="action-name">navigation.push</property>
  <property name="action-target">"details"</property>
  <child>
    <object class="GtkImage">
      <property name="icon-name">go-next-symbolic</property>
      <property name="accessible-role">presentation</property>
    </object>
  </child>
</object>

To further simplify using it, AdwHeaderBar can automatically show the correct title for each navigation page, as well as a back button to pop the current page when appropriate.

Automatic back buttons also provide a context menu that allows popping multiple pages at once:

This still works with nested navigation views, as well as with navigation views combined with split views.

Split Views

While AdwNavigationView replaces the can-unfold=false case of AdwLeaflet, AdwNavigationSplitView replaces the other one.

It has two children: sidebar and content, and it displays them side by side. When the :collapsed property is set to TRUE, it literally turns into an AdwNavigationView. It doesn’t set it automatically though – you are supposed to do it from your breakpoints as needed.

It also provides a more sophisticated sizing for the sidebar, based on the percentage of the split view’s total width.

Meanwhile, AdwOverlaySplitView is similar, but instead of turning into a navigation view when collapsed, it overlays the sidebar over content, not unlike AdwFlap. As such, AdwFlap is what it replaces.

It has a few extra features compared to navigation split view, such as an ability to move the sidebar to the right and show or hide it even when not collapsed, but the two widgets have extremely similar API.

And, like with AdwNavigationView, AdwHeaderBar can integrate with split views: when put inside one, it will automatically hide redundant window buttons, so there’s no need to show or hide them manually like with AdwLeaflet or AdwFlap.

Toolbar View

The new split view styles really need flat header bars to work well. While we’ve had the .flat style class since libadwaita 1.0, in practice it’s quite limited, especially with scrolling content.

As such, there’s a new widget called AdwToolbarView. It contains a content widget and a number of top and bottom bars (for example, AdwHeaderBar, AdwTabBar, GtkSearchBar, GtkActionBar, or GtkBox with the .toolbar style class). Then it will automatically manage the correct styles for the toolbars, for example making them flat and managing undershoot shadows on scrolling content (though this can be changed using the :top-bar-style and :bottom-bar-style properties), as well as collapsing spacing between them:

It’s recommended to always use it instead of GtkBox when you have header bars or other toolbars, regardless of whether you’re using split views.

Deprecations

With breakpoints and the new widgets, a number of older widgets have been deprecated, namely AdwLeaflet, AdwFlap, AdwSqueezer and AdwViewSwitcherTitle, as well as the old subpage API in AdwPreferencesWindow and the .flat style class for header bars. Refer to the migration guide for how exactly to replace them.

List Rows

There have been a number of boxed list additions this cycle.

Switch Row

Joshua Lee added AdwSwitchRow – a simple AdwActionRow subclass containing a GtkSwitch. While it’s easy to implement manually, it’s a very common case and so it’s nice to have a shortcut.

Spin Row

Chris added AdwSpinRow – a list row with an embedded GtkSpinButton, similar to AdwEntryRow.

Property Row

While it’s not a widget, the new .property style class, also by Chris, can swap styles on AdwActionRow‘s title and subtitle to emphasize the latter. This can be useful when displaying, say, EXIF properties in an image viewer.

Misc Changes
  • Jamie added adw_about_window_new_from_appdata() to simplify creating about windows.
  • AdwClamp can now scale with the text scale factor, via the :unit property, including defaulting to the sp unit.
  • Yuri Izmer implemented search in AdwComboRow, matching GtkDropDown.

  • Maksym Hazevych added the AdwPreferencesPage:description property, allowing a description to be shown at the top of the page.
  • Corey Berla fixed another bunch of drag-n-drop issues to make sure it works as expected in Nautilus.
  • The way AdwTabOverview handles thumbnails has been significantly reworked to make it work better with WebKitWebView.
  • Xenia added the AdwToast:use-markup property to allow disabling markup in toasts (it’s enabled by default).
  • A lot of accessibility issues throughout different widgets have been fixed – special thanks goes to Lukáš Tyrychtr and Maximiliano.
  • Header bars and other toolbars are now white instead of darker grey in the light variant, while the previous grey is now used for sidebars instead. Header bars set as the GtkWindow titlebar now also have a shadow, same as when used in a toolbar view with top-bar-style=raised.
  • While default GTK dialogs cannot use the new widgets, they have been styled to look similar anyway.

As always, thanks to all the contributors who helped to make this release happen.

Sam Thursfield: Status update, 15/09/2023

Fri, 15/09/2023 - 1:06pm

Musically this has been a fun month. One of my favourite things about living in Galicia is that ska-punk never went out of fashion here and you can legitimately go to a festival by the sea and watch Ska-P. Unexpectedly brilliant and chaotic live show. I saw an interview recently where Angelo Moore of Fishbone was asked by a puppet what his favourite music is, and he answered: “I like … I like the Looney Tunes music”. Same energy.

I wrote already this month about my DIY media server and the openQA CLI tool. This post contains some brief thoughts about Nushell and then some lengthy thoughts about the future of the Web. Enjoy!

Nushell everywhere

I read a blog by serial shell innovator JT entitled “The case for Nushell”. I’ve been using Nushell for data-focused work for a while, and the post inspired me to make it my default shell in a few places.

Nushell is really comfortable to use these days; it’s very addictive the first time you construct a one-liner to pretty-print some JSON or XML, select the fields you want and output a table as Markdown that you can paste straight into a Gitlab issue. My only complaint is that the autocomplete isn’t quite as good as the Fish shell’s yet. (And that you can’t type rm -R… chown and chmod only accept -R, while rm only accepts a lower-case -r; how am I supposed to remember that, guys???)

I have a load of arcane Bash knowledge that I guess I’ll have to hang onto for a while yet, particularly as my job mostly involves SSH’ing into strange old machines from the 1990s. Perhaps I can try and contribute Nushell binaries that run on HP-UX and Solaris. (For the avoidance of doubt, that previous sentence is a joke).

Kagi Small Web

There’s a new search engine on the block called Kagi, which is marketed as a “premium search engine”: you pay $0.05 per search, and in return the results are ad-free.

I like this idea. I signed up for the free trial of 100 searches, and I haven’t got far through them.

It turns out most of the web searches I do are things I could search on a specific site if I wasn’t so lazy. For example I search “rust stdio” when I could go to the Rust documentation on my local machine and search there. Or I search for a programming problem when I could clearly just search StackOverflow itself. DuckDuckGo has made me lazy; adding a potential $0.05 cost to searches firstly makes you realize how few you actually need to do. Maybe this is a good thing.

Anyway, Kagi. They just launched something named Kagi Small Web, which is announced here:

Kagi Small Web offers a fresh approach by promoting recently published content from the “small web.” We gather new content, published within the last week, from a handpicked list of blogs and surface it in multiple ways:

  • Directly within Kagi search results for applicable queries (existing Kagi members do not need to do anything, this will be automatic)
  • Via the new Kagi Small Web website
  • Through the Kagi Small Web RSS feed
  • Via our Search API, where results are now part of the news enrichment API

Initially inspired by a vibrant discussion on Hacker News, we began our experiment in late July, highlighting blog posts from HN users within our search results. The positive feedback propelled the initiative forward. Today, our evolving concept boasts a curated list of nearly 6,000 genuine websites featuring people with a wide variety of interests.

When I first saw this, my mind initially jumped to the problematic parts. Who are these guys to suddenly define what the Small Web is, and define it as a club of some 6,000 websites chosen by Hacker News? All sites must be in English, so is the Web only for English speakers now?? More importantly, why is my site not on the list? Why wasn’t I consulted??

There’s also something very inspiring about the project. I try to follow the rule “something is better than nothing”, and this project is a pretty bold step forwards, which inspired a bunch of thoughts about the future of The Web.

Google Search is Dying

Since about 2000, when you think of the Web, you think of Google.

Google Search has been dying a slow, public death for about the last ten years. Google has been too big to innovate since the early 2010s (with one important exception, the Emoji Kitchen).

Google Search remained king until now for two reasons: one, their tech for turning hopelessly vague search queries into useful results was better than anyone’s in the industry, and two, as of 2023, almost nobody else can operate at the scale needed to index all of the text on the Web.

I guess there’s a third reason too, which is spending billions of $$$ to be the default search provider nearly everywhere, to the point that the USA is running an antitrust hearing against them, but let’s focus on the technical aspects.

The current fad for large language models is going to bring big changes to the Web, for better or worse. One of those is that “intent analysis” is suddenly much easier than it was. Note, I’m not talking about prompting an LLM with a question and presenting the randomly generated output as an answer. I’m talking about taking unstructured text, such as “trains to London”, and turning it into an actionable query. A 1990s-era search engine would throw away the “to” and return any website that contained “trains” and “London”. Google Search shows a table of live departure times for trains heading to London. (With some less useful things above and below, since this is the Google Search of 2023.)

A small LLM such as Vicuna can kinda just DO this stuff, not perfectly of course, but it’s an order of magnitude easier than a decade ago. Perhaps Google kept their own LLM research internal for so long for fear of losing exactly this edge? The “We have no moat” memo suggests fear.

On to the second thing: indexing all the content on the Web. LLMs don’t make this easier. They make it impossible.

It’s now so easy to generate human-like text on the Web using machines that it doesn’t make sense to index all the text on the Web any more. Much of it is already human-generated garbage aiming to game search ranking algorithms (see “A storefront for robots” for fun examples).

Very soon 99% of text on the web will be machine generated garbage. Welcome to the dark forest.

For a short time I was worried about this, but I think it’s a natural evolution of the Web. This is the end of the Olde World Wide Web. What comes next?

There is more than one Small Web

If you’ve read this far, firstly, thanks and well done; in 2023 it’s hard to read so many paragraphs in one go! I didn’t even put in a single video.

Let me share the insight I had on thinking over Kagi Small Web. Maybe it’s obvious and maybe it isn’t.

A search engine of 6,000 websites is small-scale enough that one person could conceivably run it.

Let’s go back a step. How do you deal with a Web that’s 99% machine-generated noise? I imagine Google will try to solve this by using language models to detect if the page was generated by a language model, triggering another fairly pointless technological arms race against the spammers who will be generating this stuff. This won’t work very well.

The only way for humans to make sense of the new Dark Forest Web is to have lists of websites, curated by humans, and to search through those when we want to find information.

If you’re old, you know that this isn’t a new idea. In fact, we kinda had all of this stuff in the form of web rings, link pages on other people’s websites, bookmarking services, link sites like Digg, Fark and Reddit, RSS feeds and feed readers. If you look at the Kagi Small Web reader site, it’s literally a web ring. It’s StumbleUpon. It’s Planet GNOME. But through the lens of 2023, it’s also something new.

So I’m not going to start reading Kagi’s small web, though it may be great. And I’m going to stop capitalising “small web”, because I think we’re going to curate millions of these collectively, in thousands of languages, in infinite online communities. We’re going to have open source tools for searching and archiving high quality online content. Who knows? Perhaps in 10 years we’ll have small web search tools integrated into GNOME.

Further Reading

This year, 2023, is the 25th Year of our Google, and The Verge are publishing a series of excellent articles looking forwards and backwards. I can recommend reading “The end of the Googleverse” as a starting point. Another great one: “Google and YouTube are trying to have it both ways with AI and copyright“.

Pratham Gupta: GSOC 2023 Final Report

Thu, 14/09/2023 - 2:54pm

This is the final report for my project. Here I will explain the approach we took to find anagrams.

GNOME Crosswords Editor

Although still under development, the Editor is an important part of the Crosswords application for GNOME. It allows us to create basic crosswords with grids and clues.

Project Information

My project adds anagram-search support to the Crosswords Editor. The data for the search comes from a word-list file, and my task is to search it for anagrams. The search needs to be fast enough that the user can set the input word and see the results displayed instantaneously (without any lag).

The word-list file

To understand my project, you need to know about the data file, i.e. the word-list file. The file is made up of 3 sections:

  1. WordList: Just a huge list of words.
  2. FilterFragments: A list of word indices that follow a particular pattern, e.g. A?? is a pattern and its list will have the indices of words that start with A and have a length of 3. Similarly, for the fragment ?B??, the list will have offsets of words that have B as their second letter and a length of 4.
  3. Index Section: A JSON block which stores indexes for the above sections.
Finding Anagrams

Approach 1

We simulate a trie to find anagrams of a word.

Trie to search for anagrams

Here, at each node we maintain a list of words that follow the particular pattern; for example, AT? will have words with a prefix of AT and a length of 3. For branches where the list of words becomes empty, we perform branch culling.

We were successful in finding anagrams for a word using this approach, but this method is a bit too slow to be used directly in a user interface. For a 12-letter word like ABBREVIATION, it took 0.3 seconds to find the anagrams. Thus we need a faster method.

Approach 2

Basic Idea

I will try to explain this with an example:

  • Two words: HEART and EARTH
  • Sort these words; we get AEHRT and AEHRT
  • Create a hash for both of them
  • Notice that they will have the same hash as the sorted words are same
  • We use this unique property to find anagrams, as sketched below.
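
A minimal sketch of that idea in C with GLib (the helper names are mine, not the actual Crosswords code):

#include <glib.h>
#include <stdlib.h>
#include <string.h>

static int
compare_chars (const void *a, const void *b)
{
  return *(const char *) a - *(const char *) b;
}

/* All anagrams produce the same key: HEART and EARTH both sort to
 * AEHRT and therefore hash to the same value. */
static guint
anagram_hash (const char *word)
{
  char *sorted = g_ascii_strup (word, -1);
  guint hash;

  qsort (sorted, strlen (sorted), 1, compare_chars);
  hash = g_str_hash (sorted);
  g_free (sorted);
  return hash;
}
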
Implementation

We solve this problem in two stages:

Stage 1: At compile time

Create a new section in the word-list file for anagram fragments; it has the following two parts:

  1. anagram word list: This is a list of words that have the same hash. We store a gushort (2 bytes) as the index of each word.
  2. anagram hash index: Every entry in the anagram word list has a corresponding entry in the anagram hash index section, where we store the hash (guint — 4 bytes), the offset of the list entry (guint — 4 bytes) and the length of the entry (gchar — 1 byte), for a total of 9 bytes per index record.

Stage 2: At run time

Here we search for anagrams using the data created in stage 1. The search goes like this:

  • Suppose we have the word “BAT”: we sort its letters and hash them; let’s call the generated hash H1 and the length of the word (i.e. 3) LE1.
  • Now, search for H1 in the anagram hash index section. This will be a binary search, as the section is always sorted, and thus very fast.
  • Once found, store the offset (the next 4 bytes), called O1, and the length (the next 1 byte), called L1. The offset points to the anagram indices stored in the anagram-word-list section and the length tells us the number of anagrams.
  • Go to O1 and read the next L1 * 2 bytes. Every 2 bytes here are the index of a word that we want.
  • We get the indices, let’s say I1 and I2; set up a Fragment list of length LE1, and read the words at indices I1 and I2. These are the required anagrams.

Thus we have found the anagrams and can now show them to the user.
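
For illustration, the binary search over those 9-byte index records could look roughly like this (the field layout follows the description above; the helper name and byte handling are assumptions):

#include <glib.h>
#include <string.h>

#define RECORD_SIZE 9 /* hash (4 bytes) + offset (4 bytes) + length (1 byte) */

static gboolean
find_anagram_entry (const guint8 *index, gsize n_records,
                    guint32 hash, guint32 *offset, guint8 *length)
{
  gsize lo = 0, hi = n_records;

  while (lo < hi) /* binary search: the section is sorted by hash */
    {
      gsize mid = lo + (hi - lo) / 2;
      const guint8 *record = index + mid * RECORD_SIZE;
      guint32 h;

      memcpy (&h, record, sizeof h);
      if (h == hash)
        {
          memcpy (offset, record + 4, sizeof *offset);
          *length = record[8];
          return TRUE;
        }
      if (h < hash)
        lo = mid + 1;
      else
        hi = mid;
    }
  return FALSE;
}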

What’s done

Code to write this data into the word-list file has been written and pushed to the main repository. The code creates the anagram-hash-index and anagram-word-list sections discussed above.

What’s left to do

We need to read the word-list file at run time, find the anagrams using the approach described above, and display them to the user.

Matthew Garrett: Reconstructing an invalid TPM event log

Wed, 13/09/2023 - 11:02pm
TPMs contain a set of registers ("Platform Configuration Registers", or PCRs) that are used to track what a system boots. Each time a new event is measured, a cryptographic hash representing that event is passed to the TPM. The TPM appends that hash to the existing value in the PCR, hashes that, and stores the final result in the PCR. This means that while the PCR's value depends on the precise sequence and value of the hashes presented to it, the PCR value alone doesn't tell you what those individual events were. Different PCRs are used to store different event types, but there are still more events than there are PCRs so we can't avoid this problem by simply storing each event separately.

This is solved using the event log. The event log is simply a record of each event, stored in RAM. The algorithm the TPM uses to calculate the PCR values is known, so we can reproduce that by simply taking the events from the event log and replaying the series of events that were passed to the TPM. If the final calculated value is the same as the value in the PCR, we know that the event log is accurate, which means we now know the value of each individual event and can make an appropriate judgement regarding its security.
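
As a sketch of what “replaying” means for a SHA-256 PCR bank (simplified: a real implementation parses the TCG event log format and handles multiple digest algorithms):

#include <openssl/sha.h>
#include <string.h>

#define DIGEST_LEN SHA256_DIGEST_LENGTH

/* The extend operation: new PCR value = SHA256(old value || event digest). */
static void
pcr_extend (unsigned char pcr[DIGEST_LEN],
            const unsigned char event_digest[DIGEST_LEN])
{
    unsigned char buf[2 * DIGEST_LEN];

    memcpy (buf, pcr, DIGEST_LEN);
    memcpy (buf + DIGEST_LEN, event_digest, DIGEST_LEN);
    SHA256 (buf, sizeof buf, pcr);
}

/* Replay every logged event in order, starting from the all-zeroes
 * initial value (PCR 0 on Boot Guard systems starts differently, as
 * footnote [1] explains), then compare with the real PCR value. */
static int
log_matches_pcr (const unsigned char (*event_digests)[DIGEST_LEN],
                 size_t n_events,
                 const unsigned char pcr_value[DIGEST_LEN])
{
    unsigned char pcr[DIGEST_LEN] = { 0 };

    for (size_t i = 0; i < n_events; i++)
        pcr_extend (pcr, event_digests[i]);

    return memcmp (pcr, pcr_value, DIGEST_LEN) == 0;
}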

If any value in the event log is invalid, we'll calculate a different PCR value and it won't match. This isn't terribly helpful - we know that at least one entry in the event log doesn't match what was passed to the TPM, but we don't know which entry. That means we can't trust any of the events associated with that PCR. If you're trying to make a security determination based on this, that's going to be a problem.

PCR 7 is used to track information about the secure boot policy on the system. It contains measurements of whether or not secure boot is enabled, and which keys are trusted and untrusted on the system in question. This is extremely helpful if you want to verify that a system booted with secure boot enabled before allowing it to do something security or safety critical. Unfortunately, if the device gives you an event log that doesn't replay correctly for PCR 7, you now have no idea what the security state of the system is.

We ran into that this week. Examination of the event log revealed an additional event other than the expected ones - a measurement accompanied by the string "Boot Guard Measured S-CRTM". Boot Guard is an Intel feature where the CPU verifies the firmware is signed with a trusted key before executing it, and measures information about the firmware in the process. Previously I'd only encountered this as a measurement into PCR 0, which is the PCR used to track information about the firmware itself. But it turns out that at least some versions of Boot Guard also measure information about the Boot Guard policy into PCR 7. The argument for this is that this is effectively part of the secure boot policy - having a measurement of the Boot Guard state tells you whether Boot Guard was enabled, which tells you whether or not the CPU verified a signature on your firmware before running it (as I wrote before, I think Boot Guard has user-hostile default behaviour, and that enforcing this on consumer devices is a bad idea).

But there's a problem here. The event log is created by the firmware, and the Boot Guard measurements occur before the firmware is executed. So how do we get a log that represents them? That one's fairly simple - the firmware simply re-calculates the same measurements that Boot Guard did and creates a log entry after the fact[1]. All good.

Except. What if the firmware screws up the calculation and comes up with a different answer? The entry in the event log will now not match what was sent to the TPM, and replaying will fail. And without knowing what the actual value should be, there's no way to fix this, which means there's no way to verify the contents of PCR 7 and determine whether or not secure boot was enabled.

But there's still a fundamental source of truth - the measurement that was sent to the TPM in the first place. Inspired by Henri Nurmi's work on sniffing Bitlocker encryption keys, I asked a coworker if we could sniff the TPM traffic during boot. The TPM on the board in question uses SPI, a simple bus that can have multiple devices connected to it. In this case the system flash and the TPM are on the same SPI bus, which made things easier. The board had a flash header for external reprogramming of the firmware in the event of failure, and all SPI traffic was visible through that header. Attaching a logic analyser to this header made it simple to generate a record of that. The only problem was that the chip select line on the header was attached to the firmware flash chip, not the TPM. This was worked around by simply telling the analysis software that it should invert the sense of the chip select line, ignoring all traffic that was bound for the flash and paying attention to all other traffic. This worked in this case since the only other device on the bus was the TPM, but would cause problems in the event of multiple devices on the bus all communicating.

With the aid of this analyser plugin, I was able to dump all the TPM traffic and could then search for writes that included the "0182" sequence that corresponds to the command code for a measurement event. This gave me a couple of accesses to the locality 3 registers, which was a strong indication that they were coming from the CPU rather than from the firmware. One was for PCR 0, and one was for PCR 7. This corresponded to the two Boot Guard events that we expected from the event log. The hash in the PCR 0 measurement was the same as the hash in the event log, but the hash in the PCR 7 measurement differed from the hash in the event log. Replacing the event log value with the value actually sent to the TPM resulted in the event log now replaying correctly, supporting the hypothesis that the firmware was failing to correctly reconstruct the event.

What now? The simple thing to do is for us to simply hard code this fixup, but longer term we'd like to figure out how to reconstruct the event so we can calculate the expected value ourselves. Unfortunately there doesn't seem to be any public documentation on this. Sigh.

[1] What stops firmware on a system with no Boot Guard faking those measurements? TPMs have a concept of "localities", effectively different privilege levels. When Boot Guard performs its initial measurement into PCR 0, it does so at locality 3, a locality that's only available to the CPU. This causes PCR 0 to be initialised to a different initial value, affecting the final PCR value. The firmware can't access locality 3, so can't perform an equivalent measurement, so can't fake the value.


Jo Shields: Building a NAS

Tue, 12/09/2023 - 11:33pm
The status quo

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo

That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality.

The logo for QNAP HappyGet 2 and Blizzard’s StarCraft 2 side by side

Additionally, the embedded Celeron processor in the NAS turned out to be an issue for some cases. It turns out, when playing back videos with subtitles, most Plex clients do not support subtitles properly – instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones – some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second.

The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days – digital rarities, or at least digital detritus I felt a real sense of loss at having to replace. This episode was caused by a ransomware targeting specific vulnerabilities in the QNAP OS, not an error on my part.

So, I decided to start planning a replacement with:

  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust

At the time, no consumer NAS offered everything (The Asustor FS6712X exists now, but didn’t when this project started), so I opted to go for a full DIY rather than an appliance – not the first time I’ve jumped between appliances and DIY for home storage.

Selecting the core of the system

There aren’t many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren’t actually compliant Mini-ITX size, they’re a proprietary “Deep Mini-ITX” with the regular screw holes, but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built in dual 10 gigabit ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It’s astonishingly well featured, just a shame it costs about $450 compared to a good consumer-grade Mini ITX AM4 board costing less than half that.

I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else looking into a very similar project to mine around the same timespan.

The next question was the CPU. An important feature of a system expected to run 24/7 is low power, and AM4 chips can consume as much as 130W under load, out of the box. At the other end, some models can require as little as 35W under load – the OEM-only “GE” suffix chips, which are readily found for import on eBay. In their “PRO” variant, they also support ECC (all non-G Ryzen chips support ECC, but only Pro G chips do). The top of the range 8 core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6 core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4 PC-3200 direct from Micron for under $50 a piece, that left only cooling as an unsolved problem to get a bootable test system.

The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non standard cooling layout of the board – instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate, the stock Intel 115x cooler attaches to the holes with push pins). As such every single cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild glue – with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which is basically any CPU cooler for more than a decade). I picked an oversized low profile Thermalright AXP120-X67 hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.

Thermalright AXP120-X67, AMD Ryzen 5 PRO 5650GE, ASRock Rack X570D4I-2T, all assembled and running on a flat surface

Testing up to this point

Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn’t the best I’ve ever used by a long shot, but it’s minimum viable and allowed me to configure the basics and boot from media entirely via a Web browser.

Memtest86 showing test progress, taken from IPMI remote control window

One sad discovery, however, which I’ve never seen documented before, concerns PCIe bifurcation.

With PCI Express, you have a number of “lanes” which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that’s 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it).

It’s possible, with motherboard and CPU support, to split PCIe groups up – for example an 8x slot could be split into two 4x slots (eg allowing two NVMe drives in an adapter card – NVMe drives these days all use 4x). However, with a “Cezanne” Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (ie used for four NVMe drives) – the most bifurcation it allows is 8x4x4x, which is useless in a NAS.

Screenshot of PCIe 16x slot bifurcation options in UEFI settings, taken from IPMI remote control window

As such, I had to abandon any ideas of the all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, for a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would have been nearly 10x as fast as SATA SSDs, but at least the SATA SSD route would still outperform any spinning rust choice on the market (including the fastest 10K RPM SAS drives).

Containing the core

The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it’s easy to turn an SFX power supply into ATX, and the worst result is you have less space taken up in your case, hardly the worst problem to have.

That said, on to picking a case. There’s only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here’s how close together the hotswap bay (right) and power supply (left) are:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

With actual cables connected, the cable clearance problem is even worse:

Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it’s garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25″-to-2.5″ hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25″ bay. This is no longer a served market – 5.25″ bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: The Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated so I opted to get that one – however it seems the global supply of “new old stock” fully dried up in the two weeks between me making a decision and placing an order – leaving only the Silverstone case.

Icy Dock have a selection of 8-bay 2.5″ SATA 5.25″ hot swap chassis choices in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B, to reduce cable clutter – it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn’t have any SATA ports on board; instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen “G” chips meant I wouldn’t be able to run all six bays successfully.

NAS build in Silverstone SUGO 14, mid build, panels removed Silverstone SUGO 14 from the front, with hot swap bay installed Actual storage for the storage server

My concept for the system always involved a fast boot/cache drive in the motherboard’s M.2 slot, non-redundant (just backups of the config if the worst were to happen), and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200–$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles).

So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200 each, or Samsung 870 QVO 8TB consumer drives, at about $375 each. I did spend a long time agonizing over the specification differences, the ZFS usage reports, and the expected lifetime endurance figures, but in reality, it came down to price – $1600 of expensive drives vs. $3200 of even more expensive drives. That’s 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I’m using about 5TB of the old NAS, so that’s a LOT of overhead for expansion.

Storage SSD loaded into hot swap sled Booting up

Bringing it all together is the OS. I wanted an “appliance” NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, decided on TrueNAS Scale (the beta of the 2023 release, based on Debian 12).

TrueNAS Dashboard screenshot in browser window

I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:

                    IOPS     Bandwidth
4k random writes    19.3k    75.6 MiB/s
4k random reads     36.1k    141 MiB/s
Sequential writes   –        2300 MiB/s
Sequential reads    –        3800 MiB/s

Results using fio parameters suggested by Huawei.

And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:

                    IOPS     Bandwidth
4k random writes    16k      ?
4k random reads     90k      ?
Sequential writes   –        280 MiB/s
Sequential reads    –        560 MiB/s

Numbers quoted by Intel SSD successors Solidigm.

Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:

                    IOPS     Bandwidth
4k random writes    430      1.7 MiB/s
4k random reads     8006     32 MiB/s
Sequential writes   –        311 MiB/s
Sequential reads    –        566 MiB/s

Performance seems pretty OK. There’s always going to be an overhead to RAID. I’ll settle for the 45x improvement on random writes vs. its predecessor, and the 4.5x improvement on random reads. The sequential write numbers are gonna be inflated by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance.

It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows.

And… that’s it! I built a NAS. I intend to add some fans and more RAM, but that’s the build. Total spent: about $3000, which sounds like an unreasonable amount, but it’s actually less than a comparable Synology DiskStation DS1823xs+, which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB of RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!

The final system, powered up

(Also posted on PCPartPicker)

Jussi Pakkanen: A logo for CapyPDF

Dje, 10/09/2023 - 12:02pd

The two most important things about any software project are its logo and mascot. Here is a proposal for both for CapyPDF.

As you can probably tell I'm not a professional artist, but you gotta start somewhere. The original idea was to have a capybara head which is wearing the PDF logo much like a bow tie around its ear. The gist of it should come across, though it did look much better inside my brain. The PDF squiggle logo is hard to mold to the desired shape.

The font is Nimbus Sans, which is one of the original PostScript Core Fonts. More precisely it is a freely licensed metrically compatible version of Helvetica. This combines open source with the history of PDF quite nicely.

Juan Pablo Ugarte: Cambalache 0.14.0 Released!

Pre, 08/09/2023 - 3:22pd

I am pleased to announce a new Cambalache version.

Cambalache is a new RAD tool for Gtk 4 and 3 with a clear MVC design and data model first philosophy.

Version 0.14.0 brings two new features, one of them not even originally supported by Glade.

Release Notes:
    • Add GMenu support
    • Add UI requirements edit support
    • Add Swedish translation. Anders Jonsson
    • Updated Italian translation. Lorenzo Capalbo
    • Show deprecated and not available warnings for Objects, properties and signals
    • Output minimum required library version instead of latest one
    • Fix output for templates with inline object properties
    • Various optimizations and bug fixes
    • Bump test coverage to 66%
Menu Support

The <menu> tag in GtkBuilder is the built-in support for the GMenuModel and GMenu classes defined in GIO.

All you need to do is create a “(menu)” model object…

and add items, sections or submenus to it.

People familiar with the GObject type system might have noticed that “(menu)” is not a valid GType name. This is intentional, and is used for built-in features like the <menu> tag or for defining external objects (“(external)”). It allows me to model everything in a generic way and have custom code to export it in the right, non-generic format.
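To make this concrete, here is a minimal GJS sketch (my own illustration, not Cambalache code; the action names such as app.open are hypothetical) that builds the same kind of structure using the GIO classes the <menu> tag maps to:

    import Gio from 'gi://Gio';

    // A top-level menu with a plain item, a section and a submenu,
    // mirroring the hierarchy the <menu> tag describes in GtkBuilder XML
    const menu = new Gio.Menu();
    menu.append('Open', 'app.open');

    const section = new Gio.Menu();
    section.append('Copy', 'app.copy');
    section.append('Paste', 'app.paste');
    menu.append_section(null, section);

    const submenu = new Gio.Menu();
    submenu.append('About', 'app.about');
    menu.append_submenu('Help', submenu);

This is the object hierarchy that Cambalache’s “(menu)” object models, and which it then exports in the proper <menu> format.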

UI Requirements

Just like Glade, Cambalache now lets you choose which library version you want to target.

A list of all libraries used in the UI file is available in the Requires tab of the object editor.

Here you will be able to select which target version you want to use in the UI file, and the application will warn you if an object, property or signal is not available in the version you selected.

You can also leave it blank and Cambalache will automatically use the minimum library required for your UI.

Where to get it?

As always, you can get the code on GitLab:

git clone https://gitlab.gnome.org/jpu/cambalache.git

or download it from Flathub:

flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install --user flathub ar.xjuan.Cambalache

Happy coding!

Sam Thursfield: Improvements to my helper tool for VM-based openQA testing

Enj, 07/09/2023 - 4:05md

It’s two years since I started looking into end-to-end testing of GNOME using openQA. While developing the end-to-end tests I find myself running tests locally on my machine a lot, and the experience was fiddly, so I wrote a simple helper tool named ssam_openqa to automate my workflow.

I chose to write ssam_openqa in Rust, and it’s now really fun to hack on; I somewhat gratuitously gave it an interactive frontend using the indicatif Rust library.

Here’s what it looks like to run the GNOME OS end-to-end tests with --frontend=interactive in the newly released 1.1.0rc2 release (video):

You can pause the tests while they run by pressing CTRL+C, or use the --pause-test or --pause-event options to pause on certain events. This lets you open a VNC viewer and access the VM itself, which makes debugging test failures much nicer.

I’m giving a couple more talks about end-to-end testing with openQA this year. This month at OSSEU 2023 in Bilbao, I’m filling in for my colleague James Thomas, talking about openQA in automotive projects. And in October, at XDC 2023 in A Coruña, I’ll be speaking about using openQA as a way to do end-to-end testing of graphical apps. See you there!

Sriyansh Shivam: GSoC 2023: Final Report

Mër, 06/09/2023 - 10:39md

Hello to everyone.

So this is the final report on the work I completed throughout the Google Summer of Code contribution period (May-September). There's a lot to share and discuss, but I'll try to keep this brief.

Mentors:

Sonny Piers and Andy Holmes

Project:

Make GNOME Platform demos for Workbench

About Workbench:

Workbench lets you experiment with GNOME technologies, no matter if tinkering for the first time or building and testing a GTK user interface.

Among other things, Workbench comes with

  • Live GTK/CSS preview

  • Library of 100+ examples

  • JavaScript, Rust and Vala support

  • Declarative user interface syntax

  • Autosave, sessions and projects

  • Code linter and formatter

  • Offline documentation

  • Terminal output

  • 1000+ icons

Tasks performed:

Created beginner-friendly and easy-to-understand examples/demos for all widgets of GTK 4.10 and Libadwaita 1.3 to help newcomers understand how to use them effectively.

Provided ready-to-use code snippets of the widgets/APIs covered, making it easier for developers to integrate them into their projects.

Covered GLib/GIO and libportal APIs and created relevant examples to help developers understand how to use them in their applications.

Created demos while taking UI and UX design concepts into account to showcase how to make aesthetically pleasing and functional user interfaces.

Covered GNOME HIG patterns to ensure that the examples and demos follow the GNOME Human Interface Guidelines, making them consistent with other GNOME applications and user-friendly.

Implemented Search function in Workbench

Further Tasks to be performed:
  • Port and integrate the existing implemented search function from a demo as a feature into Workbench.

  • Adding new demos with the forthcoming GTK and Adwaita releases.

  • Update existing demos as GJS, GIO, and GLib versions are updated and new features are added.

My Contributions:

I was amazed and thrilled to learn that I had written about 3800 lines of code in the previous few months. I have contributed the following:

CodeFind Feature:

Apart from the demo coverage, I also implemented a codefind feature as a library entry in Workbench.

https://github.com/sonnyp/Workbench/pull/364

  • Implemented a code-search feature similar to GNOME Text Editor

  • Highlights matched search results

  • Supports both case and non-case-sensitive search

  • Implemented move to next search and move to previous search functions

  • Keybindings for ease of access

Currently, this feature is implemented as a library entry in Workbench, and is planned to be integrated into the application proper shortly.
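For the curious, the core of such a search in a GtkSourceView-based editor looks roughly like the following GJS sketch. This is my own illustration assuming GtkSourceView 5, not the actual Workbench code:

    import GtkSource from 'gi://GtkSource?version=5';

    GtkSource.init();

    const buffer = new GtkSource.Buffer();
    buffer.set_text('Hello Workbench, hello world', -1);

    // SearchSettings holds the query and the case-sensitivity flag;
    // SearchContext performs the matching and highlights results
    const settings = new GtkSource.SearchSettings({
        search_text: 'hello',
        case_sensitive: false,
    });
    const context = new GtkSource.SearchContext({buffer, settings});

    // forward() returns [found, match_start, match_end, wrapped]
    const [found, start] = context.forward(buffer.get_start_iter());
    if (found)
        console.log(`First match at offset ${start.get_offset()}`);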

Going Beyond:

I was thrilled to have the opportunity to contribute to GJS (GNOME Javascript) because we completed what we had planned a week or two before the last week of the coding period, and my mentor allowed me to concentrate on something apart from the project.

  • Made console.log format GError correctly

    Before:

    gjs> try { imports.gi.Gio.File.new_for_path('/foo').load_contents(null) } catch (err) { console.error(err) }
    (gjs:68852): Gjs-Console-CRITICAL **: 21:22:41.833: { "stack": "@typein:7:47\n@<stdin>:1:42\n", "fileName": "typein", "lineNumber": 7, "columnNumber": 47 }

    After:

    gjs> try { imports.gi.Gio.File.new_for_path('/foo').load_contents(null) } catch (err) { console.error(err) }
    (gjs:68852): Gjs-Console-CRITICAL **: 21:22:41.833: Gio.IOErrorEnum: Error opening file /foo: No such file or directory
    @typein:9:47
    @<stdin>:1:42
  • Improved console.log output

    Before:

    gjs> console.log(new TextDecoder())
    Gjs-Console-Message: 21:38:35.467: { "encoding": "utf-8", "ignoreBOM": false, "fatal": false }

    After:

    gjs> console.log(new TextDecoder())
    Gjs-Console-Message: 21:38:35.467: TextDecoder { "encoding": "utf-8", "ignoreBOM": false, "fatal": false }

More information about this issue can be found here.

Post GSoC plans:
  • Dive deeper into GNOME technologies and contribute more, particularly in GJS (GNOME Javascript).

  • Continuing exploration of other Open-Source Communities.

  • Creating more complex projects using what I learned here.

  • Engaging with newcomers more regularly to assist them in breaking into open-source software development

Hardships during GSoC:
  • Keeping up with coding and schoolwork (yes, I attended normal, full-time classes for the majority of the coding period)

  • Keeping up with the time zone differences (my mentors and I are in completely different time zones)

  • Proactively learning and adjusting to new tasks. (Each week, we were given a new topic to demonstrate, ensuring continual and adaptable learning.)

My learnings:
  • Working as a team to resolve issues.

  • Communicating with team members to ensure there were no blockers

  • Iterative Development

  • Writing clean code

  • Remote software development practices

  • Documentation and technical writing skills

  • Work Discipline

  • Personal growth

Achievement:

I am thrilled to share that Workbench now has over 100 interactive demos!

THANK YOU!

GNOME, my mentors Sonny Piers and Andy Holmes, my teammates, and everyone who supported me during this period. This summer has been a roller coaster journey for me, from coding to documentation to an international sponsored trip; it’s been a blast. I’m looking forward to contributing more actively and learning to the best of my ability.

Jean-François Fortin Tam: Help us make GNOME Calendar rock-solid by expanding the test suite!

Mër, 06/09/2023 - 4:12md

GNOME Calendar 45 will be a groundbreaking release in terms of UX (more on that later?), performance, and to some extent, reliability (we’ve at least solved two complex crashers recently, including a submarine Cthulhu crasher heisenbug and its offspring)… and yet, I think this might be “just the beginning” of a new era. And a beginning… is a very delicate time.

If you’ve tried to use GNOME Calendar in the past decade or so, you’ve certainly encountered one of the many bugs related to timezones, daylight saving time, and all of that crazy stuff that would make Tom Scott curl up in a corner and cry. But it doesn’t have to be that way, and in fact, there is a way for anyone who knows a bit of C programming to help us build the tools to solve this mission-critical problem.

Today, I’d like to urge you to help in writing some automated tests.
The sooner our test suite can grow, the faster we can make GNOME Calendar rock-solid.

As I explain in this epic blocker ticket:

[…] There currently are some unit tests in GNOME Calendar, but not nearly enough, and we need to cover a lot more in order to fix the pile of timezone-related issues […]

We really need to use test-driven development here, otherwise these bugs will be a nightmare to fix and to keep fixed forever. Before we start fixing those bugs, we need as many unit tests as possible to cover spec compliance.

By helping write unit tests to ensure GNOME Calendar complies with standard calendaring specifications, your contribution will make a huge impact in GNOME Calendar’s future reliability, as an improved test suite will make it 100x easier for us to fix existing bugs while preventing introducing new bugs.

Doing this will also help us more confidently expand GNOME Calendar’s featureset to handle meetings planning functionality. […]

Why is this suddenly mission-critical now?

I believe a boosted test suite is now mission-critical because, in my logical ordering of the dependency chain for the new roadmap I devised in 2023, I have determined that timezone/DST/midnight reliability in the backend and views blocks almost everything left: you can’t expand the UI and UX featureset to encompass meetings & timezones management unless you have a rock-solid, reliable backend and views/representation that you can trust to begin with (why create nice UIs if the users won’t trust the resulting events/data/views?).

So we need to fix those ugly issues once and for all. And there’s pretty much only one way to do this kind of daunting bugfixing work efficiently and sustainably: with test-driven development.

Once we have those tests, I trust that we will be able to make the app pretty much bulletproof.

Having extensive automated test coverage will allow us to confidently and safely fix some of the first “fundamental” compliance bugs there, and I can bet that by doing so, we would incidentally solve tons of other issues at once—I wouldn’t be surprised if this allowed us to solve 50 issues or more. This will not only make a ton of existing users happy (and make GNOME Calendar viable for a ton more new users), but also make contributors happy, because we will be able to resolve and close a ton of tickets cluttering our view. It’s all part of a flywheel of sustainability here.

“How hard can it be?”

While writing such tests might not be easy for a beginner programmer who does not know the C language, it is a good “newcomer contributor” task for someone who already knows C programming and is looking to make a lasting quality impact on a popular FLOSS desktop productivity application that many people love.

Maybe you, or a friend or colleague, would be interested by this challenge (while existing GNOME Calendar contributors keep fixing bugs at the same time)? This blog post is retootable if you want to spread the word there, too.

Ready to help us improve reliability?

If the above sounds like an interesting way to help the project, read the detailed ticket for more details, join us on the cozy “#gnome-calendar:gnome.org” Matrix channel (mapped to the #gnome-calendar IRC channel on irc.libera.chat), and get started!

Pratham Gupta: GUADEC 2023 in Riga, Latvia

Hën, 04/09/2023 - 8:01md
My experience at GNOME Intern Lightning Talks

This summer I traveled to Riga to attend my first international conference, the GNOME Users And Developers European Conference (GUADEC) 2023, and true to its promise, the experience that unfolded was nothing short of amazing.

Well, it began with me missing the opening ceremony because I went to the wrong venue (happens in a new city), and then took a cab to the correct one. With a cup of coffee in one hand and an apple in the other, I entered the venue, got my ID card and started attending the talks.

The correct venue

The talks were great and covered various topics, most about technology, while some others were about society and GNOME’s role in it. My favourite talk of the day was about GNOME design. They showed some very interesting things like the GNOME mobile shell, new tiling, the new activities button, etc.

After the talks, it was time to explore the city, and what better way to do so than taking a walk. I went to Old Town Riga, passing by the most beautiful Daugava River, which flows through the centre of the city.

River view

The days that followed were even more exciting, involving meeting new people and sharing experiences and knowledge, with each interaction unveiling fresh perspectives on the history as well as the current workings of GNOME.

I don’t want to make this blog too long to read.

Here’s a mandatory pic with the standee.

Jonathan Blandford: Crosswords 0.3.11: Acrostic Panels

Hën, 04/09/2023 - 1:25pd

Long time, no release.

When I last blogged about GNOME Crosswords, I had a design plan to improve the editing API. It’s been a busy summer since then. The crosswords team rewrote large chunks of code to implement and use this new API:

crosswords: 146 files changed, 10545 insertions(+), 4915 deletions(-)
libipuz:     53 files changed,  8224 insertions(+),  961 deletions(-)

There’s now over 38KLOC between the two codebases — this is starting to look like a real application!

Editor and libpanel

The biggest change this cycle is the implementation of the new editing interface. I started changing the code five months ago, but it took a while to land. We now use libpanel from GNOME Builder to manage the information panels. Libpanel has a lot of the functionality I want, though unfortunately I’m fighting its geometry handling.

New grid editor panel
New clues editor panel

I really struggled getting the UI design for this to work, and I had a number of regrettable paths along the way. Fortunately, Niko agreed to help out with this, and showed up with some fabulous design work! I’m so much happier with the current approach, and am getting ready to implement more of his designs for the next cycle.

Behind the scenes, implementing this was a challenge. I blogged about those challenges previously, but in a nutshell, mutating a puzzle has a lot of side-effects which can leave you in an invalid state. For example, something simple like adding a block could completely change the numbering of the grid.

Federico and I fixed this by adding a number of functions to enforce heuristics and conventions. I also added ipuz_puzzle_fix_all(), which will produce a fully well-formed puzzle regardless of the state it starts in. It’s turned out to be a really nice design pattern.

I have now released the Editor as a separate application on Flathub. You can download it here.

Acrostics

Another major feature this release is Acrostic Puzzle support. Tanmay worked on that for his GSoC project and did fabulously (details in his blog post). The end result is gorgeous:

Animation of an Acrostic puzzle being solved

I had a great time mentoring Tanmay for the summer, and we’re already making plans on how to add acrostic support to the Editor.

GUADEC and GNOME Mobile

GUADEC in Latvia was a blast. Riga was fun, the countryside was surprisingly interesting, and the overall feel of the conference was lovely. As always, it was great to meet so many people, new and old.

A wild Goomba in Riga

GNOME superstar Martin came to my Crosswords BoF in Riga and got it running on an actual mobile device. The end result was… actually usable! It seems like all the work we did on adaptive sizing paid dividends. There are some rough corners (and Martin filed a number of bugs) but it worked surprisingly well out of the box.

I also got to meet with my other GSOC student Pratham. His work on anagrams will land next release, so we’ll have more to talk about then.

libipuz.org

The final thing worth mentioning is that I bought the libipuz.org domain for the library. Unfortunately, the main ipuz.org site has been down all summer, so I put a mirror of the ipuz spec up there so people can read it.

I also managed to reach Roy Leban by phone. He’s the original author of the ipuz spec, and was able to clarify some questions I had when interpreting it. This led me down a deep rabbit hole around the part of the spec regarding clue directions, as it required a substantial rewrite to avoid hardcoding Across/Down directions everywhere.

Finally, Pranjal showed up completely out of nowhere with an MR to add ipuz_puzzle_equal(). This was a tricky function to write, and one I’ve wanted for a really long time. This has been making all our tests so much better. Hero! Pranjal is interested in adding a sudoku loader/saver to libipuz — maybe there will be more in this space in the future.

Thanks
  • Niko, for massive help with the designs for the Editor
  • Tanmay, for Acrostic support
  • Martin, for testing Crosswords on mobile
  • Pranjal, for ipuz_puzzle_equal()
  • Federico, for testing fixes and overall support
  • Rosanna, for continued advice and crossword support
  • Pratham, for initial anagram support
  • Bart, for whatever magic he did to make libipuz.org work
  • The translators, for keeping us multi-lingual

Until next time!

Andy Holmes: Mentoring in Open Source

Dje, 03/09/2023 - 8:53md

This year, I was invited by Sonny Piers to be a co-mentor for the GNOME Foundation, working on platform demos for Workbench. I already contribute a lot of entry-level documentation and help a lot of contributors, so this felt like a good step in a direction I've been heading for a while.

# Internships

Together, Sonny and I mentored three interns; two by way of Google Summer of Code (GSoC) and a third for Outreachy. Both are international programs providing a great opportunity for newcomers, but differ in some important ways.

# Google Summer of Code

Google Summer of Code has been around for a very long time; I’ve been around long enough to remember when it started, and how it helped change the public image of open source. Google’s program is well developed, while at the same time being mostly hands-off, leaving most of the planning and direction to the mentors.

Something that did influence our choice of applicants was the requirement that interns be open source beginners, which I feel doesn't account for those who could benefit from more advanced mentoring. Of course, we had plenty of applicants and I have no regrets about our selection.

# Outreachy

Outreachy is a newer program and one that actually started as an initiative of the GNOME Foundation. It is focused on those that are underrepresented in the industry, and fostering a positive feedback loop to address this ongoing issue.

In contrast to Google's program, Outreachy is not limited to students or developers, which is important when you consider the barriers to education and lack of opportunity its applicants often face. The program also includes prompts and goals, both for mentors and mentees, that help make these internships more engaging.

The infrastructure for Outreachy is kind of unassuming at first, compared to Google's sleek and streamlined website. Once you're a few weeks into the program though, you realize they really are a lot more focused on genuine mentorship. Nothing about your interaction with the coordinators is impersonal or superficial.

Marina Zhurakhinskaya deserves special mention here, and even if you haven't heard the name before, you have heard the names of those whose lives she touched. Although I never had the honour of working with her myself, it's impossible to ignore how we all continue to benefit from her contributions.

# People

I should first thank Sonny for the opportunity to participate as a mentor for the GNOME Foundation. This is not something I've been able to do in a formal way since high school when I was allowed extra classes to assist younger students in the electronics program.

We've all been in the position where you really just need someone to answer the question, "Okay, but how does it actually work?". Being a part of the "Oh, now I get it!" moments is an unparalleled experience for me. I'm really grateful I had the chance to be involved in this way again.

# Akshay Warrier

Akshay is going to do great things in open source and I think he really gets community-driven software. He's one of those people that renews your excitement for open source.

Aside from the contributions he's made to Workbench as part of his internship, I was really excited to see him appear at our yearly GNOME Shell Extensions workshop. I know he had a fantastic time at GUADEC and the way he talked about it made it obvious how much importance he places on people.

I think we'll see more of Akshay and, one day, I think he could make a really great mentor himself.

# José Hunter

José comes up with some great ideas, and he's quick to jump in and get them implemented. Given the right circumstances, I think we might see him do some really cool things in the community.

He's blogged about privacy and the encroachment of corporate interests, and it's hard not to think this encouraged his involvement in open source. I think he has a natural impulse to form his own opinions about the technology he uses, and an honest interest in projects related to his hobbies, like libmanette.

I hope José sticks around after his internship is complete, because he has a personality that has served the community well in the past.

# Sriyansh Shivam

Sriyansh displayed an aptitude for development early on, but also a habit of responding to feedback in a really constructive and professional way. That's a hard thing to do sometimes, and he really has the drive for self-improvement.

We spent a fair amount of time working through a series of demos about the model-view-controller pattern together, and I really enjoyed that. These demos covered everything from GListModel and GtkListBox, to the newer view widgets like GtkColumnView. These are very popular in GTK4 applications, and often quite complex, so I'm quite proud to see the results of his hard work.
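For readers unfamiliar with the pattern, a minimal GJS sketch (my own, not one of the actual Workbench demos) of a GListModel driving a GtkListBox looks something like this:

    import GObject from 'gi://GObject';
    import Gio from 'gi://Gio';
    import Gtk from 'gi://Gtk?version=4.0';

    Gtk.init();

    // A trivial GObject item to store in the model
    const Item = GObject.registerClass({
        Properties: {
            'title': GObject.ParamSpec.string(
                'title', 'Title', 'Row title',
                GObject.ParamFlags.READWRITE, ''),
        },
    }, class Item extends GObject.Object {});

    // The model: Gio.ListStore is GIO's stock GListModel implementation
    const model = new Gio.ListStore({item_type: Item.$gtype});
    model.append(new Item({title: 'First'}));
    model.append(new Item({title: 'Second'}));

    // The view: one row per item, created on demand and kept in sync
    const list = new Gtk.ListBox();
    list.bind_model(model, item => new Gtk.Label({label: item.title}));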

I'm not sure we really had a chance to see his full potential, and I hope he gets the opportunity to take on a longer, more challenging project.

# The Future

I hope to have the time and opportunity to mentor again, and next time I would like to apply myself in a more thoughtful way. I write a lot of entry-level documentation for free software, but the feedback is far more asynchronous and has to be applied more statistically.

Having spent time with several mentees over an extended period of time was a good way to learn a lot. Sonny and I had some good conversations about what seemed to work in retrospect, which was very enlightening. I would really like to try mentoring again, but next time with a more developed strategy.

I will say that co-mentoring is definitely something we should encourage more as a community. It's difficult to be confident of your take on an interpersonal relationship, and having someone to balance that turned out to be invaluable to the mentorship.

Aryan Kaushik: GUADEC 2023 Experience

Sht, 02/09/2023 - 11:52md
Sveiki visiem! (Hello, everyone!)

In this blog, I'm pumped to share my experience attending GUADEC 2023 held in Riga, Latvia.

Let's start :)

During the conference, I presented a joint talk with Pooja Patel on "How to add 16.67% more users and contributors: A guide to creating accessible applications".

The talk was on Day 2 of the conference and we got quite nervous haha. We didn't join the walking tour and had to skip some of the most amazing talks that day.

Fortunately, the journey was made more streamlined by the extensive support of the staff team. I also want to thank Melissa for doing all the bookings and keeping up with our issues xD

The conference was not easy to attend in any way. Asmit (one of the organisers and a friend) and I decided pretty early on that we would volunteer for the GUADEC organising team. Unfortunately, a lot of stuff happened and I got delayed. Then I refrained from joining conferences for some time, but my friends convinced me in the end :D And then another downturn came when my exams got severely delayed and clashed completely with the conference. I wanted to volunteer there, but that crashed all the plans.

After a lot of planning and exploring options, I decided to skip two exams and attend the conference anyway, which meant taking the remaining exams the day after reaching India. Although I will have to take the skipped ones again, I don’t regret it even a bit. But it made the trip somewhat hectic, as the tension of the remaining exams hung over me throughout, which took a toll on my health :(

Anyway, let's proceed with the blog :D

The touchdown

After reaching Helsinki (the layover for our flight), I met GNOME’s GSoC’23 interns. It brought back memories of when I was one of them, taking my first flight. It was amazing to interact with them and share our love for open source.

We then took a flight to Riga. The sights from the plane were mesmerising, to say the least.

On reaching the hotel, all my friends surprised me; it was great to catch up with them and meet them again after GNOME Asia. They are the best ;)

Unfortunately, I didn't arrive in time for the introductory party, but oh well.

The first day of the summit

The first day was quite good; I got to attend the talks I was looking forward to, like “Accessibility update: the Python stack, Rust, and DBus” by Federico and “Oxidizing GTK” by Emmanuele (the memes, like the Elon one, were awesome xD). “Community building and best DEI practices in Free and Open Source Communities” by Anisa was great; I had the pleasure of meeting her during GNOME Asia, and it was the same energy and an amazing talk as expected.

During the break, I met many people, like the Ubuntu staff (Mauro, you were a delight to converse with). I met Matthias and Rosanna again, awesome as always, and finally met Melissa :D.

Then I attended “GNOME Design: State of the Union”, which was awesome, as I got a sneak peek into how GNOME is evolving and saw those beautiful designs. It amazes me how much GNOME has progressed.

After that, we attended “How GNOME Gets into Ubuntu” and the keynote “All the little things that keep open source going”, which, in keeping with the overall theme, were again great.

The second day of the summit

On the second day, I mostly attended online because of our talk. We practised it, tried to shake off our nervousness, and got ready.

And the moment finally came. Anisa and Caroline introduced and motivated us :) In the end, it went quite smoothly. There were things that could have been executed better, but in the end, what can’t?

Then we attended the lightning talks, and man, they were so fun. Probably one of the best moments of the conference. There I got to witness Melissa’s awesome stage-handling skills LOL. It was just indescribable.

I also conversed with Regina during the break and it was awesome to discuss various topics. She had some great viewpoints and insights regarding GNOME mentorship and contributor involvement.

After the talks, we explored the city a bit and enjoyed the mesmerising architecture and culture of Latvia.

The third day of the summit

Attending “Building Student Communities to Foster OSS” by Hrittik was quite a good experience; he also handed out some swag, which I’m using right now to plan this blog haha, so thanks for that.

Then came the AGM and the group photo session. The group photo was funny and amazing; I wish it had been recorded as well lol.

I certainly enjoyed the “Our logo sucks” lightning talk just after the “pants of thanks”. You need a certain level of courage to bash the logo in front of its creator at a conference organised by the same organisation lol. Jokes aside, it was quite on point and I can relate to most of the points mentioned. Even if nothing is implemented, I hope it stirs a discussion at least.

After that, we continued attending other talks and enjoyed the end of the conference.

Then we attended the GNOME dinner, and it was great! I met Rudolf and we conversed quite a lot; he was awesome and full of energy. I also met Georges Stavracas (feaneron), the person who killed the meme (iykyk). Unfortunately, due to my introversion, I wasn’t able to converse much with them, but yeah, it was great to finally meet. There were many more people there whom I wanted to ping, but dinner is probably not a good time to do so lol, unless you’re sitting together.

I also had a great time talking with Felipe, a person I had hoped to meet for quite a long time.

The BoFs

As a GNOME GSoC'22 intern, I was looking forward to the "GSoC + Outreachy internships with GNOME" BoF by Felipe, where we discussed various points which we hope to see implemented to enhance the contributor experience even more and strengthen our community further.

The Northern Riga tour

The tour was a great opportunity to check out Riga and interact with people.

I met Emmanuele there and finally decided to speak to him. He was as awesome as I imagined. Again, introversion got the better of me, but yeah, it is always great to meet people you look up to.

Meeting people

At last, I met many new people and got to learn a lot. I made new friends, got to meet people I look up to, and much more. Overall it was amazing. I wish I weren’t so introverted, but yeah, I’m slowly getting comfortable around new people :)

The End

Thanks to all the people for making the event so great. It was an experience like no other. I would also like to thank the GNOME Foundation for sponsoring the trip :)

I also got accepted as a GNOME Foundation member just recently, which is awesome :D.

Florian Müllner: Extensions in GNOME 45

Sht, 02/09/2023 - 3:20md

By now it is probably no longer news to many: GNOME Shell moved from GJS’ own custom imports system to standard JavaScript modules (ESM).

Imports? ESM?

JavaScript originated in web browsers to add a bit of interactivity to otherwise static pages. There was no need to split up small code snippets into multiple files, so the language did not provide a mechanism for that.

This did become an issue when people started writing bigger programs in JavaScript, so environments like node.js and GJS added their own import systems to organize code into multiple files. As a consequence, developers and tooling had a hard time transitioning from one environment to another.

That changed in 2015 when ECMAScript 6 standardized modules, resulting in a well-defined syntax supported by all major JavaScript engines. GJS has supported ES modules since 2021, but porting GNOME Shell was a much bigger task that had to be done all at once.

So? Why should I care?

Well, there is a teeny tiny drawback: Modules and legacy imports are incompatible in practice.

Modules are loaded differently than scripts, and some statements — namely import and export — are only valid in modules. That means that trying to import a module with the legacy system will result in a syntax error if the module uses one of those statements (about as likely as a pope being Catholic).

Modules also hide anything to the outside that isn’t explicitly exported. So while it is technically possible to import a script as a module, it is about as useful as importing an empty file.
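A generic sketch of plain ES modules (my own illustration, not GNOME Shell code) shows both problems at once:

    // helper.js: only the export is visible to importers
    const CACHE = new Map();            // module-private state

    export function lookup(key) {
        return CACHE.get(key);
    }

    // main.js
    import {lookup} from './helper.js';

    lookup('answer');                   // works
    // CACHE stays private to helper.js; and trying to load helper.js
    // with the legacy imports system would fail outright on `export`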

What does this mean for extensions?

Extensions that target older GNOME versions will not work in GNOME 45. Likewise, extensions that are adapted to work with GNOME 45 will not work in older versions.

You can still support more than one GNOME version, but you will have to upload different versions to extensions.gnome.org for pre- and post-45 support.

There is a porting guide with detailed information. The two most important changes (that will be enough for many extensions!) are:

  1. Use standard syntax to import modules from gnome-shell:

    import * as Main from 'resource:///org/gnome/shell/ui/main.js';

    Main.notify('Loaded!');
  2. Export a default class with enable() and disable() methods from your extension.js.

    You may want to extend the new Extension class that replaces the convenience API from the old ExtensionUtils module.

    import {Extension, gettext as _} from 'resource:///org/gnome/shell/extensions/extension.js';

    export default class MyTestExtension extends Extension {
        enable() {
            console.log(_('%s is now enabled').format(this.uuid));
        }

        disable() {
            console.log(_('%s is now disabled.').format(this.uuid));
        }
    }

Last but not least, you can always find friendly people on Matrix and Discourse who will be happy to help with any porting issues.

Summary
  • Moving from GJS’s custom import system to the industry-standard ECMAScript modules will cause every extension to break. The move, though, means we are following proper standards rather than home-grown ones, allowing greater compatibility with the JavaScript ecosystem.
  • Legacy imports are still supported on extensions.gnome.org, but you will need to upload separate pre- and post-GNOME 45 versions in order to support both LTS and regular distributions.

For GNOME extension developers:
There is an active extension community on Matrix and Discourse who can help you quickly port to the new import system.

You can test your extension by downloading the latest GNOME OS image and trying it there.

To the GNOME Community:
Please file bugs against your favorite extensions, or have a friendly conversation with their authors, so that we can help minimize the impact of this change. Ideally, you could help with the port and provide a pull or merge request to the maintainers.

Resources
