Planet GNOME
http://planet.gnome.org/

Jim Hall: Why the Linux console has sixteen colors (SeaGL)

Mon, 12/11/2018 - 9:00pm
At the 2018 Seattle GNU/Linux Conference after-party, I gave a lightning talk about why the Linux console has only sixteen colors. Lightning talks are short, fun topics. I enjoyed giving the lightning talk, and the audience seemed into it, too. So I thought I'd share my lightning talk here. These are my slides in PNG format, with notes added:
Also, my entire presentation is under the CC-BY:
When you bring up a terminal window, or boot Linux into plain text mode, maybe you've wondered why the Linux console only has sixteen colors. No matter how awesome your graphics card, you only get these sixteen colors for text:
You can have eight background colors, and sixteen foreground colors. But why is that?
Remember that Linux is a PC operating system, so you have to go back to the early days of the IBM PC. The rules are the same for any Unix terminal, though.

The origins go back to CGA, the Color/Graphics Adapter from the early PC-compatible computers. This was a step up from the plain monochrome displays; as the name implies, monochrome could only display black or white. CGA could display a limited range of colors.

CGA supported mixing red (R), green (G) and blue (B) colors. In its simplest form, RGB is either "on" or "off." In this case, you can mix the RGB colors in 2×2×2=8 ways. So RGB=100 is Red, and RGB=010 is Green, and RGB=001 is Blue. And you can mix colors, like RGB=011 is cyan. This simple table shows the binary and decimal representations of RGB:
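The table itself was an image in the slides; reconstructed, it reads:

  RGB  decimal  color
  000  0        black
  001  1        blue
  010  2        green
  011  3        cyan
  100  4        red
  101  5        magenta
  110  6        yellow
  111  7        white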
To double the number of colors, CGA added an extra bit called the "intensifier" bit. With the intensifier bit set, the red, green and blue colors would be set to their maximum values. Without the intensifier bit, each RGB value would be set to a "midrange" intensity. Let's represent that intensifier bit as an extra 1 or 0 in the binary color representation, as iRGB:
That means 0100 gives "red" and 1100 (with intensifier bit set) results in "bright red." Also, 0010 is "green" and 1010 is "bright green." And 0000 is "black," but 1000 is "bright black."

Oh wait, there's a problem. "Black" and "bright black" are the same color, because there's no RGB value to intensify.

But we can solve that! CGA actually implemented a modified iRGB definition, using two intermediate values, at about one-third and two-thirds intensity. Most "normal" mode (0–7) colors used values at the two-thirds intensity. Translating from "normal" mode to "bright" mode, convert zero values to the one-third intensity, and two-thirds values to full intensity.

With that, you can represent most of the colors in the rainbow: red, orange, yellow, green, blue, indigo, and violet. You can sort of fake indigo and violet with the different "blue" shades.

Oops, we don't have orange! But we can fix that by giving 0110 "yellow" a one-third green value, which turns the color into orange, although most people saw it as brown.

Here's another iteration of the color table, using 0x0 to 0xF for the color range, with 0x5 and 0xA as the one-third and two-thirds intensities, respectively:
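That table was an image too. As a stand-in, here is a small C sketch (mine, not from the talk) that derives all sixteen palette entries from the iRGB bits, including the brown special case:

#include <stdio.h>

int
main (void)
{
  for (int c = 0; c < 16; c++)
    {
      int i = (c >> 3) & 1;   /* the intensifier bit */
      /* normal colors use two-thirds (0xA) intensity; with the
       * intensifier set, zero becomes one-third (0x5) and
       * two-thirds becomes full (0xF) */
      int r = ((c >> 2) & 1) ? (i ? 0xF : 0xA) : (i ? 0x5 : 0x0);
      int g = ((c >> 1) & 1) ? (i ? 0xF : 0xA) : (i ? 0x5 : 0x0);
      int b = ((c >> 0) & 1) ? (i ? 0xF : 0xA) : (i ? 0x5 : 0x0);
      if (c == 6)
        g = 0x5;   /* the "brown hack": yellow gets one-third green */
      printf ("%2d: #%X%X%X\n", c, r, g, b);
    }
  return 0;
}

Running it prints, for example, 4: #A00 for red, 12: #F55 for bright red, and 6: #A50 for brown.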
And that's how the Linux console got sixteen text colors! That's also why you'll often see "brown" labeled "yellow" in some references, because it started out as plain "yellow" before the intensifier bit. Similarly, you may also see "gray" represented as "bright black," because "gray" is really "black" with the intensifier bit set.

So let's look at the bit patterns. You have four bits for the foreground color, 0000 black to 1111 bright white:
And you have three bits for the background color, from 000 black to 111 white:
But why not four bits for the background color? That's because the final bit is reserved for a special attribute. With this attribute set, your text could blink on and off. The "Blink" bit was encoded at the end of the foreground and background bit-pattern:
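To make the packing concrete, here is a small illustration of my own (the macro name is invented); the layout matches the description above, with the blink bit at the top of the byte:

/* attribute byte: [blink][bg R][bg G][bg B][i][fg R][fg G][fg B] */
#define TEXT_ATTR(blink, bg, fg) \
  ((unsigned char) (((blink) & 1) << 7 | ((bg) & 0x7) << 4 | ((fg) & 0xF)))

/* e.g. bright white (0xF) text on a blue (001) background, no blink */
unsigned char attr = TEXT_ATTR (0, 1, 15);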
That's a full byte! And that's why the Linux console has only sixteen colors; the Linux console inherits text mode colors from CGA, which encodes colors a full byte at a time.

It turns out the rules are the same for other Unix terminals, which also used eight bits to represent colors. But on other terminals, 0110 really was yellow, not orange or brown.

Jim Hall: Usability Testing in Open Source Software (SeaGL)

Mon, 12/11/2018 - 5:02pm
I recently attended the 2018 Seattle GNU/Linux Conference, where I gave a presentation about usability testing in open source software. I promised to share my presentation deck. Here are my slides in PNG format, with notes added:
Also, my entire presentation is under the CC-BY:
I've been involved in Free/open source software since 1993, but recently I developed an interest in usability testing in open source software. During a usability testing class in my Master's program in Scientific and Technical Communication (MS) I studied the usability of GNOME and Firefox. Later, I did a deeper examination of the usability of open source software, focusing on GNOME, as part of my Master's capstone. (“Usability Themes in Open Source Software,” 2014.)

Since then, I've joined the GNOME Design Team where I help with usability testing.

I also (sometimes) teach usability at the University of Minnesota. (CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software.)
I’ve worked with others on usability testing since then. I have mentored in Outreachy, formerly the Outreach Program for Women. Sanskriti, Gina, Renata, Ciarrai, and Diana were all interns in usability testing. Allan and Jakub from the GNOME Design Team co-mentored as advisers.
What do we mean when we talk about “usability”? You can find some formal definitions of usability that talk about Learnability, Efficiency, Memorability, Errors, and Satisfaction. But I find it helps to have a “walking around” definition of usability.

A great way to summarize usability is to remember that real people are busy people, and they just need to get their stuff done. So a program will have good usability if real people can do real tasks in a realistic amount of time.

User eXperience (UX) is technically not the same as usability. Where usability is about real people doing real tasks in a reasonable amount of time, UX is more about the emotional connection or emotional response the user has when using the software.

You can test usability in different ways. I find the formal usability test and prototype test work well. You can also indirectly examine usability, such as using an expert to do a heuristic evaluation, or using questionnaires. But really, nothing can replace watching a real person trying to use your software; you will learn a lot just by observing others.
People think it's hard to do usability testing, but it's actually easy to do a usability test on your own. You don’t need a fancy usability lab or any professional experience. You just need to want to make your program easier for other people to use.

If you’re starting from scratch, you really have three steps to do a formal usability test:

1. Consider who your users are. Write this down as a short paragraph for each kind of user of your software. Make it a realistic fiction. These are your Personas. With personas, you can make design decisions that always benefit the user. “If we change __ then that will make it easier for users like Jane.” “If we add __ then that will help people like Steve.”

2. For each persona, write a brief statement about why that user might use the software to do their tasks. There are different ways that a user might use the software, but just jot down one way. This is a Use Scenario. With scenarios, you can better understand the circumstances when people use the software.

3. Now take a step back and think about the personas and scenarios. Write down some realistic tasks that real people would do with the software. Make each one stand on its own. These are scenario tasks, and they make up your actual usability test. Where you write personas and scenarios in the third person (“__ does this…”), you should write scenario tasks in the second person (“you do this…”). Each scenario task should set up a brief context, then ask the tester to do something specific. For example:

You don’t have your glasses with you, so it’s hard to see the text on the screen. Make the text bigger so you can read it more easily.

The challenge in scenario tasks is not to accidentally give hints for what the tester should do. Avoid using the same words and phrases from menus. Don’t be too exact about what the tester should do; instead, describe the goal, and let the tester find their own path. Remember that there may be more than one way to do something.

The key in doing a usability test is to make it iterative. Do a usability test, analyze your results, then make changes to the design based on what you learned in the test. Then do another test. But how many testers do you need?
You don’t need many testers to do a usability test if you do it iteratively. Doing a usability test with five testers is enough to learn about the usability problems and make tweaks to the interface. At five testers, you’ve uncovered more than 80% of usability problems, assuming each tester can uncover about 31% of issues, which is typical.
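That 80% figure is simple arithmetic: if each tester independently uncovers about 31% of the problems, then five testers all miss a given problem with probability (1 − 0.31)^5 ≈ 0.16, so together they uncover roughly 84% of the problems. Ten testers only raise that to about 98%, for twice the effort.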

But you may need more testers for other kinds of usability tests. “Only five” works well for traditional/formal usability tests. For a prototype test, you might need more testers.

But five is enough for most tests.
If every tester can uncover about 31% of usability problems, then note what happens when you have one, five, and ten testers in a usability test. You can cover 31% with one tester. With more testers, you have overlap in some areas, but you cover more ground with each tester. At five testers, that’s pretty good coverage. At ten testers, you don’t have considerably better coverage, just more overlap.

I made this sample graphic to demonstrate. The single red square covers 31% of the grey square's area (in the same way a tester can usually uncover about 31% of the usability problems, if you've designed your test well). Compare five and ten testers. You don't get significantly more coverage at ten testers than at five testers. You get some extra coverage, and more overlap, but that's a lot of extra effort for not a lot of extra value. Five is really all you need.
Let me show you a usability test that I did. Actually, I did two of them. This was part of my work on my Master’s degree. My capstone was Usability Themes in Open Source Software. Hall, James. (2014). Usability Themes in Open Source Software. University of Minnesota.

I wrote up the results for each test as separate articles for Linux Journal: “The Usability of GNOME” (December, 2014) and “It’s about the user: Usability in open source software” (December, 2013).
I like to show results in a “heat map.” A heat map is just a convenient way to show test results. Scenario tasks are in rows and each tester is a separate column.

For each cell (a tester doing a task) I use a color to show how easy or how difficult that task was for the tester. I use this scale:

—Green if the tester easily completed the task. For example, if the tester seemed to know exactly what to do, what menu item to activate or which icon to click, you would code the task in green for that tester.

—Yellow if the tester experienced some (but not too much) difficulty in the task.

—Orange if the tester had some trouble in the task. For example, if the tester had to poke around the menus for a while to find the right option, or had to hunt through toolbars and selection lists to locate the appropriate icon, you would code the task in orange for that tester.

—Red if the tester experienced severe difficulty in completing the task.

—Black if the tester was unable to figure out how to complete the task, and gave up.

There are some “hot” rows here, which show tasks that were difficult for testers: setting the font and colors in gedit, and setting a bookmark in Nautilus. Searching for a file in Nautilus was a bit challenging, too. So my test recommended that the GNOME Design Team focus on these four tasks to make them easier to do.
This next one is the heat map from my capstone project.

Note that I tried to do a lot here. You need to be realistic about your time. Try for about an hour (that’s what I did), but make sure your testers have enough time. The gray “o” in each cell is where we didn’t have enough time to do that task.

You can see some “hot rows” here too: setting the font in gedit, and renaming a folder in Nautilus. And changing all instances of some words in gedit, and installing a program in Software, and maybe creating two notes in Notes.
Most of the interns did a traditional usability test. So that’s what Sanskriti did here:
Sanskriti did a usability test that was similar to mine, so we could measure changes. She had a slightly different color map here, using two tones for green. But you can see a few hot rows: changing the default colors in gedit, adding photos to an album in Photos, and setting a photo as a desktop wallpaper from Photos. Also some warm rows in creating notes in Notes, and creating a new album in Photos.
Gina was from my second cycle in Outreachy, and she did another traditional usability test:
You can see some hot rows in Gina's test: bookmarking a location in Nautilus, adding a special character (checkmark) using Characters and Evince, and saving the location (bookmark) in Evince. Also some warm rows: changing years in Calendar, and saving changes in Evince. Maybe searching for a file in Nautilus.

Gina did such great work that we co-authored an article in FOSS Force: "A Usability Study of GNOME" (March, 2016).
In the next cycle of Outreachy, we had three interns: Renata, Ciarrai and Diana. Renata did a traditional usability test:
In Renata’s heat map, you can see some hot rows: creating an album in Photos, adding a new calendar in Calendar, and connecting to an online account in Calendar. And maybe deleting a photo in Photos and setting a photo as a wallpaper image in Photos. Some issues in searching for a date in Calendar, and creating an event in Calendar.

See also our article in Linux Voice Magazine: "GNOME Usability Testing" (November, 2016, Issue 32).
Ciarrai did a prototype test for a future design change to the GNOME Settings application:
In the future Settings, the Design Team thought they’d have a list of categories down the side. Clicking on a category shows you the settings for that category. Here’s a mock-up for Wi-Fi in the new Settings. You can see the list of other categories down the left side:
Remember the “only five” slide from a while back? That’s only for traditional/formal usability tests. For a prototype test, we didn’t think five was enough, so Ciarrai did ten testers.

For Ciarrai’s heat map, we used slightly different colors because the testers weren’t actually using the software; they were pointing to a paper printout. Here, green indicates the tester knew exactly what to point to, and red indicates they pointed to the wrong one. For some tasks that had sub-panels, orange indicates they got to the first panel but failed to get to the second setting.

You can see some hot rows, indicating where people didn’t know what category would have the Settings option they were looking for: Monitor colors, and Screen lock time. Also Time zone, Default email client, and maybe Bluetooth and Mute notifications.
Other open source projects have adopted the same usability test methods to examine usability. Debian did a usability test of GNOME. Here’s their test:
They had more general “goals” for testers, called “missions.” Similar to scenario tasks, the missions had a broader goal that provided some flexibility for the tester, but they are not very different from scenario tasks.

You can see some hot rows here: temporary files and change default video program in Settings, and installing/removing packages in Package Management. Also some issues in creating a bookmark in Nautilus, and adding/removing other clocks in Settings.
If you want more information, please visit my blog or email me.
I hope this helps you to do usability testing on your own programs. Usability is not hard! Anyone can do it!

Richard Hughes: More fun with libxmlb

Mon, 12/11/2018 - 4:51pm

A few days ago I cut the 0.1.4 release of libxmlb, which is significant because it includes the last three features I needed in gnome-software to achieve the same search results as appstream-glib.

The first is something most users of database libraries will be familiar with: Bound variables. The idea is you prepare a query which is parsed into opcodes, and then at a later time you assign one of the ? opcode values to an actual integer or string. This is much faster as you do not have to re-parse the predicate, and also means you avoid failing in incomprehensible ways if the user searches for nonsense like ]@attr. Borrowing from SQL, the syntax should be familiar:

g_autoptr(XbQuery) query = xb_query_new (silo, "components/component/id[text()=?]/..", &error);
xb_query_bind_str (query, 0, "gimp.desktop", &error);

The second feature makes the caller jump through some hoops, but hoops that make things faster: Indexed queries. As might be apparent to some, libxmlb stores all the text in a big deduplicated string table after the tree structure is defined. That means if you do <component component="component">component</component> then we store just one string! When we actually set up an object to check a specific node for a predicate (for instance, text()='fubar') we actually do strcmp("fubar", "component") internally, which in most cases is very fast…

Unless you do it 10 million times…

Using indexed strings tells the XbMachine processing the predicate to first check if fubar exists in the string table, and if it doesn’t, the predicate can’t possibly match and is skipped. If it does exist, we know the integer position in the string table, and so when we compare the strings we can just check two uint32_t’s which is quite a lot faster, especially on ARM for some reason. In the case of fwupd, it is searching for a specific GUID when returning hardware results. Using an indexed query takes the per-device query time from 3.17ms to about 0.33ms – which if you have a large number of connected updatable devices makes a big difference to the user experience. As using the indexed queries can have a negative impact and requires extra code it is probably only useful in a handful of cases. In case you do need this feature, this is the code you would use:

xb_silo_query_build_index (silo, "component/id", NULL, &error); // the cdata
xb_silo_query_build_index (silo, "component", "type", &error); // the @type attr
g_autoptr(XbNode) n = xb_silo_query_first (silo, "component/id[text()=$'test.firmware']", &error);

The indexing is denoted by $'' rather than the normal pair of single quotes. If there is something more standard to denote this kind of thing, please let me know and I’ll switch to that instead.

The third feature is stemming, which means you can search for “gaming mouse” and still get results that mention games, game and Gaming. This is also how you can search for words like Kongreßstraße, which matches kongressstrasse. In an ideal world stemming would be computationally free, but if we are comparing millions of records, each call to libstemmer sure adds up. Adding the stem() XPath operator took a few minutes, but making it usable took up a whole weekend.

The query we wanted to run would be of the form id[text()~=stem('?')], but the stem() would be called millions of times on the very same string for each comparison. To fix this, and to make other XPath operators faster, I implemented an opcode rewriting optimisation pass in the XbMachine parser. This means if you call lower-case(text())==lower-case('GIMP.DESKTOP') we only call the UTF-8 strlower function N+1 times, rather than 2N times. For lower-case() the performance increase is slight, but for stem() it actually makes the feature usable in gnome-software. The opcode rewriting optimisation pass is kinda dumb in how it works (“let’s try all combinations!”), but works with all of the registered methods, and makes all existing queries faster for almost free.

One common question I’ve had is if libxmlb is supposed to obsolete appstream-glib, and the answer is “it depends”. If you’re creating or building AppStream metadata, or performing any AppStream-specific validation then stick to the appstream-glib or appstream-builder libraries. If you just want to read AppStream metadata you can use either, but if you can stomach a binary blob of rewritten metadata stored somewhere, libxmlb is going to be a couple of orders of magnitude faster and use a ton less memory.

If you’re thinking of using libxmlb in your project send me an email and I’m happy to add more documentation where required. At the moment libxmlb does everything I need for fwupd and gnome-software and so apart from bugfixes I think it’s basically “done”, which should make my manager somewhat happier. Comments welcome.

Michael Catanzaro: The GNOME (and WebKitGTK+) Networking Stack

Mon, 12/11/2018 - 5:52am

WebKit currently has four network backends:

  • CoreFoundation (used by macOS and iOS, and thus Safari)
  • CFNet (used by iTunes on Windows… I think only iTunes?)
  • cURL (used by most Windows applications, also PlayStation)
  • libsoup (used by WebKitGTK+ and WPE WebKit)

One guess which of those we’re going to be talking about in this post. Yeah, of course, libsoup! If you’re not familiar with libsoup, it’s the GNOME HTTP library. Why is it called libsoup? Because before it was an HTTP library, it was a SOAP library. And apparently somebody thought that when Mexican people say “soap,” it often sounds like “soup,” and also thought that this was somehow both funny and a good basis for naming a software library. You can’t make this stuff up.

Anyway, libsoup is built on top of GIO’s sockets APIs. Did you know that GIO has Object wrappers for BSD sockets? Well it does. If you fancy lower-level APIs, create a GSocket and have a field day with it. Want something a bit more convenient? Use GSocketClient to create a GSocketConnection connected to a GNetworkAddress. Pretty straightforward. Everything parallels normal BSD sockets, but the API is nice and modern and GObject, and that’s really all there is to know about it. So when you point WebKitGTK+ at an HTTP address, libsoup is using those APIs behind the scenes to handle connection establishment. (We’re glossing over details like “actually implementing HTTP” here. Trust me, libsoup does that too.)
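As a rough sketch of those APIs in action (my own example; example.org is just a placeholder host):

#include <gio/gio.h>

int
main (void)
{
  GError *error = NULL;
  GSocketClient *client = g_socket_client_new ();

  /* name resolution and connection establishment, the GObject
   * equivalent of socket() + connect() */
  GSocketConnection *conn =
    g_socket_client_connect_to_host (client, "example.org", 80, NULL, &error);
  if (conn == NULL)
    {
      g_printerr ("connect failed: %s\n", error->message);
      g_clear_error (&error);
      return 1;
    }

  /* a GSocketConnection is a GIOStream, so read and write away */
  g_object_unref (conn);
  g_object_unref (client);
  return 0;
}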

Things get more fun when you want to load an HTTPS address, since we have to add TLS to the picture, and we can’t have TLS code in GIO or GLib due to this little thing called “copyright law.” See, there are basically three major libraries used to implement TLS on Linux, and they all have problems:

  • OpenSSL is by far the most popular, but it’s, hm, shall we say technically non-spectacular. There are forks, but the forks have problems too (ask me about BoringSSL!), so forget about them. The copyright problem here is that the OpenSSL license is incompatible with the GPL. (Boring details: Red Hat waves away this problem by declaring OpenSSL a system library qualifying for the GPL’s system library exception. Debian has declared the opposite, so Red Hat’s choice doesn’t gain you anything if you care about Debian users. The OpenSSL developers are trying to relicense to the Apache license to fix this, but this process is taking forever, and the Apache license is still incompatible with GPLv2, so this would make it impossible to use GPLv2+ software except under the terms of GPLv3+. Yada yada details.) So if you are writing a library that needs to be used by GPL applications, like say GLib or libsoup or WebKit, then it would behoove you to not use OpenSSL.
  • GnuTLS is my favorite from a technical standpoint. Its license is LGPLv2+, which is unproblematic everywhere, but some of its dependencies are licensed LGPLv3+, and that’s uncomfortable for many embedded systems vendors, since LGPLv3+ contains some provisions that make it difficult to deny you your freedom to modify the LGPLv3+ software. So if you rely on embedded systems vendors to fund the development of your library, like say libsoup or WebKit, then you’re really going to want to avoid GnuTLS.
  • NSS is used by Firefox. I don’t know as much about it, because it’s not as popular. I get the impression that it’s more designed for the needs of Firefox than as a Linux system library, but it’s available, and it works, and it has no license problems.

So naturally GLib uses NSS to avoid the license issues of OpenSSL and GnuTLS, right?

Haha no, it uses a dynamically-loadable extension point system to allow you to pick your choice of OpenSSL or GnuTLS! (Support for NSS was started but never finished.) This is OK because embedded systems vendors don’t use GPL applications and have no problems with OpenSSL, while desktop Linux users don’t produce tivoized embedded systems and have no problems with LGPLv3. So if you’re using desktop Linux and point WebKitGTK+ at an HTTPS address, then GLib is going to load a GIO extension point called glib-networking, which implements all of GIO’s TLS APIs — notably GTlsConnection and GTlsCertificate — using GnuTLS. But if you’re building an embedded system, you simply don’t build or install glib-networking, and instead build a different GIO extension point called glib-openssl, and libsoup will create GTlsConnection and GTlsCertificate objects based on OpenSSL instead. Nice! And if you’re Centricular and you’re building GStreamer for Windows, you can use yet another GIO extension point, glib-schannel, for your native Windows TLS goodness, all hidden behind GTlsConnection so that GStreamer (or whatever application you’re writing) doesn’t have to know about SChannel or OpenSSL or GnuTLS or any of that sad complexity.

Now you know why the TLS extension point system exists in GIO. Software licenses! And you should not be surprised to learn that direct use of any of these crypto libraries is banned in libsoup and WebKit: we have to cater to both embedded system developers and to GPL-licensed applications. All TLS library use is hidden behind the GTlsConnection API, which is really quite nice to use because it inherits from GIOStream. You ask for a TLS connection, have it handed to you, and then read and write to it without having to deal with any of the crypto details.
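Continuing the little sketch from earlier (again my own illustration, not code from libsoup or WebKit), switching it to TLS is a single call, and the result is still just a GIOStream:

/* ask for TLS; GIO loads the installed extension point
 * (glib-networking or glib-openssl) to provide the GTlsConnection,
 * and we never touch the crypto library directly */
g_socket_client_set_tls (client, TRUE);
GSocketConnection *tls =
  g_socket_client_connect_to_host (client, "example.org", 443, NULL, &error);
if (tls != NULL)
  {
    GOutputStream *out = g_io_stream_get_output_stream (G_IO_STREAM (tls));
    g_output_stream_write_all (out, "HEAD / HTTP/1.0\r\n\r\n",
                               sizeof "HEAD / HTTP/1.0\r\n\r\n" - 1,
                               NULL, NULL, &error);
  }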

As a recap, the layering here is: WebKit -> libsoup -> GIO (GLib) -> glib-networking (or glib-openssl or glib-schannel).

So when Epiphany fails to load a webpage, and you’re looking at a TLS-related error, glib-networking is probably to blame. If it’s an HTTP-related error, the fault most likely lies in libsoup. Same for any other GNOME applications that are having connectivity troubles: they all use the same network stack. And there you have it!

P.S. The glib-openssl maintainers are helping merge glib-openssl into glib-networking, such that glib-networking will offer a choice of GnuTLS or OpenSSL and obsoleting glib-openssl. This is still a work in progress. glib-schannel will be next!

P.S.S. libcurl also gives you multiple choices of TLS backend, but makes you choose which at build time, whereas with GIO extension points it’s actually possible to choose at runtime from the selection of installed extension points. The libcurl approach is fine in theory, but creates some weird problems, e.g. different backends with different bugs are used on different distributions. On Fedora, it used to use NSS, but now uses OpenSSL, which is fine for Fedora, but would be a license problem elsewhere. Debian actually builds several different backends and gives you a choice, unlike everywhere else. I digress.

Jussi Pakkanen: Compile any C++ program 10× faster with this one weird trick!

Sun, 11/11/2018 - 11:09pm
tl/dr: Is it unity builds? Yes.
I would like to know more!

At work I have to compile a large code base from scratch fairly often. One of the components it has is a 3D graphics library. It takes around 2 minutes 15 seconds to compile using an 8 core i7. After a while I got bored with this and converted the system to use a unity build. In all simplicity, what that means is that if you have a target consisting of files foo.cpp, bar.cpp, baz.cpp etc. you create a cpp file with the following contents:
#include<foo.cpp>
#include<bar.cpp>
#include<baz.cpp>
Then you would tell the build system to build that instead of the individual files. With this method the compile time dropped down to 1m 50s, which does not seem like that much of a gain, but the compilation used only one CPU core. The remaining 7 are free for other work. If the project had 8 targets of roughly the same size, building them incrementally would take 18 minutes. With unity builds they would take the exact same 1m 50s assuming perfect parallelisation, which happens fairly often in practice.

Wait, what? How is this even?

The main reason that C++ compiles slowly has to do with headers. Merely including a few headers in the standard library brings in tens or hundreds of thousands of lines of code that must be parsed, verified, converted to an AST and code generated in every translation unit. This is extremely wasteful especially given that most of that work is not used but is instead thrown away.
With an Unity build every #include is processed only once regardless of how many times it is used in the component source files.
Basically this amounts to a caching problem, which is one of the two really hard problems in computer science in addition to naming things and off-by-one errors.

Why is this not used by everybody then?

There are several downsides and problems. You can't take any old codebase and compile it as a unity build. The first blocker is that things inside source files leak into other ones, since they are all textually included one after the other. For example if you have two files and each of them declares a static function with the same name, it will lead to name clashes and a compilation failure. Similarly things like using namespace std declarations leak from one file to another, causing havoc.
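For instance, here is a minimal illustration of the static-function clash (file names invented for the example):

// a.cpp
static int helper() { return 1; }

// b.cpp
static int helper() { return 2; }   // fine: a separate translation unit

// unity.cpp
#include "a.cpp"
#include "b.cpp"   // error: redefinition of 'helper'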
But perhaps the biggest problem is that every recompilation takes the same time. An incremental rebuild where one file has changed takes a few seconds or so, whereas a unity build takes the full 1m 50s every time. This is a major roadblock to iterative development and the main reason unity builds are not widely used.

A possible workflow with Meson

For simplicity let's assume that we have a project that builds and works with unity builds. Meson has an automatic unity build file generator that can be enabled by setting the value of the unity build option.
This solves the basic build problem but not the incremental one. However usually you'd develop only one target (be it a library, executable or module) and want to build only that one incrementally and everything else as a unity build. This can be done by editing the build definition of the target in question and adding an override option:
executable(..., override_options : ['unity=false'])
Once you are done you can remove the override from the build file to return everything back to normal.

How does this tie in with C++ modules?

Directly? Not in any way really. However one of the stated advantages of modules has always been faster build times. There are a few module implementations but there is very little public data on how they behave with real world codebases. During a CppCon presentation on modules Google's Chandler Carruth mentioned that in Google's code base modules resulted in 30% build time reduction.
It was not mentioned whether Google uses unity builds internally but they almost certainly don't (based on things such as this bug report on Bazel). If we assume that theirs is the fastest existing "classical" C++ build mechanism, which it probably is, the conclusion is that it is an order of magnitude slower than a unity build on the same source files. A similar performance gap would probably not be tolerated in any other part of the C++ ecosystem.
The shoemaker's children go barefoot.

Julian Sparber: Purism Fractal sponsorship

Sat, 10/11/2018 - 11:08am

I’m happy to announce that Purism agreed to sponsor my work on Fractal for the next couple of weeks. I will polish the room history and drastically improve the UX/UI around scrolling, loading messages etc. which will make Fractal feel much nicer. As part of this I will also clean up and refactor the current code. On my agenda is the following:

Smooth history loading

Loading old messages in the history is currently a bit jarring, because the scroll position isn’t preserved when new messages come in. I’d like to address this by loading messages outside the viewport, making it so that the user isn’t even aware that more messages are being loaded most of the time. This is a crucial part of why modern messaging apps feel so nice.

Faster message rendering

There are some inefficiencies in how messages are currently rendered, which make showing messages not as smooth as it could be. Fixing this could improve the experience of sending/receiving messages significantly.

 “New messages” behavior
  • Re-add a “New messages” divider, since it was lost as part of the big history refactor I recently completed
  • Scroll to last seen message when opening app instead of most recent message
  • Fix bugs in current behavior and make sure the divider always shows up
Add day label

Add a label with the day/date at the beginning of every new day, like other messaging apps do.

Bastian Ilsø Hougaard: Developer Center Initiative – Meeting Summary 10th November 2018

Sat, 10/11/2018 - 10:06am

The Developer Center Initiative is an attempt to reboot the developer center based on a new, modern platform. For more information, see my previous blog posts.

It’s been two months since the last Developer Center meeting. In light of that I called in for a meeting last week to get a status of things. We discussed three items:

  • Changing the current state of the developer center.
  • The need for a physical meetup to reinforce spirits and get bulk work started.
  • Pending bugs and feature merges in hotdoc relevant for the developer center.
Developer Center State

Thibault currently holds a branch for gnome-devel-docs. The branch contains the old GNOME Developer docs ported to markdown. To ensure that no duplicate work happens between gnome-devel-docs master and the branch, the next step is to announce to the relevant mailing lists that further contributions to the developer docs should happen in the gnome-devel-docs branch. Even better would be to have the branch pushed to master. The markdown port is not synchronized in any way with the mallard docs in master, so any changes to the mallard docs would require re-synchronization; that's why editing the ported markdown docs in the branch is a no-go for now.

Pushing the branch does imply that we initially lose translations, though, and most changes made to gnome-devel-docs seem to be translations these days, with a few exceptions (mostly grammar corrections). Thibault and Mathieu expressed interest in supporting translated docs in the future, but it is a substantial amount of work and low on the todo list.

We agreed that I should try to get in touch by e-mail to the relevant mailing lists (including translations) and to individuals who contributed to gnome-devel-docs recently to hear their opinion before we proceed.

Hotdoc status

On the hotdoc side, Mathieu and Thibault explained that they have pending work from the GStreamer docs waiting to land and be included in a new release. There is also an ongoing feature request to support flexboxes through markdown syntax, which would be nice to have if we want to align Thibault's branch more closely with Allan's mockup.

With help from Thibault and Mathieu I managed to get a local instance of hotdoc running. A few bugs were fixed along the way, and I plan to publish a getting-started guide for running Thibault's branch on your computer, which hopefully in turn can be used to improve the existing documentation on hotdoc.

Hackfest plans

For the past two months activity in this initiative has been running low, which signals to me that we need to meet together physically again. The meeting attendance this time was around 3-4 people. I am going to FOSDEM 2019, and if anyone else interested in the GNOME developer center's future is attending too, I'll be happy to meet up during the conference for a chat. If sufficiently many people show interest, we could also extend the conference by a day, sit down and have a look at the state of things by then.

Otherwise, if someone out there could help provide venue/office space sometime in spring, I can try to gather the group of people who have shown interest so far and get a proper hackfest going.

Tobias Mueller: Talking at PETCon2018 in Hamburg, Germany and OpenPGP Email Summit in Brussels, Belgium

Fri, 09/11/2018 - 10:06pm

Just like last year, I managed to be invited to the Privacy Enhancing Technologies Conference to talk about GNOME. First, Simone Fischer-Huebner from Karlstad University talked about her projects, which are on the edge of security, cryptography, and usability, which I find a fascinating area to be in. She presented outcomes of her Prismacloud project, which also involves fancy youtube videos…

I got to talk about how I believe GNOME is in a good position to make a safe and secure operating system. I presented some case studies and reported on the challenges that I see. For example, Simone mentioned in her talk that certain users don’t trust software if it is too simple. Security stuff must be hard, right?! So how do you measure the success of your security solution? Obviously you can test with users, but certain things are just very hard to get users for. For example, testing GNOME Keysign requires a user not only with a set-up MUA but also with a configured GnuPG. This is not easy to come by. The discussions were fruitful and I got sent a few references that might be useful in determining a way forward.

OpenPGP Email Summit

I also attended the OpenPGP Email Summit in Brussels a few weeks ago. It’s been a tiny event graciously hosted by a local company. Others have written reports, too, which are highly interesting to read.

It’s been an intense weekend with lots of chatting, thinking, and discussing. The sessions were organised in a bar-camp style manner. That is, someone proposed what to discuss about and the interested parties then came together. My interest was in visual security indication, as triggered by this story. Unfortunately, I was lured away by another interesting session about keyserver and GDPR compliance which ran in parallel.

For the plenary session, Holger Krekel reported on the current state of Delta.Chat. If you haven’t tried it yet, give it a go. It’s trying to provide an instant messaging interface with an email transport. I’ve used this for a while now and my experience is mixed. I still get the occasional email I cannot decrypt, and interop with my other MUA listening on the very same mailbox is hit and miss. Sometimes, the other MUA snatches the email before Delta.chat sees it, I think. Otherwise, I like the idea very much. Oh, and of course, it implements Autocrypt, so your clients automatically encrypt the messages.

Continuing the previous talk, Azul went on to talk about countermitm, an attempt to overcome Autocrypt 1.0‘s weaknesses. This is important work, because without the vision of how to go from Autocrypt Level 1 to Level 2, you may very well question its usefulness. As of now, Emails are encrypted along their way (well, assuming MTA-STS), and if you care about not storing plain text messages in your mailbox, you could encrypt them already now. Defending against active attackers is hard, so having sort of a plan is great. Anyway, countermitm defines “verified groups”, which involves a protocol to be run via Email. I think I’ve mentioned earlier that I still think it’s a bit sad that we don’t have the necessary interfaces to run protocols over Email. Outlook, I think, can do simple stuff like voting for one of many options or retracting an email. I would want my key exchange to be automated further, i.e. when GNOME Keysign sends the encrypted signature, I would want the recipient to decrypt it and send it back.

Phil Zimmermann, the father of PGP, mentioned a few issues he sees with the spec, although he also said that it’s been a while since he was deeply into this matter. He wanted the spec to be more modern and more aggressively pushing for today’s cryptography rather than for the crypto of the past. And in fact, he wants the crypto of tomorrow. Now. He said that we know that big agencies are storing messages today for later analysis. And we currently have no good way of having what people call “perfect forward secrecy”, so a future key compromise makes the messages of today readable. He wants post-quantum crypto to defeat the prying eyes. I wonder whether anybody has implemented pq-schemes for GnuPG, or any other OpenPGP implementation, yet.

My takeaways are: The keyserver network needs a replacement. Currently, it is used for initial key discovery, key updates, and revocations. I think we can solve some of these problems better if we separate them. For example, revocations are pretty much a fire and forget thing whereas other key updates are not necessarily interesting in twenty years from now. Many approaches for making initial key discovery work have been proposed. WKD, Autocrypt, DANE, Keybase, etc. Eventually one of these approaches wins the race. If not, we can still resort back to a (plain) list of Email addresses and their key ids. That’s as good or bad as the current situation. For updates, the situation is maybe not as bad. But we might still want to investigate how to prevent equivocation.

Another big thing was deprecating cruft in the spec to move a bit faster in terms of cryptography and to allow implementers to get a compliant program running (more) quickly. Smaller topics were the use of PQ-safe algorithms and exploitation of backwards-incompatible changes to the spec, i.e. v5 keys with full fingerprints. Interestingly enough, a trimmed-down spec had already been developed here.

Neil McGovern: GNOME ED update – October

Thu, 08/11/2018 - 1:22am

I’m currently writing this from sunny (but very cold!) Colorado Springs, and tomorrow I’m off to SeaGL. We’ll have a booth there, so come and say hi to me and Rosanna if you’re in Seattle! More on that in the next report, however :)

General

As per usual, our main focus has been on the hiring of new staff members for the Foundation. We’ve completed a few second interviews and a couple of first interviews. We’re aiming to start making offers around the end of November. If you have put in an application, and haven’t heard back in a while, please don’t worry! It’s simply due to a large number of people who’ve applied and the very manual way we’ve had to process these. Everyone should hear back.

We’ve also had some interesting times with our banking. The short version is, we’ve moved banks to another provider. This has taken quite a bit of work, but hopefully, this should be settling down now.

As mentioned in issue #43, we have an employee handbook. However, it’s not public and hasn’t been updated. We’ve now managed to find a service that will do some of this for us, so we don’t need to create a whole load of text.

Finally, some minor items:

  • Three trademark agreements were granted/modified.
  • One GDPR request is being considered (removal of email from list archives).
  • The GNOME namespace on handshake.org has been requested (which involved a number of calls with our trademark lawyer).
  • A Dun and Bradstreet number (D-U-N-S) has been requested so we can then request a free Apple code signing certificate.
  • The EU events box should now have a laptop.

Conferences

SustainOSS Summit

This event was quite interesting and was held in London at the end of October. An estimated 150 people attended, all there to talk about the sustainability of open source software and how it can be improved (sustainability here should be read in all forms: financial, newcomer experience, maintainer burnout, etc.).

Freenode#Live

Held in Bristol at the start of November, Freenode#Live was once again an impressive event, and I presented my “Why Free Software on the desktop matters” talk. The most useful aspect of the event was meeting up with key people in the FOSS community, and with Advisory Board members.

Finally, as we were leaving the venue, one of the venue staff members came up to say hello. He’s a professional graphic illustrator/designer and, although he had never heard about free software before, was so impressed by the conference and our passion that he’s volunteering to help with design work.

Michael Catanzaro: Mesa Update Breaks WebKitGTK+ in Fedora 29

Wed, 07/11/2018 - 3:37am

If you’re using Fedora and discovered that WebKitGTK+ is displaying blank pages, the cause is a bad mesa update, mesa-18.2.3-1.fc29. This in turn was caused by a GCC bug that resulted in miscompilation of mesa.

To avoid this bug, downgrade to mesa-18.2.2-1.fc29:

$ sudo dnf downgrade mesa*

You can also update to mesa-18.2.4-2.fc29, but this build has not yet reached updates-testing, let alone stable, so downgrading is easier for now. Another workaround is to run your application with accelerated compositing mode disabled, to avoid OpenGL usage:

$ WEBKIT_DISABLE_COMPOSITING_MODE=1 epiphany

On the bright side of things, from all the bug reports I’ve received over the past two days I’ve discovered that lots of people use Epiphany and notice when it’s broken. That’s nice!

Huge thanks to Dave Airlie for quickly preparing the fixed mesa update, and to Jakub Jelínek for handling the same for GCC.

Xavier Claessens: Speed up your GitLab CI

Tue, 06/11/2018 - 10:21pm

GNOME GitLab has AWS runners, but they are used only when pushing code into a GNOME upstream repository, not when you push into your personal fork. For personal forks there is only one (AFAIK) shared runner, and you could be waiting for hours before it picks up your job.

But did you know you can register your own PC, or a spare laptop collecting dust in a drawer, to get instant continuous integration (CI) going? It’s really easy to set up!

1. Install docker

apt install docker.io

2. Install gitlab-runner

Follow the instructions here:
https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/install/linux-repository.md#installing-the-runner

(Note: The Ubuntu 18.04 package doesn’t seem to work.)

3. Install & start the GitLab runner service

sudo gitlab-runner install
sudo gitlab-runner start

4. Find the registration token

Go to your gitlab project page, settings -> CI/CD -> expand “runners”

5. Register your runner

sudo gitlab-runner register --non-interactive --url https://gitlab.gnome.org --executor docker --docker-image fedora:27 --registration-token **

You can repeat step 5 with the registration token of all your personal forks in the same GitLab instance. To make this easier, here’s a snippet I wrote in my ~/.bashrc to register my “builder.local” machine on a new project. Use it as gitlab-register [gnome|fdo|collabora] <token>, or just gitlab-register <token> for gitlab.gnome.org:

function gitlab-register {
  host=$1
  token=$2

  case "$host" in
    gnome)
      host=https://gitlab.gnome.org
      ;;
    fdo)
      host=https://gitlab.freedesktop.org
      ;;
    collabora)
      host=https://gitlab.collabora.com
      ;;
    *)
      host=https://gitlab.gnome.org
      token=$1
  esac

  cmd="sudo gitlab-runner register --non-interactive --url $host --executor docker --docker-image fedora:27 --registration-token $token"

  #$cmd
  ssh builder.local -t "$cmd"
}

Not only will you now get faster CI, but you’ll also reduce the queue on the shared runner for others!

Allan Day: Birds in flight

Tue, 06/11/2018 - 7:37pm

If you follow Planet GNOME, you’ll know about Jim Hall’s fantastic usability testing work. For years Jim has spearheaded usability testing on GNOME, both by running tests himself and mentoring usability testing internships offered through Outreachy.

This Autumn, Jim will once again be mentoring usability testing internships. However, this time round, we’re planning on running the internships a bit differently.

In previous rounds of usability testing, the tests have typically been performed on released software: that is, apps and features that are already in the hands of users. This is great and has flagged up issues that we’ve gone on to fix, but it has some drawbacks.

Most obviously, it means that users are exposed to the software before usability testing takes place. However, it also means that test findings can take a long time to be acted on: active development of the software in question might have been paused by the time the tests are conducted, and it can sometimes take a while until a developer is able to correct any usability issues that have been identified.

Therefore, for this round of the Outreachy internships, we are only going to test UX changes that are actively being worked on. Instead of testing finished features, the tests will be on two things:

  1. Mockups or prototypes of changes that we hope to implement soon (this can include static mockups and paper or software prototypes)
  2. Features or UI changes that are being actively worked on, but haven’t been released to users yet

One goal is to increase the number of cycles of data-driven iteration that our UX work goes through. Ideally there should be multiple rounds of testing and design changes prior to coding even taking place! This will reduce the amount of UI changes that have to be made, and in turn reduce the amount of work for our developers.

Organising the tests in this way, we’re drawing on ideas from agile and lean. The plan is to have a predefined schedule of tests. When test day rolls around, we’ll figure out what we want to test. This will force a routine to our practice and ensure that we keep the exercise light and iterative.

There’s lots of in-progress UX work in GNOME right now, all of which would benefit from testing. This includes the new menu arrangements that are replacing app menus, new sound settings, new design patterns for lists, new application permission settings, the new lock screen design, and more.

One thing I’d actually love to see is design initiatives rejected outright because of testing feedback.

The region and language settings are being updated right now. We can test this!

This approach to testing is an experiment and we’ll have to see how well it works in practice. However, if it does go well, I’m hopeful that we can incorporate it into our design and development practice more generally.

Jim and the rest of the design team will be looking for help from the rest of the GNOME community as we approach the test days. If anyone wants to help make prototypes or make sure that development branches can be easily run by our interns, your help would be extremely welcome. Likewise, we’d love to hear from anyone who has development work that they would like to have tested.

Philip Chimento: Taking Out the Garbage

Tue, 06/11/2018 - 3:07am

From the title, you might think this post is about household chores. Instead, I’m happy to announce that we may have a path to solving GJS’s “Tardy Sweep Problem”.

For more information about the problem, read The Infamous GNOME Shell Memory Leak by Georges Stavracas. This is going to be a more technical post than my previous post on the topic, which was more about the social effects of writing blog posts about memory leaks. So first I’ll recap what the problem is.

Garbage, garbage, everywhere

At the root of the GNOME desktop is an object-oriented technology called GObject. GObjects are reference counted, but not garbage collected. As long as their reference count is nonzero, they are “alive”, and when their reference count drops to zero, they are deleted from memory.

GObject reference counting

Graphical user interfaces (such as a large part of GNOME Shell) typically involve lots of GObjects which all increase each other’s reference count. A diagram for a simple GUI window made with GTK might look like this:

A typical GUI would involve many more objects than this, but this is just for illustrating the problem.

Here, each box is an object in the C program.

Note that these references are all non-directional, meaning that they aren’t really implemented as arrows. In reality it looks more like a list of numbers: Window (1), Box (1), etc. Each object “knows” that it has one reference, but it knows nothing about which other objects own those references. This will become important later.

When the app closes, it drops its reference to the window. The window’s reference count becomes zero, so it is erased. As part of that, it drops all the references it owns, so the reference count of the upper box becomes zero as well, and so on down the tree. Everything is erased and all the memory is reclaimed. This all happens immediately. So far, so good.

Javascript objects

To write the same GUI in a Javascript program, we want each GObject in the underlying C code to have a corresponding Javascript object so that we can interact with the GUI from our Javascript code.

Javascript objects are garbage collected, and the garbage collector in the SpiderMonkey JS engine is a “tracing” garbage collector, meaning that on every garbage collection pass it starts out with objects in a “root set” that it knows are not garbage. It “traces” each of those objects, asking it which other objects it refers to, and keeps tracing each new object until it hits a dead end. Any objects that weren’t traced are considered garbage, and are deleted. (For more information, the Wikipedia article is informative: https://en.wikipedia.org/wiki/Tracing_garbage_collection)
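In outline, a toy tracing collector (nothing like SpiderMonkey’s real implementation, just the shape of the algorithm) looks like this:

#include <stdlib.h>

typedef struct Object Object;
struct Object {
  int marked;
  Object *refs[4];   /* outgoing references; unused slots are NULL */
};

/* mark phase: trace everything reachable from a root */
static void
mark (Object *o)
{
  if (o == NULL || o->marked)
    return;
  o->marked = 1;
  for (int i = 0; i < 4; i++)
    mark (o->refs[i]);
}

/* sweep phase: whatever is still unmarked is garbage */
static void
sweep (Object **heap, int n)
{
  for (int i = 0; i < n; i++)
    {
      if (heap[i] != NULL && !heap[i]->marked)
        {
          free (heap[i]);
          heap[i] = NULL;
        }
    }
}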

We need to integrate the JS objects and their garbage collection scheme with the GObjects and their reference counting scheme. That looks like this:

The associations between the Javascript objects (“JS”) and the GObjects are bidirectional. That means, the JS object owns a reference to the GObject, meaning the reference count of every GObject in this diagram is 2. The GObject also “roots” the JS object (marks it as unable to be garbage collected) because the JS object may have some state set on it (for example, by writing button._alreadyClicked = false; in JS) that should not be lost while the object is still alive.

The JS objects can also refer to each other. For example, see the rightmost arrow from the window’s JS object to the button’s JS object. The JS code that created this GUI probably contained something like win._button = button;. These references are directional, because the JS engine needs to know which objects refer to which other objects, in order to implement the garbage collector.

Speaking of the garbage collector! The JS objects, unlike the GObjects, are cleaned up by garbage collection. So as long as a JS object is not “rooted” and no other JS object refers to it, the garbage collector will clean it up. None of the JS objects in the above graph can be garbage collected, because they are all rooted by the GObjects.

Toggle references and tardy sweeps

Two objects (GObject and JS object) keeping each other alive sounds like a reference cycle, you might think. That’s right; as I described it above, neither object could ever get deleted, so that’s a memory leak right there. We prevent this with a feature called toggle references: when a GObject’s reference count drops to 1, we assume that the owner of the one remaining reference is the JS object, and so the GObject drops its reference to the JS object (“toggles down”). The JS object is then eligible for garbage collection if no other JS object refers to it.

(If this doesn’t make much sense, don’t worry. Toggle references are among the most difficult to comprehend code in the GJS codebase. It took me about two years after I became the maintainer of GJS to fully understand them. I hope that writing about them will demystify them for others a bit.)
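For the curious, the hook is a real GObject API, g_object_add_toggle_ref(). A simplified sketch (not GJS’s actual code) of how it gets used:

/* called by GObject whenever the refcount crosses between 1 and 2 */
static void
toggle_notify (gpointer data, GObject *gobject, gboolean is_last_ref)
{
  if (is_last_ref)
    {
      /* only our reference is left: unroot the JS wrapper
       * ("toggle down") so the garbage collector may take it */
    }
  else
    {
      /* someone else holds a reference again: root the JS wrapper
       * ("toggle up") so its state is not lost */
    }
}

/* exchange a plain reference for a toggle reference */
g_object_add_toggle_ref (gobject, toggle_notify, NULL);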

When we close the window of this GUI, here is approximately what happens. The app drops its references to the GObjects and JS objects that comprise the window. The window’s reference count drops to 1, so it toggles down, dropping one direction of the association between GObject and JS object.

Unlike the GObject-only case where everything was destroyed immediately, that’s all that can happen for now! Everything remains in place until the next garbage collection, because at the top of the object tree is the window’s JS object. It is eligible to be collected because it’s not rooted and no other JS object refers to it.

Normally the JS garbage collector can collect a whole tree of objects at once. That’s why the JS engine needs to have all the information about the directionality of the references.

However, it won’t do that for this tree. The JS garbage collector doesn’t know about the GObjects. So unfortunately, it takes several passes of the garbage collector to get everything. After one garbage collection only the window is gone, and the situation looks like this:

Now, the outermost box’s JS object has nothing referring to it, so it will be collected on the next pass of the garbage collector:

And then it takes one more pass for the last objects to be collected:

The objects were not leaked, as such, but it took four garbage collection passes to get all of them. The problem we previously had, that Georges blogged about, was that the garbage collector didn’t realize that this was happening. In normal use of a Javascript engine, there are no GObjects that behave differently, so trees of objects don’t deconstruct layer by layer like this. So, there might be hours or days in between garbage collector passes, making it seem like that memory was leaked. (And often, other trees would build up in the intervening time between passes.)

Avoiding toggle references

To mitigate the problem Georges implemented two optimizations. First, the “avoid toggle references” patch, which was actually written by Giovanni Campagna several years ago but never finished, made it so that objects don’t start out using the toggle reference system. Instead, only the JS objects hold references to the GObjects. The JS object can get garbage collected whenever nothing else refers to it, and it will drop its reference to the GObject.

A problem occurs when that wasn’t the last reference to the GObject (i.e. it’s being kept alive by some C code somewhere) and the GObject resurfaces in JS, for example by being returned by a C function. In this case we recreate the JS object, assuming that it will be identical to the one that was already garbage collected. The only case where that assumption doesn’t hold is when the JS code has set some state on one of the JS objects. For example, you execute something like myButton._tag = 'foo';. If myButton gets deleted and recreated, it won’t have a _tag property. So in the case where any custom state is set on a JS object, we switch it over to the toggle reference system once again.
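A toy illustration in plain JavaScript of why that assumption fails (the cache and all names here are invented; this is not how GJS actually stores its wrappers):

const cache = new Map();  // stand-in for the GObject-to-JS-wrapper table

function getWrapper(gobjectId) {
    // Recreate the wrapper on demand, as if the old one had been collected.
    if (!cache.has(gobjectId))
        cache.set(gobjectId, { id: gobjectId });
    return cache.get(gobjectId);
}

let myButton = getWrapper('button1');
myButton._tag = 'foo';        // custom state on the wrapper
cache.delete('button1');      // simulate the wrapper being garbage collected
myButton = getWrapper('button1');
console.log(myButton._tag);   // undefined: the custom state was lost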

In theory this should help, because toggle references cause the tardy sweep problem, so if fewer objects use toggle references, there should be fewer objects collected tardily. However, this didn’t solve the problem, because especially in GNOME Shell, most JS objects have some state on them. And, sadly, it made the toggle reference code even more complicated than it already was.

The Big Hammer

The second optimization Georges implemented was the affectionately nicknamed “Big Hammer”. It checks whether any GObjects toggled down during a garbage collector pass, and if so, restarts the garbage collector a few seconds afterwards. This made CPU performance worse, but at least ensured that all unused objects were deleted from memory within a reasonable time frame (under a minute, rather than a day).
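In sketch form, the idea is something like this (plain JavaScript with invented names; the real code is C++ inside GJS and the exact delay differs):

let toggledDownDuringGC = false;  // set by the toggle-down handler

function runGC() {
    // Stand-in for asking the JS engine for another collection pass.
    console.log('extra garbage collection pass');
}

function onGCEnd() {
    if (toggledDownDuringGC) {
        toggledDownDuringGC = false;
        // Re-run the collector shortly afterwards to pick up the next
        // layer of the tree (the delay here is arbitrary).
        setTimeout(runGC, 10 * 1000);
    }
}

toggledDownDuringGC = true;  // pretend a GObject toggled down during GC
onGCEnd();                   // queues another pass in a few seconds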

Combined with some other memory optimizations, this made GNOME 3.30 quite a lot less memory hungry than its predecessors.

An afternoon at Mozilla

Earlier this year, I had been talking on IRC to Ted Campbell and Steve Fink on the SpiderMonkey team at Mozilla for a while, about various ins and outs of being an external (i.e. not Firefox) user of SpiderMonkey’s JS engine API. In early September I found myself in Toronto, where Ted Campbell is based, and I paid a visit to the Mozilla office one afternoon.

I had lunch with Ted and Kannan Vijayan of the SpiderMonkey team where we discussed the current status of external SpiderMonkey API users. Afterwards, we made the plans which eventually became this GitHub repository of examples and best practices for using the SpiderMonkey JS engine outside of Firefox. We have both documentation and code examples there, and more on the way. This is still in progress, but it should be the beginning of a good resource for embedding the JS engine, and the end of all those out-of-date pages on MDN!

I also learned some good practices that I can put to use in GJS. For example, we should avoid using JS::PersistentRooted except as a last resort, because it roots objects by putting them in a giant linked list, which is then traced during garbage collection. It’s often possible to store the objects more efficiently than that, and trace them from some other object, or the context.

Ending the tardy sweeps

In the second half of the afternoon we talked about some of the problems that I had with SpiderMonkey that were specific to GJS. Of course, the tardy sweep problem was one of them.

For advice on that, Ted introduced me to Nika Layzell, an engineer on the Gecko team. We looked at the XPCOM cycle collector and I was surprised to learn that Gecko uses a scheme similar to toggle references for some things. However, rather than GJS sticking with toggle references, she suggested a solution that had the advantage of being much simpler.

In “Avoiding toggle references” above, I mentioned that the only thing standing in the way of removing toggle references is custom state on the JS objects. If there is custom state, the objects can’t be destroyed and recreated as needed. In Gecko, custom state properties on DOM objects are called “expandos” or “expando properties”, and they are troublesome in much the same way as they are for GJS’s toggle references.

Nika’s solution is to separate the JS object from the expandos, putting the expandos on a separate JS object which has a different lifetime from the JS object that represents the GObject in the JS code. We can then make the outer JS objects into JS Proxies so that when you get or set an expando property on the JS object, it delegates transparently to the expando object.

Kind of like this:

In the “before” diagram, there is a reference cycle which we have to solve with toggle references, and in the “after” diagram, there is no reference cycle, so everything can simply be taken care of by the garbage collector.

In cases where an object doesn’t have any expando properties set on it, we don’t even need to have an expando object at all. It can be created on demand, just like the JS object. It’s also important to note that the expando objects can never be accessed directly from JS code; the GObject is the sole conduit by which they can be accessed.
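A rough sketch of the idea in plain JavaScript (the real implementation is C++ inside GJS; makeGObjectWrapper and all details here are invented for illustration):

function makeGObjectWrapper(gobjectProps) {
    let expando = null;  // no expando object until a property is set

    return new Proxy(gobjectProps, {
        get(target, prop) {
            if (prop in target)
                return target[prop];        // the GObject side wins first
            return expando ? expando[prop] : undefined;
        },
        set(target, prop, value) {
            if (!expando)
                expando = {};               // created on demand
            expando[prop] = value;          // custom state lives here
            return true;
        },
    });
}

const button = makeGObjectWrapper({ label: 'Click me' });
button._alreadyClicked = false;        // stored on the hidden expando object
console.log(button.label);             // 'Click me', from the GObject side
console.log(button._alreadyClicked);   // false, from the expando object

Note that in this sketch the expando object lives in a closure, so JS code can never reach it except through the proxy, matching the property that the GObject is its sole conduit.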

Recasting our GUI from the beginning of the post with a tree of GUI elements where the top-level window has an expando property pointing to the bottom-level button, and where the window was just closed, gives us this:

Most of these GObjects don’t even need to have expando objects, or JS objects!

At first glance this might seem to be garbage-collectable all at once, but we have to remember that GObjects aren’t integrated with the garbage collector, because they can’t be traced, they can only have their reference counts decremented. And the JS engine doesn’t allow you to make new garbage in the middle of a garbage collector sweep. So a naive implementation would have to collect this in two passes, leaving the window’s expando object and the button for the second pass:

This would require an extra garbage collector pass for every expando property that referred to another GObject via its JS object. Still a lot better than the previous situation, but it would be nice if we could collect the whole thing at once.

We can’t walk the whole tree of GObjects in the garbage collector’s marking phase; remember, GObject references are nondirectional, so there’s no generic way to ask a GObject which other GObjects it references. What we can do is partially integrate with the marking phase so that when a GObject has only one reference left, we make it so that the JS object traces the expando object directly, instead of the GObject rooting the expando object. Think of it as a “toggle reference lite”. This would solve the above case, but there are still some more corner cases that would require more than one garbage collection pass. I’m still thinking about how best to solve this.

What’s next

Altogether, this should make the horrible toggle reference code in GJS a lot simpler, and improve performance as well.

I started writing the code for this last weekend. If you would like to help, please get in touch with me. You can help by writing code, figuring out the corner cases, or testing the code by running GNOME Shell with the branch of GJS where this is being implemented. Follow along at issue #217.

Additionally, since I am in Toronto again, I’ll be visiting the Mozilla office again this week, and hopefully more good things will come out of that!

Acknowledgements

Thanks to Ted Campbell, Nika Layzell, and Kannan Vijayan of Mozilla for making me feel welcome at Mozilla Toronto, and taking some time out of their workday to talk to me; and thanks to my employer Endless for letting me take some time out of my workday to go there.

Thank you to Ted Campbell and Georges Stavracas for reading and commenting on a draft version of this post.

The diagrams in this post were made with svgbob, a nifty tool; hat tip to Federico Mena Quintero.

Daniel García Moreno: GNOME Translation Editor 3.30.0

Sat, 03/11/2018 - 12:53pm

I'm pleased to announce the new GNOME Translation Editor release. This is the new release of the well-known Gtranslator. I talked about the Gtranslator Resurrection some time ago and this is the result:

This new release isn't yet on Flathub, but I'm working on it, so we'll have a Flatpak version really soon. In the meantime you can test it using the GNOME nightly Flatpak repo.

New release 3.30.0

This release doesn't add new functionality. The main changes are in the code and in the interface.

We've removed the toolbar and moved the main useful buttons to the headerbar. We've also removed the statusbar and replaced it with a new widget that shows the document translation progress.

The plugin system and the dockable window system have been removed, to simplify the code and make the project more maintainable. The only plugin that is maintained for now is the translation memory, which is now integrated. I'm planning to migrate other useful plugins, but that's for the future.

Other minor changes we've made are in the message table: we've removed some columns and now show only two, the original message and the translated one, and we use colors and text styles to indicate fuzzy and untranslated status.

The main work is a full code modernization: we now use Meson to build and we have Flatpak integration, which simplifies development because gtranslator now works by default in GNOME Builder without the need to install development dependencies.

There are other minor changes, like the new look when you open the app without any file:

Or the new language selector that autofills all the profile fields using the language:

And of course we've tried to fix the most important bugs:

New name and new icon

Following the modern GNOME app naming, we've renamed the app from Gtranslator to GNOME Translation Editor. Internally we'll continue with the gtranslator name, so the app is gtranslator, but for the end user the name will be Translation Editor.

And following the App Icon Redesign Initiative, we have a new icon that follows the new HIG.

Thanks

I'm not doing this alone. I became the gtranslator maintainer because of Daniel Mustieles' push to have a modern tool for GNOME translators, built with GNOME technology and fully functional.

The GNOME Translation Editor is a project made by the GNOME community; there are other people helping with code, documentation, testing, design ideas and much more, and any help is always welcome. If you're interested, don't hesitate: come to the GNOME GitLab and collaborate on this great project.

And maybe it's a bit late, but I've published a project on outreachy.org, so maybe someone can work on this as an intern for three months. I'll try to get more people involved here through Outreachy and maybe GSoC, so if you're a student, now is the right time to start contributing, to be able to be selected for next year's internship programs.

Ismael Olea: Running EPF Composer in Fedora Linux, v3

Fri, 02/11/2018 - 11:30pm

Well, I finally succeeded with a native installation of EPF (Eclipse Process Framework) Composer on my Linux system, thanks to the help of Bruce MacIsaac and the development team. I'm happy. This is not trivial, since EPF Composer is a 32-bit application running on a modern 64-bit Linux system.

My working configuration:

On my system I can obviously install all RPM packages using DNF. For different distros, look for the equivalent packages.

I may be missing some minor dependency; I didn't check with a clean installation.

Download EPFC and xulrunner and extract each one to the path of your choice. I'm using xulrunner-10.0.2.en-US.linux-i686/ as the directory name to be more meaningful.

The contents of the epf.ini file:

-data @user.home/EPF/workspace.152
-vmargs
-Xms64m
-Xmx512m
-Dorg.eclipse.swt.browser.XULRunnerPath=/PATHTOXULRUNNER/xulrunner-10.0.2.en-US.linux-i686/

I had to write the full system path for the -Dorg.eclipse.swt.browser.XULRunnerPath property to get Eclipse to recognize it.

And to run EPF Composer:

cd $EPF_APP_DIR
./epf -vm /usr/lib/jvm/java-1.8.0-oracle-1.8.0.181/jre/bin/java

If you want to do any non-trivial work with Composer on Linux you'll need xulrunner, since it's used extensively for editing content.

I had success running the Windows EPF version using Wine, and I can do some work with it, but at some point the program gets unstable and needs to be restarted. Another very interesting advantage of running natively is that I can use the GTK+ file chooser, which is really a lot better than the simpler native Java one.

I plan to practice a lot of modeling with EPF Composer in the coming weeks. Hopefully I'll share some new artifacts authored by me.

Carlos Soriano: GNOME Internships interns have been selected

Thu, 01/11/2018 - 5:37pm

Hello all,

GNOME Internships projects and interns have been selected!

We had strong applicants and quite a large number of applications. If you were not selected, don’t be discouraged; it wasn’t an easy choice.

This round we have to congratulate Ludovico de Nittis, who will work on the “USB Protection” project with his mentor Tobias Mueller. Congrats Ludovico!

The project goal is to increase the robustness against attacks via malicious USB devices. Certainly a challenging goal! You can read extensive information on the project wiki linked above; it’s definitely quite interesting.

This round of internships is funded by the privacy and security campaign we ran at GNOME a few years ago, and we are happy we can put those funds to good use to improve the security and privacy of GNOME users and donors.

Deep thanks to all the donors! Without you this initiative wouldn’t have been possible. Also thanks to the mentors and applicants.

On the admin side, it has been a little bumpy, since it’s the first time doing these internships. And well, this last week I have been a bit worried by other stuff you probably already know about :)

If you have any questions or feedback, don’t hesitate to contact me on IRC as csoriano or send an email to internships-admin@gnome.org

Arun Raghavan: Update from the PipeWire hackfest

Wed, 31/10/2018 - 5:06pm

As the third and final day of the PipeWire hackfest draws to a close, I thought I’d summarise some of my thoughts on the goings-on and the future.

Thanks

Before I get into the details, I want to send out a big thank you to:

  • Christian Schaller for all the hard work of organising the event and Wim Taymans for the work on PipeWire so far (and in the future)
  • The GNOME Foundation, for sponsoring the event as a whole
  • Qualcomm, who are funding my presence at the event
  • Collabora, for sponsoring dinner on Monday
  • Everybody who attended and participated, for their time and thoughtful comments
Background

For those of you who are not familiar with it, PipeWire (previously Pinos, previously PulseVideo) was Wim’s effort at providing secure, multi-program access to video devices (like webcams, or the desktop for screen capture). As he went down that rabbit hole, he wrote SPA, a lightweight general-purpose framework for representing a streaming graph, and this led to the idea of expanding the project to include support for low latency audio.

The Linux userspace audio story has, for the longest time, consisted of two top-level components: PulseAudio which handles consumer audio (power efficiency, wide range of arbitrary hardware), and JACK which deals with pro audio (low latency, high performance). Consolidating this into a good out-of-the-box experience for all use-cases has been a long-standing goal for myself and others in the community that I have spoken to.

An Opportunity

From a PulseAudio perspective, it has been hard to achieve the 1-to-few millisecond latency numbers that would be absolutely necessary for professional audio use-cases. A lot of work has gone into improving this situation, most recently with David Henningsson’s shared-ringbuffer channels that made client/server communication more efficient.

At the same time, application sandboxing frameworks such as Flatpak have added security requirements that were not accounted for when PulseAudio was written. Examples include choosing which devices an application has access to (or can even know of), or which applications can act as control entities (set routing etc., enable/disable devices). Some work has gone into this — Ahmed Darwish did some key work to get memfd support in PulseAudio, and Wim has prototyped an access-control mechanism module to enable a Flatpak portal for sound.

All this said, there are still fundamental limitations in architectural decisions in PulseAudio that would require significant plumbing to address. With Wim’s work on PipeWire and his extensive background with GStreamer and PulseAudio itself, I think we have an opportunity to revisit some of those decisions with the benefit of a decade’s worth of learning deploying PulseAudio in various domains starting from desktops/laptops to phones, cars, robots, home audio, telephony systems and a lot more.

Key Ideas

There are some core ideas of PipeWire that I am quite excited about.

The first of these is the graph. Like JACK, the entities that participate in the data flow are represented by PipeWire as nodes in a graph, and routing between nodes is very flexible — you can route applications to playback devices and capture devices to applications, but you can also route applications to other applications, and this is notionally the same thing.

The second idea is a bit more radical — PipeWire itself only “runs” the graph. The actual connections between nodes are created and managed by a “session manager”. This allows us to completely separate the data flow from policy, which means we could write completely separate policy for desktop use cases vs. specific embedded use cases. I’m particularly excited to see this be scriptable in a higher-level language, which is something Bastien has already started work on!
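As a purely conceptual sketch in plain JavaScript (this is not PipeWire’s actual API; all names are invented), the split between running the graph and deciding its shape might look like this:

const links = [];  // the graph's connections, owned by the policy side

// The "PipeWire" side only runs the graph: move data across the links.
function runGraph() {
    for (const { from, to } of links)
        to.input(from.output());
}

// The "session manager" side makes the routing decisions.
function sessionManagerConnect(from, to) {
    links.push({ from, to });
}

const mic = { output: () => 'an audio frame' };
const app = { input: (data) => console.log('app received:', data) };

sessionManagerConnect(mic, app);  // policy: route the mic to the app
runGraph();                       // mechanism: data flows over the link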

A powerful idea in PulseAudio was rewinding — the ability to send out huge buffers to the device, but the flexibility to rewind that data when things changed (a new stream got added, or the stream moved, or the volume changed). While this is great for power saving, it is a significant amount of complexity in the code. In addition, with some filters in the data path, rewinding can break the algorithm by introducing non-linearity. PipeWire doesn’t support rewinds, and we will need to find a good way to manage latencies to account for low power use cases. One example is that we could have the session manager bump up the device latency when we know latency doesn’t matter (Android does this when the screen is off).
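As a toy sketch in plain JavaScript (purely illustrative, not PulseAudio code): rewinding means rewriting the queued samples the device has not yet consumed, for example after a volume change.

let queued = [0.5, 0.5, 0.5, 0.5];  // samples sent ahead at volume 0.5
let consumed = 1;                    // the device has already played this many

function setVolume(volume) {
    // "Rewind": recompute only the unplayed tail of the buffer.
    for (let i = consumed; i < queued.length; i++)
        queued[i] = volume;
}

setVolume(1.0);
console.log(queued);  // [0.5, 1, 1, 1]: the played sample is untouched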

There are a bunch of other things that are in the process of being fleshed out, like being able to represent the hardware as a graph as well, to have a clearer idea of what is going on within a node. More updates as these things are more concrete.

The Way Forward

There is a good summary by Christian of our discussion about what is missing and how we can go about trying to make a smooth transition for PulseAudio users. There is, of course, a lot to do, and my ideal outcome is that we one day flip a switch and nobody knows that we have done so.

In practice, we’ll need to figure out how to make this transition seamless for most people, while folks with custom setups will need to be given a long runway and clear documentation to know what to do. It’s way too early to talk about this in more specifics, however.

Configuration

One key thing that PulseAudio does right (I know there are people who disagree!) is having a custom configuration that automagically works on a lot of Intel HDA-based systems. We’ve been wondering how to deal with this in PipeWire, and the path we think makes sense is to transition to ALSA UCM configuration. This is not as flexible as we need it to be, but I’d like to extend it for that purpose if possible. This would ideally also help consolidate the various methods of configuration being used by the various Linux userspaces.

To that end, I’ve started trying to get a UCM setup on my desktop that PulseAudio can use, and be functionally equivalent to what we do with our existing configuration. There are missing bits and bobs, and I’m currently focusing on the ones related to hardware volume control. I’ll write about this in the future as the effort expands out to other hardware.

Onwards and upwards

The transition to PipeWire is unlikely to be quick, completely painless, or free of contention. For those who are worried about the future, know that any switch is still a long way away. In the meantime, however, constructive feedback and comments are welcome.

Bastien Nocera: Pipewire Hackfest 2018

Wed, 31/10/2018 - 12:44pm
Good morning from Edinburgh, where the breakfast contains haggis, and the charity shops have some interesting finds.

My main goal in attending this hackfest was to discuss Pipewire integration in the desktop, and how it will eventually replace PulseAudio as the audio daemon.

The main problems GNOME has had over the years with PulseAudio relate mostly to how PulseAudio is a black box when it comes to its routing policy. What happens when you plug an HDMI cable into your laptop? Or turn on your Bluetooth headset? I've heard the stories of folks with highly mobile workstations having to constantly visit the Sound settings panel.

PulseAudio has policy scattered in a number of places (do a "git grep routing" inside the sources to see that): some of it is in the device manager, and modules themselves can set priorities for their outputs and inputs. But there's nothing that takes all the information in and makes a decision based on the hardware that's plugged in and the applications currently in use.

For Pipewire, the policy decisions would be split off from the main daemon. Pipewire, as it gains PulseAudio compatibility layers, will grow a default/example policy engine that will try to replicate PulseAudio's behaviour. At the very least, that will mean that Pipewire won't regress compared to PulseAudio, and might even be able to take better decisions in the short term.

For GNOME, we still wanted to take control of that part of the experience, and make our own policy decisions. It's very possible that this engine will end up being featureful and generic enough that it will be used by more than just GNOME, or even become the default Pipewire one, but it's far too early to make that particular decision.

In the meanwhile, we wanted the GNOME policies to not be written in C, which is difficult for power users to experiment with, especially for edge use cases. We could have started writing a configuration language, but it would have been too specific, and there are plenty of embeddable languages around. It was also a good opportunity for me to finally write the helper library I've been meaning to write for years, based on my favourite embedded language, Lua.

So I'm introducing Anatole. The goal of the project is to make it trivial to write chunks of programs in Lua, while the core of your project is written in C (we might even be able to embed it in Python or Javascript, once introspection support is added).

It's still in the very early days, and unusable for anything as of yet, but progress should be pretty swift. The code is mostly based on Victor Toso's incredible "Lua factory" plugin in Grilo. (I'm hoping that, once finished, I won't have to remember on which end of the stack I need to push stuff for Lua to do something with it ;)

Olav Vitters: First responder: fire alarm

Tue, 30/10/2018 - 5:35pm

In the Netherlands, each company is required by law to have first responders. These handle various situations until the professionals arrive; it's usually one of (possible) fire, medical, or an evacuation. Normally I'd post this on Google+, but as that's going away I'll put it on this blog. I prefer writing it down so that later on I can still see the details.

While having lunch I noticed a notification about the fire department going to my (huge) office building. As this was during lunch time and there might be far fewer first responders available, I headed back. The message said “Handmelder”, which is Dutch for those manually operated red boxes. As these are manual, the fire department assumes someone has verified that there's a fire.

Screenshot of a P2000 app

According to Google Maps the fire department is a 7-minute drive away, so a maximum of 5 minutes for them. I saw them using a road which was partly closed for construction; a bit strange, as they had been informed and there are signs. Anyway, I went inside. At this point I don't know much. You hear people asking what is going on; I can guess, but better not to assume too much. I do see that one of the fire doors has closed itself. I notice the lack of an evacuation alarm, though the pager indicated it's either the cellar, a parking level or the ground floor.

To help out I need a bright vest and a walkie-talkie. The bright vest is very important: a) so the fire department can recognize whom to approach, and b) to make people listen to me. I have a walkie-talkie at my desk, but that's useless: the elevators won't work, I have no idea what is going on, and it'll take forever using the stairs. To start, I take a bright jacket and ask security for a spare walkie-talkie, which I get. It takes a bit until I notice the walkie-talkie's battery is dead. I want to find another, but there are two unneeded people in the security room standing in the way, so the quickest option is to ask again. Normally security is hell during these things, so I really don't want to distract them. Fortunately there's another working walkie-talkie.

I announce myself and ask for instructions. I'm told to check parking level 4, east side. Normally I easily know east from west, but at the moment not so much; I'm thinking more about how to approach safely but quickly. Last week security mentioned that the new fire detection cables have east and west mixed up. It seems easier and safer to check the entire parking level.

On parking level 4 I initially see nothing strange, plus I'm the only one there. This is strange, as I arrived late to the incident and missed all of the previous conversations. It's a waste of time to ask about this, so I skip it. I don't see any fire at all, though there is a hell of a noise. I first check whether it's one of the cars (super easy). Nothing. There are also various doors for building-related things. Normally I'd have keys for those, but alas, not now. I relay that my first impression is that there's no fire. I'm told to check everything, per the fire department's request. I don't get why they don't come up, but it's pointless to wonder. It takes me a few minutes to check the various doors. I check for fire indicator lights (fire behind a door) and do a door check (heat, smoke). Nothing to be seen. Meanwhile a car enters the parking level. That should not be possible and usually cannot be done (the parking gates close). Something to tell security. I communicate that nothing was found except a really loud running air conditioner.

As of a month ago, the building has a fire detection cable on all parking levels. It is very sensitive to temperature changes, and it's installed above the parking spaces close to two air conditioning outlets. They thought ahead about the potential problem and said they addressed it (made it less sensitive). My guess is that it's still too sensitive.

One incident leads to loads of questions. Some are answered during the incident, some just after, and some take a while. There were plenty of lessons in this one.