Planet GNOME
http://planet.gnome.org/

Timm Bäder: GTK+ Hackfest 2018

Mon, 05/02/2018 - 1:25 PM

As some of you may know, we had a GTK+ Hackfest on February 1st and 2nd in Brussels, Belgium. Matthias has already blogged and blogged again about the two days, and detailed notes about all the things we discussed can be found here and here. He also has some nice pictures.


From everything we discussed I'm mostly looking forward to migrating to GitLab so I can file a few GTK+4 bugs and mark quite a few of them as blockers for a 4.0 release. I hope this will happen as soon as possible since there are quite a few usability regressions in current gtk+ master compared to gtk3 and those need time to get ironed out.

During the quiet time I've also managed to find a bunch of bugs in the GL renderer which I will tackle in the next days. Benjamin wrote a performance benchmark that made the GL renderer crash when using gsk_renderer_render_texture multiple times in a row, but I've already fixed that one. Unfortunately, it also broke rendering textures from a different GL context, but solving that requires deeper surgery and deciding on how we handle sharing (or not sharing) GL resources across different renderers.

I'd like to thank everyone who organized this event as well as the GNOME foundation for sponsoring my trip there. Spending nine hours in a bus was horrible but at least I arrived :) It was nice talking to the people in person instead of just over IRC. Also, they can't ignore you as easily as on IRC!




Also, if you're reading this and have some application using gtk3, maybe you want to consider porting to gtk4 early? Doing that and reporting the problems you encounter will ensure a smoother experience once you fully switch to gtk4.

I've already ported all of Corebird to gtk4 in a branch that I will merge to master as soon as some of the bigger problems are fixed. I also have some automation for it, in case your project is written in C.

Federico Mena-Quintero: Writing a command-line program in Rust

Sat, 03/02/2018 - 6:41 PM

As a library writer, it feels a bit strange, but refreshing, to write a program that actually has a main() function.

My experience with Rust so far has been threefold:

  • Porting chunks of C to Rust for librsvg - this is all work on librsvg's internals and no users are exposed to it directly.

  • Working on gnome-class, the procedural macro ("a little compiler") to generate GObject boilerplate from Rust. This feels like working on the edge of the exotic; it is something that runs in the Rust compiler and spits code on behalf of the programmer.

  • A few patches to the gtk-rs ecosystem. Again, work on the internals, or something that feels library-like.

But other than toy programs to test things, I haven't written a stand-alone tool until rsvg-bench. It's quite a thrill to be able to just run the thing instead of waiting for other people to write code to use it!

Parsing command-line arguments

There are quite a few Rust crates ("libraries") to parse command-line arguments. I read about structopt via Robert O'Callahan's blog; structopt lets you define a struct to hold the values of your command-line options, and then you annotate the fields in that struct to indicate how they should be parsed from the command line. It works via Rust's procedural macros. Internally it generates stuff for the clap crate, a well-established mechanism for dealing with command-line options.

And it is quite pleasant! This is basically all I needed to do:

#[derive(StructOpt, Debug)]
#[structopt(name = "rsvg-bench", about = "Benchmarking utility for librsvg.")]
struct Opt {
    #[structopt(short = "s", long = "sleep",
                help = "Number of seconds to sleep before starting to process SVGs",
                default_value = "0")]
    sleep_secs: usize,

    #[structopt(short = "p", long = "num-parse",
                help = "Number of times to parse each file",
                default_value = "100")]
    num_parse: usize,

    #[structopt(short = "r", long = "num-render",
                help = "Number of times to render each file",
                default_value = "100")]
    num_render: usize,

    #[structopt(long = "pixbuf",
                help = "Render to a GdkPixbuf instead of a Cairo image surface")]
    render_to_pixbuf: bool,

    #[structopt(help = "Input files or directories", parse(from_os_str))]
    inputs: Vec<PathBuf>
}

fn main() {
    let opt = Opt::from_args();

    if opt.inputs.len() == 0 {
        eprintln!("No input files or directories specified\n");
        process::exit(1);
    }

    ...
}

Each field in the Opt struct above corresponds to one command-line argument; each field has annotations for structopt to generate the appropriate code to parse each option. For example, the render_to_pixbuf field has a long option name called "pixbuf"; that field will be set to true if the --pixbuf option gets passed to rsvg-bench.

Handling errors

Command-line programs generally have the luxury of being able to just exit as soon as they encounter an error.

In C this is a bit cumbersome since you need to deal with every place that may return an error, find out what to print, and call exit(1) by hand or something. If you miss a single place where an error is returned, your program will keep running with an inconsistent state.

In languages with exception handling, it's a bit easier - a small script can just let exceptions be thrown wherever, and if it catches them at the toplevel, it can just print the exception and abort gracefully. However, these nonlocal jumps make me uncomfortable; I think exceptions are hard to reason about.

Rust makes this easy: it forces you to handle every call that may return an error, but it lets you bubble errors up easily, or handle them in-place, or translate them to a higher-level error.

In the Rust world the failure crate is getting a lot of traction as a convenient, modern way to handle errors.

In rsvg-bench, errors can come from several places:

  • I/O errors when reading files and directories.

  • Errors from librsvg's parsing stage; you get a GError.

  • Errors from the rendering stage. This can be a Cairo error (a cairo_status_t), or a simple "something bad happened; can't render" from librsvg's old convenience api in C. Don't you hate it when C code just gives up and returns NULL or a boolean false, without any further details on what went wrong?

For rsvg-bench, I just needed to be able to represent Cairo errors and generic rendering errors. Everything else, like an io::Error, is automatically wrapped by the failure crate's mechanism. I just needed to do this:

extern crate failure;
#[macro_use]
extern crate failure_derive;

#[derive(Debug, Fail)]
enum ProcessingError {
    #[fail(display = "Cairo error: {:?}", status)]
    CairoError { status: cairo::Status },

    #[fail(display = "Rendering error")]
    RenderingError
}

Whenever the code gets a Cairo error, I can translate it to a ProcessingError::CairoError and bubble it up:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    let dim = handle.get_dimensions();

    let surface = cairo::ImageSurface::create(cairo::Format::ARgb32, dim.width, dim.height)
        .map_err(|e| ProcessingError::CairoError { status: e })?;

    ...
}

And when librsvg returns a "couldn't render" error, I translate that to a ProcessingError::RenderingError:

fn render_to_cairo(handle: &rsvg::Handle) -> Result<(), Error> {
    ...

    let cr = cairo::Context::new(&surface);

    if handle.render_cairo(&cr) {
        Ok(())
    } else {
        Err(Error::from(ProcessingError::RenderingError))
    }
}

Here, the Ok() case of the Result does not contain any value — it's just (), as the generated images are not stored anywhere: they are just rendered to get some timings, not to be saved or anything.

Up to where do errors bubble?

This is the "do everything" function:

fn run(opt: &Opt) -> Result<(), Error> {
    ...

    for path in &opt.inputs {
        process_path(opt, &path)?;
    }

    Ok(())
}

For each path passed on the command line, process it. If the path corresponds to a directory, the program will scan it recursively; if the path is an SVG file, the program will load the file and render it.

Finally, main() just has this:

fn main() {
    let opt = Opt::from_args();

    ...

    match run(&opt) {
        Ok(_) => (),

        Err(e) => {
            eprintln!("{}", e);
            process::exit(1);
        }
    }
}

I.e. process command line arguments, run the whole thing, and print an error if there was one.

I really appreciate that most places that can return an error can just put a ? for the error to bubble up. This is much more legible than in C, where every call must have an if (something_bad_happened) { deal_with_it; } after it... and Rust won't let me get away with ignoring an error, but it makes it easy to actually deal with it properly.

Reading an SVG file quickly

Why, just mmap() it and feed it to librsvg, to avoid buffer copies. This is easy in Rust:

fn process_file<P: AsRef<Path>>(opt: &Opt, path: P) -> Result<(), Error> {
    let file = File::open(path)?;
    let mmap = unsafe { MmapOptions::new().map(&file)? };
    let bytes = &mmap;
    let handle = rsvg::Handle::new_from_data(bytes)?;

    ...
}

Many things can go wrong here:

  • File::open() can return an io::Error.
  • MmapOptions::map() can return an io::Error from the mmap(2) system call, or from the fstat(2) to read the file's size to map it.
  • rsvg::Handle::new_from_data() can return a GError from parsing the file.

The little ? characters after each call that can return an error mean, just give me back the result, or convert the error to a failure::Error that can be examined later. This is beautifully legible to me.

Summary

Writing command-line programs in Rust is fun! It's nice to have neurotically-safe scripts that one can trust in the future.

Rsvg-bench is available here.

Tristan Van Berkom: BuildStream Hackfest and FOSDEM

Sat, 03/02/2018 - 6:06 PM

Hi all !

In case you happen to be here at FOSDEM, we have a BuildStream talk in the distributions track tomorrow at noon.

BuildStream Hackfest

I also wanted to sum up a last minute BuildStream hackfest which occurred in Manchester just a week ago. Bloomberg sent some of their Developer Experience engineering team members over to the Codethink office in Manchester where the whole BuildStream team was present, and we split up into groups to plan upcoming coding sprints, land some outstanding work and fix some bugs.

Here are some of the things we’ve been talking about…

Caching the build tree

Currently we only cache the output in a given artifact, but caching the build tree itself, including the sources and intermediate object files, can bring some advantages to the table.

Some things which come to mind:

  • When opening a workspace, if the element has already been built, it becomes interesting to check out the whole built tree instead of only the sources. This can turn your first workspace build into an incremental build.
  • Reconstruct build environment to test locally. In case you encounter failures in remote CI runners, it’s interesting to be able to reproduce the shell environment locally in order to debug the build.
Benchmarking initiative

There is an initiative started by Angelos Evripiotis to get a benchmarking process in place to benchmark BuildStream’s performance in various aspects such as load times for complex projects, staging time for preparing a build sandbox etc, so we can keep an eye on performance improvements and regressions over time.

We talked a bit about the implementation details of this; I especially like that we can use BuildStream's logging output to drive benchmarks, as this eliminates observer side effects.

Build Farm

We talked about farming out tasks and discussed the possibility of leveraging the remote execution framework which the Bazel folks are working on. We noticed that if we could leverage such a framework to farm out the execution of a command against a specific filesystem tree, we could essentially use the same protocol to farm out a command to a worker with a specified CPU architecture, solving the cross build problem at the same time.

Incidentally, we also spent some time with some of the Bazel team here at FOSDEM, and seeing as there are some interesting opportunities for these projects to complement each other, we will try to see about organizing a hackfest for these two projects.

Incremental Builds

Chandon Singh finally landed his patches which mount the workspaces into the build environment at build time instead of copying them in; this is the first big step toward giving us incremental builds for workspaced elements.

Besides this, we need to handle the case where configure commands should not be run after they have already been run once, with an option to reset or force them to be run again (i.e. avoid recreating config.h and rebuilding everything anyway). There is also another subtle bug to investigate here.

But more than this, we also got to talking about a possible workflow where one runs CI on the latest of everything in a given mainline, and runs the builds incrementally, without workspaces. This would require some client state which would recall what was used in the last build in order to feed in the build tree for the next build and then apply/patch the differences to that tree, causing an incremental build. This can be very interesting when you have highly active projects which want to run CI on every commit (or branch merge), and then scale that to thousands of modules.

Reproducibility Testing

BuildStream strives for bit-for-bit reproducibility in builds, but it should also have some minimal support for letting one easily check just how reproducible their builds are.

Jim MacArthur has been following up on this.

 

In Summary

The hackfest was a great success, but it was organized on rather short notice; we will try to organize another hackfest well in advance and give more people a chance to participate next time around.

 

And… here is a group photo !

BuildStream Hackfest Group Photo

 

If you’re at FOSDEM… then see you in a bit at the GNOME beer event !

Benjamin Otte: builders

Sat, 03/02/2018 - 2:05 PM

An idiom that has shown up in GTK4 development is the idea of immutable objects and builders. The idea behind an immutable object is that you can be sure that it doesn’t change under you: you don’t need to track changes; you can expose it in your API without having to fear users of the API are gonna change that object under you; you can use it as a key when caching; and last but not least you can pass it into multiple threads without requiring synchronization.
Examples of immutable objects in GTK4 are GdkCursor, GdkTexture, GdkContentFormats or GskRenderNode. An example outside of GTK would be GBytes.

Sometimes these objects are easy to create using a series of constructors, but oftentimes these objects are more complex. And because those objects are immutable, we can’t just provide setters for the various properties. Instead, we provide builder objects. They are short-lived objects whose only purpose is to manage the construction of an immutable object.
Here’s an example of how that would look:

Sandwich *
make_me_a_sandwich (void)
{
  SandwichBuilder *builder;

  /* create builder */
  builder = sandwich_builder_new ();

  /* setup the object to create */
  sandwich_builder_set_bread (builder, white_bread);
  sandwich_builder_add_ingredient (builder, cheese);
  sandwich_builder_add_ingredient (builder, ham);
  sandwich_builder_set_toasted (builder, TRUE);

  /* free the builder, create the object and return it */
  return sandwich_builder_free_to_sandwich (builder);
}

This approach works well in C, but does not work when trying to make the builder accessible to bindings. Bindings need no explicit memory management, so they want to write code like this:

def make_me_a_sandwich():
    # create builder
    builder = SandwichBuilder ()

    # setup the object to create
    builder.set_bread (white_bread)
    builder.add_ingredient (cheese)
    builder.add_ingredient (ham)
    builder.set_toasted (True)

    # create the object and return it
    return builder.to_sandwich ()

We spent the hackfest arguing about how to create a C API that works for both of those use cases, and the consensus so far has been to turn builders into refcounted boxed types that provide both of the above APIs – but advertise the C-specific parts only to the C API and the binding-specific parts only to bindings. So the C header for the above examples would look like this:

SandwichBuilder *sandwich_builder_new (void);

/* (skip) in bindings */
Sandwich *sandwich_builder_free_to_sandwich (SandwichBuilder *builder) G_GNUC_WARN_UNUSED_RESULT;

/* to be used by bindings only */
GType sandwich_builder_get_type (void) G_GNUC_CONST;
SandwichBuilder *sandwich_builder_ref (SandwichBuilder *builder);
void sandwich_builder_unref (SandwichBuilder *builder);
Sandwich *sandwich_builder_to_sandwich (SandwichBuilder *builder);

/* shared API */
void sandwich_builder_set_bread (SandwichBuilder *builder, Bread *bread);
void sandwich_builder_add_ingredient (SandwichBuilder *builder, Ingredient *ingredient);
void sandwich_builder_set_toasted (SandwichBuilder *builder, gboolean should_toast);

/* and in the .c file: */
G_DEFINE_BOXED_TYPE (SandwichBuilder, sandwich_builder, sandwich_builder_ref, sandwich_builder_unref)

And now I’m off to review all our builder APIs so they conform to this idea.

Richard Hughes: Firmware Telemetry for Vendors

Thu, 01/02/2018 - 1:20 PM

We’ve shipped nearly 1.2 MILLION firmware updates out to Linux users since we started the LVFS project.

I found out this nugget of information using a new LVFS vendor feature, soon to be deployed: Telemetry. This builds on the previously discussed success/failure reporting and adds a single page for the vendor to get statistics about each bit of hardware. Until more people are running the latest fwupd and volunteering to share their update history it’s of limited use, but it’s still interesting.

Richard Hughes: No new batches of ColorHug2

Thu, 01/02/2018 - 10:51 AM

I was informed by AMS (the manufacturer that makes the XYZ sensor that’s the core of the CH2 device) that the AS73210 (aka MTCSiCF) and the MTI08D are end of life products. The replacement for the sensor the vendor offers is the AS73211, which of course is more expensive and electrically incompatible with the AS73210.

The somewhat-related new AS7261 sensor does look interesting as it somewhat crosses the void between a colorimeter and something that can take non-emissive readings, but it’s a completely different sensor to the one on the ColorHug2, and mechanically different to the now-abandoned ColorHug+. I’m also feeling twice burned buying specialist components from single-source suppliers.

Being parents to a 16-week-old baby doesn’t put Ania and me in a position where I can go through the various phases of testing, prototypes, test batch, production batch etc. for a device refresh like we did with the ColorHug -> ColorHug2. I’m hoping I can get a chance to play with some more kinds of sensors from different vendors, although that’s not going to happen before I start getting my free time back. At the moment I have about 50 fully completed ColorHug2 devices in boxes ready to be sold.

In the true spirit of OpenHardware and free enterprise, if anyone does want to help with the design of a new ColorHug device I’m open for ideas. ColorHug was really just a hobby that got out of control, and I’d love for someone else to have the thrill and excitement of building a nano-company from scratch. Taking me out of the equation completely, I’d be equally happy referring people who want to buy a ColorHug upgrade or replacement to a different project, if the new product met with my approval :)

So, 50 ColorHugs should last about 3 months before stock runs out, but I know a few people are using devices on production lines and other sorts of industrial control — if that sounds familiar, and you’d like to buy a spare device, now is the time to do so. Of course, I’ll continue supporting all the existing 3162 devices well into the future. I hope to be back building OpenHardware soon, and hopefully with a new and improved ColorHug3.

Joaquim Rocha: Attending FOSDEM 2018

Wed, 31/01/2018 - 11:53 PM

On Friday I will attend FOSDEM 2018 after having missed last year’s (but I had a good excuse).

It’s pretty sad that there’s no longer a Desktop devroom, but there are still plenty of other interesting topics to follow.

Thanks to my company Endless for making my attendance easier, and if you’d like to know more about the problems we’re solving just reach out, as FOSDEM is the perfect place for that!

Jeremy Bicha: logo.png for default avatar for GitLab repos

Wed, 31/01/2018 - 4:24 AM

Debian and GNOME have both recently adopted self-hosted GitLab for their git hosting. GNOME’s service is named simply https://gitlab.gnome.org/ ; Debian’s has the more intriguing name https://salsa.debian.org/ . If you ask the Salsa sysadmins, they’ll explain that they were in a Mexican restaurant when they needed to decide on a name!

There’s a useful under-documented feature I found. If you place a logo.png in the root of your repository, it will be automatically used as the default “avatar” for your project (in other words, the logo that shows up on the web page next to your project).

I added a logo.png to GNOME Tweaks at GNOME and it automatically showed up in Salsa when I imported the new version.

Other Notes

I first tried with a symlink to my app icon, but it didn’t work. I had to actually copy the icon.

The logo.png convention doesn’t seem to be supported at GitHub currently.

Daniel Espinosa: Modify SVG using GSVGtk: First Report

Wed, 31/01/2018 - 3:09 AM

GSVGtk is a library to provide GTK+ widgets you can use to access SVG files. It is powered by GSVG, so it can access each shape and its properties using a GObject API based on the W3C SVG 1.1 specification.

Currently, GSVGtk uses Clutter to encapsulate SVG shapes, rendering them inside Clutter Actors through librsvg, and maps events back to the source SVG in order to eventually modify the original definitions, such as a shape’s position.

In the following video, you can see GSVGtk’s Container based on Clutter, loading an SVG file and taking some shapes from it to show on the scene.

Shapes can be moved around, with feedback about their position in millimeters. If a shape is moved out of the stage, you can pan the scene to reach it.

GSVG allows adding SVG transformations to shapes, like scale and translate; matrix, skew and rotate are a work in progress. Once finished, they can be used with Clutter Actors to render the transformations on screen with GSVGtk.

All CSS properties required by the standard are present in GSVG, but a good UI to modify them is needed; maybe some of you want to create mockups to be implemented.

GTK4 and GSVGtk

On GTK+’s IRC, I asked about GTK4 and the future of Clutter, and got some recommendations from GTK+ developers. I’ll keep Clutter for backward compatibility and to be able to use GSVGtk in LTS distributions, but add interfaces, a la GDK, to be able to have different backends.

The Clutter-based backend will be the first to be finished; next, or in parallel, I’ll port GSVGtk to GTK4, in order to eventually drop Clutter support when GTK+ 4.0 reaches long-term stability, around the 4.6 release.

GSVGtk and PLogic

PLogic will be the first project to take advantage of GSVGtk and GSVG, in order to render logic diagrams with basic editing capabilities.

The basic editing capabilities required for PLogic will push GSVG development ahead to implement new features from the specification, as well as GSVGtk to provide the graphical tools to achieve them.

SVG Rendering

GSVGtk, not GSVG, uses librsvg to render SVG shapes and text on screen. librsvg is fast, but for large projects I think the pipeline currently used will hit a bottleneck.

For rendering, GSVGtk asks GSVG to generate a string representation of the SVG in XML, then passes it to librsvg for rendering. This could be avoided if librsvg exposed internal rendering methods, so they could be called directly using GSVG structures and objects.

A solution is to take a dive into librsvg’s source code, extract those methods, and import them into GSVG or GSVGtk (thanks, free software), so rendering can get a speed-up. Yes, creating an MR for librsvg to expose them is an option, but I need C methods and maybe this is not in line with the progress of the Rustify work in librsvg.

Rendering directly will help avoid rendering to a Clutter Canvas, which imposes semi-transparent backgrounds for shapes (see the video). More importantly, direct rendering will help with shape selection, as a shape’s sensitive area can follow its own form, avoiding square areas and allowing objects behind others with a transparent fill to be selected.

Again, GTK4 will be a very interesting thing to check out for SVG rendering; maybe it will expose a good API, a la Cairo or similar, on which to implement or port librsvg methods. I’m not sure, because I haven’t had time to check that API yet.

Calum Benson: It’s alive!

Tue, 30/01/2018 - 9:35 PM

For those of you who like to peek over the Linux fence from time to time, Oracle has just publicly released the first version of Solaris with a GNOME 3 desktop (3.24, to be precise), in the shape of the Solaris 11.4 Public Beta.

It’s been a few years since I was part of the Solaris desktop team, so this was nothing to do with me. But I did work on the Solaris Analytics BUI for two or three years, before moving on to Big Red pastures new.

Peter Hutterer: tuhi - a daemon to support Wacom SmartPad devices

Tue, 30/01/2018 - 9:59 AM

For the last few weeks, Benjamin Tissoires and I have been working on a new project: Tuhi [1], a daemon to connect to and download data from Wacom SmartPad devices like the Bamboo Spark, Bamboo Slate and, eventually, the Bamboo Folio and the Intuos Pro Paper devices. These devices are not traditional graphics tablets plugged into a computer but rather smart notepads where the user's offline drawing is saved as stroke data in vector format and later synchronised with the host computer over Bluetooth. There it can be converted to SVG, integrated into the applications, etc. Wacom's application for this is Inkspace.

There is no official Linux support for these devices. Benjamin and I started looking at the protocol dumps last year and, luckily, they're not completely indecipherable and reverse-engineering them was relatively straightforward. Now it is a few weeks later and we have something that is usable (if a bit rough) and provides the foundation for supporting these devices properly on the Linux desktop. The repository is available on github at https://github.com/tuhiproject/tuhi/.

The main core is a DBus session daemon written in Python. That daemon connects to the devices and exposes them over a custom DBus API. That API is relatively simple: it supports methods to search for devices, pair devices, listen for data from devices and finally to fetch the data. It has some basic extras built in like temporary storage of the drawing data so they survive daemon restarts. But otherwise it's a three-way mapper between the Bluez device, the serial controller we talk to on the device and the Tuhi DBus API presented to the clients. One such client is the little commandline tool that comes with tuhi: tuhi-kete [2]. Here's a short example:


$> ./tools/tuhi-kete.py
Tuhi shell control
tuhi> search on
INFO: Pairable device: E2:43:03:67:0E:01 - Bamboo Spark
tuhi> pair E2:43:03:67:0E:01
INFO: E2:43:03:67:0E:01 - Bamboo Spark: Press button on device now
INFO: E2:43:03:67:0E:01 - Bamboo Spark: Pairing successful
tuhi> listen E2:43:03:67:0E:01
INFO: E2:43:03:67:0E:01 - Bamboo Spark: drawings available: 1516853586, 1516859506, [...]
tuhi> list
E2:43:03:67:0E:01 - Bamboo Spark
tuhi> info E2:43:03:67:0E:01
E2:43:03:67:0E:01 - Bamboo Spark
Available drawings:
* 1516853586: drawn on the 2018-01-25 at 14:13
* 1516859506: drawn on the 2018-01-25 at 15:51
* 1516860008: drawn on the 2018-01-25 at 16:00
* 1517189792: drawn on the 2018-01-29 at 11:36
tuhi> fetch E2:43:03:67:0E:01 1516853586
INFO: Bamboo Spark: saved file "Bamboo Spark-2018-01-25-14-13.svg"
I won't go into the details because most should be obvious and this is purely a debugging client, not a client we expect real users to use. Plus, everything is still changing quite quickly at this point.

The next step is to get a proper GUI application working. As usual with any GUI-related matter, we'd really appreciate some help :)

The project is young and relying on reverse-engineered protocols means there are still a few rough edges. Right now, the Bamboo Spark and Slate are supported because we have access to those. The Folio should work, it looks like it's a re-packaged Slate. Intuos Pro Paper support is still pending, we don't have access to a device at this point. If you're interested in testing or helping out, come on over to the github site and get started!

[1] tuhi: Maori for "writing, script"
[2] kete: Maori for "kit"

Fabiano Fidencio: Fleet Commander!

Mon, 29/01/2018 - 10:11 PM
A really short update! I presented a talk about Fleet Commander at DevConf CZ'2018, which basically showcases the current status of the project after having the whole integration with FreeIPA and SSSD done!

Please, take a look at the presentation and slides.

While preparing this presentation we've found some issues on SSSD side, which already have some PRs opened: #495 and #497.

Also, the fc-vagans project has been created to help people easily test and develop for Fleet Commander.

Hopefully we'll be able to get the SSSD patches merged and backported to Fedora 27. Meanwhile, I'd strongly recommend people use fc-vagans, as the patches are present there.


So, give it a try and, please, talk to us (#fleet-commander at freenode.net)!


And ... a similar talk will be given at FOSDEM'2018! Take a look at our DevRoom schedule and join us there!

Jeremy Bicha: GNOME Tweaks 3.28 Progress Report 1

Mon, 29/01/2018 - 10:09 PM

A few days ago, I released GNOME Tweaks 3.27.4, a development snapshot on the way to the next stable version 3.28 which will be released alongside GNOME 3.28 in March. Here are some highlights of what’s changed since 3.26.

New Name (Part 2)

For 3.26, we renamed GNOME Tweak Tool to GNOME Tweaks. It was only a partial rename since many underlying parts still used the gnome-tweak-tool name. For 3.28, we have completed the rename. We have renamed the binary, the source tarball releases, the git repository, the .desktop, and app icons. For upgrade compatibility, the autostart file and helper script for the Suspend on Lid Close inhibitor keeps the old name.

New Home

GNOME Tweaks has moved from the classic GNOME Git and Bugzilla to the new GNOME-hosted gitlab.gnome.org. The new hosting includes git hosting, a bug tracker and merge requests. Much of GNOME Core has moved this cycle, and I expect many more projects will move for the 3.30 cycle later this year.

Dark Theme Switch Removed

As promised, the Global Dark Theme switch has been removed. Read my previous post for more explanation of why it’s removed and a brief mention of how theme developers should adapt (provide a separate Dark theme!).

Improved Theme Handling

The theme chooser has been improved in several small ways. Now that it’s quite possible to have a GNOME desktop without any gtk2 apps, it doesn’t make sense to require that a theme provide a gtk2 version to show up in the theme chooser so that requirement has been dropped.

The theme chooser will no longer show the same theme name multiple times if you have a system-wide installed theme and a theme in your user theme directory with the same name. Additionally, GNOME Tweaks does better at supporting the  XDG_DATA_DIRS standard in case you use custom locations to store your themes or gsettings overrides.

GNOME Tweaks 3.27.4 with the HighContrastInverse theme

Finally, gtk3 still offers a HighContrastInverse theme but most people probably weren’t aware of that since it didn’t show up in Tweaks. It does now! It is much darker than Adwaita Dark.

Several of these theme improvements (including HighContrastInverse) have also been included in 3.26.4.

For more details about what’s changed and who’s done the changing, see the project NEWS file.

Sam Thursfield: How BuildStream uses OSTree

Mon, 29/01/2018 - 7:09 PM

I’ve been asked a few times about the relationship between BuildStream and OSTree. The answer is a bit complicated so I decided to answer the question here.

OSTree is a content-addressed content store, inspired in many ways by Git but optimized for storing trees of binary files rather than trees of text files.

BuildStream is an integration tool which deals with trees of binary files, and at present it uses OSTree to help with storing, identifying and transferring these trees of binary files.

I’m deliberately using the abstract term “trees of binary files” here because neither BuildStream nor OSTree limits itself to a particular use case. BuildStream itself uses the term “artifact” to describe the output of a build job, and in practice this could be the set of development headers and documentation for a library, a package file such as a .deb or .rpm, a filesystem for a whole operating system, a bootable VM disk image, or whatever else.

Anyway let’s get to the point! There are actually four ways that BuildStream directly makes use of OSTree.

The `ostree` source plugin

The `ostree` source plugin allows pulling arbitrary data from a remote OSTree repository. It is normally used with an `import` element as a way of importing prebuilt binaries into a build pipeline. For example BuildStream’s integration tests currently run on top of the Freedesktop SDK binaries (which were originally intended for use with Flatpak applications but are equally useful as a generic platform runtime). The gnome-build-meta project uses this mechanism to import a prebuilt Debian base image, which is currently manually pushed to an OSTree repo (this is a temporary measure, in future we want to base gnome-build-meta on top of the upcoming Freedesktop SDK 1.8 instead).

It’s also possible to import binaries using the `tar` and `local` source types of course, and you can even use the `git` or `bzr` plugins for this if you really get off on using the wrong tools for the wrong job.

In future we will likely add other source plugins for importing binaries, for example from the Docker Registry and perhaps using casync.

Storing artifacts locally

Once a build has completed, BuildStream needs to store the results somewhere locally. The results go in the exciting-sounding “local artifact cache”, which is usually located inside your home directory at ~/.cache/buildstream/artifacts.

There are actually two implementations of the local artifact cache, one using OSTree and one using .tar files. There are several advantages to the OSTree implementation, a major one being that it deduplicates files that are present in multiple artifacts, which can save huge amounts of disk space if you do many builds of a large component. The biggest disadvantage to using OSTree is that it currently relies on a bunch of features that are specific to the Linux kernel and so it can only run on Linux OSes. BuildStream needs to support other UNIX-like operating systems and we found the simplest route for now to solve this was to implement a second type of local artifact cache which stores each artifact as a separate .tar file. This is less efficient in terms of disk space but much more portable.

So the fact that we use OSTree for caching artifacts locally should be considered an implementation detail of BuildStream. If a better tool for the job is found then we will switch to that. The precise structure of the artifacts should also be considered an internal detail — it’s possible to check artifacts out from the cache by poking around in the ~/.cache/buildstream/artifacts directory but there’s no stability guarantee in how you do this or what you might get out as a result. If you want to see the results of a build, use the `bst checkout` command.

It’s worth noting that we don’t yet support automated cleanups of the local artifact cache; that is issue #135.

Storing artifacts remotely

As a way of saving everyone from building the same things, BuildStream supports downloading prebuilt artifacts from a remote cache.

Currently the recommended way of setting up a remote artifact cache requires that you use OSTree. In theory, any storage mechanism could be used but that is currently not trivial because we also make use of OSTree’s transfer protocols, as described below.

We currently lack a way to do automated artifact expiry on remote caches.

Pushing and pulling artifacts

Of course there needs to be a way to push and pull artifacts between the local cache and the remote cache.

OSTree is designed to support downloading artifacts over HTTP or HTTPS and this is how `bst pull` works. The `bst push` command is more complex because officially OSTree does not support pushing, however we have a rather intricate push mechanism based off Dan Nicholson’s ostree-push project which tunnels the data over SSH in order to get it onto the remote server.

Users of the tar cache cannot currently interact with remote artifact shares at all, which is an unfortunate issue that we aim to solve this year. The solution may be to switch away from using OSTree’s transfer protocols and instead marshal the data into some other format in order to transfer it. We are particularly keen to make use of the Bazel content-addressable store protocol, although there may be too much of an impedance mismatch there.

Indirect uses of OSTree

It may be that you also end up deploying stuff into an OSTree repository somewhere. BuildStream itself is only interested with building and integrating your project — once that is done you run `bst checkout` and are rewarded with a tree of files on your local machine. What if, let’s say, your project aims to build a Flatpak application?

Flatpak actually uses OSTree as well and so your deployment step may involve committing those files into yet another OSTree repo ready for Flatpak to run them. (This can be a bit long winded at present so there will likely be some better integration appearing here at some point).

So, is anywhere safe from the rise of OSTree or is it going to take over completely? Something you might not know about me is that I grew up outside a town in north Shropshire called Oswestry. Is that a coincidence? I can’t say.

 

Oswestry, from Wikipedia.

Julita Inca: We are back! #LinuXatUNI on the stage

Mon, 29/01/2018 - 7:08 PM

Yesterday, we had an ‘extreme’ workshop-day at Villa el Salvador to prepare ourselves for the LFCS Certification Exam in 2018. We had received one-year scholarships from the Linux Foundation as prizes for the winners of the programs the group LinuXatUNI organized last year, and we are more than happy to learn Linux administration in depth.

We differentiated the root shell from the user environments by using commands mixed with pipes and redirection commands. As well, we learned more about the configuration of the PATH, wildcards, ulimit, ipcs to check semaphores, shared memory segments and message queues, and the disk usage command, along with experiences with regular expressions.

It was a journey of almost 5 hours round trip to Toto’s house; we enjoyed the sightseeing, though. Thanks to Fiorella Effio and Carlos Aznaran for your effort and kindness!

Debarshi Ray: GNOME Photos: an overview of thumbnailing

Mon, 29/01/2018 - 4:23 PM

From time to time, I find myself being asked about various details about how content is thumbnailed in GNOME Photos, and the reasons behind various implementation decisions. I can never remember all the details, and always have to dig through Git history and bug reports across multiple modules to come up with an answer. I am hoping that this brain dump will be more persistent than my memory, and more holistic than random comments here and there.

Feel free to read and comment, or you can also happily ignore it.

Background

Having accurate and quality thumbnails is absolutely crucial for Photos. The main user interface is a grid of thumbnails. By design, it tries hard not to expose the filesystem, which means that the user doesn’t have the path or directory hierarchy to complement the contents of the grid. In comparison, thumbnails can be optional in a file manager. Note how Files has settings to disable thumbnailing, and defaults to not thumbnailing remote content, but users can still go about interacting with their files.

Thumbnailing in GNOME is spread across GIO, GVfs, GnomeDesktopThumbnailFactory, and together they implement the Thumbnail Managing Standard. Usually, one uses GIO to lookup thumbnails from the cache and the state they are in, while GnomeDesktopThumbnailFactory is used to create and store the thumbnail files. These thumbnails are stored in the global thumbnail cache in $XDG_CACHE_HOME/thumbnails, and are often, but not necessarily, created by the thumbnailers listed under /usr/share/thumbnailers. This is how most components (eg., GTK+’s GtkFileChooserWidget), and applications (eg., Files and Videos) show thumbnails.
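As a purely illustrative sketch of the lookup side of that split, this is roughly what asking GIO for a file's cached thumbnail and its state looks like in C; the file path is just a placeholder:

GFile *file = g_file_new_for_path ("/home/user/Pictures/photo.jpg");
GError *error = NULL;

/* ask for the thumbnail path plus the standard validity/failure flags */
GFileInfo *info = g_file_query_info (file,
                                     G_FILE_ATTRIBUTE_THUMBNAIL_PATH ","
                                     G_FILE_ATTRIBUTE_THUMBNAIL_IS_VALID ","
                                     G_FILE_ATTRIBUTE_THUMBNAILING_FAILED,
                                     G_FILE_QUERY_INFO_NONE,
                                     NULL,
                                     &error);

if (info != NULL)
  {
    const gchar *path = g_file_info_get_attribute_byte_string (info, G_FILE_ATTRIBUTE_THUMBNAIL_PATH);
    gboolean is_valid = g_file_info_get_attribute_boolean (info, G_FILE_ATTRIBUTE_THUMBNAIL_IS_VALID);
    gboolean failed = g_file_info_get_attribute_boolean (info, G_FILE_ATTRIBUTE_THUMBNAILING_FAILED);

    if (path != NULL && is_valid)
      g_print ("up-to-date thumbnail: %s\n", path);
    else if (failed)
      g_print ("a previous thumbnailing attempt failed\n");
    else
      g_print ("no usable thumbnail yet\n");

    g_object_unref (info);
  }

g_object_unref (file);

Creating missing thumbnails would then normally go through GnomeDesktopThumbnailFactory, which is the part Photos replaced with its own thumbnailer as described below.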

Then there are those “odd” ones that have their own custom setup.

Prior to version 3.24, Photos entirely relied on the global cache and the aforementioned GNOME APIs for its thumbnails. That changed in 3.24 when it switched to its own custom thumbnailer and application specific cache.

Requirements

Ever since editing was added in 3.20, we felt the need to ensure that the thumbnail represents the current state of each item. Being a non-destructive editor, Photos never modifies the original file but separately serializes the edits to disk. The image is rendered by loading the original file, deserializing the edits into objects in memory and running the pixels through them [1]. Therefore, to have the thumbnails accurately represent the current state of the item, it would have to do something similar. However, the edits are application-specific [2], so it is not reasonable to expect the generic OS-wide thumbnailers to be able to handle them.

I believe this is a requirement that all non-destructive image editors have [3]. Notable examples are Darktable and Shotwell.

Secondly, it is important to be able to create and lookup thumbnails of a specific size, as opposed to enumerated constants with pre-determined presets.

The standard specifies two sizes – normal, which is 128×128, and large, which is 256×256. I think this was alright in a world without HiDPI, and is also fine if the thumbnails are either too small or are not an existential necessity for the application. For a HiDPI display with a scaling factor of N, we want to make the thumbnail grid as visually appealing as possible by pumping in NxN times more pixels. Since Photos wants the thumbnails to be 256×256 logical pixels, they should be 256Nx256N raw device pixels on HiDPI. To make things complicated, the cache might get used across different scaling factors – either display or disk got switched, multi-monitor with different resolutions, etc..

Upscaling the low-resolution counterpart of a thumbnail by N is still passable, but it looks much worse if the thumbnail is significantly smaller. Although, I must note that this was the easiest hurdle to surmount. It originates from GIO’s desire to fall back to 128×128 thumbnails, even if the application asked for 256×256. This is pretty straightforward to fix, if necessary.

Last but not the least, I find it important to version the cache to tide over bugs in the thumbnailer. If the cache isn’t versioned, then it is difficult to discard thumbnails that might have been generated by a broken thumbnailer. Hopefully, such bugs would be rare enough that it won’t be necessary to invalidate the cache very often, but when they do happen, it is very reassuring to be able to bump the version, and be guaranteed that users won’t be looking at a broken user interface.

Solution

Starting from version 3.24, Photos uses its own out-of-process thumbnailer and cache [4]. The cache is at $XDG_CACHE_HOME/gnome-photos/thumbnails/$SIZE-$GENERATION, where SIZE is the thumbnail size in raw device pixels and GENERATION is the cache’s version. The main application talks to the thumbnailer over peer-to-peer D-Bus and a simple, cancellable private D-Bus API.

The thumbnailer isn’t separately sandboxed, though. It might be an interesting thing to look at for those who don’t use Flatpak, or to restrict it even more than the main application when running inside Flatpak’s sandbox.

Known bugs

Photos’ thumbnailing code can be traced back to its origins in GNOME Documents. They don’t persistently track thumbnailing failures, and will attempt to re-thumbnail an item that had previously failed when any metadata change is detected. In short, they don’t use G_FILE_ATTRIBUTE_THUMBNAILING_FAILED. The current behaviour might help to overcome a temporary glitch in the network, or it can be simply wasteful.

They predate the addition of G_FILE_ATTRIBUTE_THUMBNAIL_IS_VALID and don’t update the thumbnail once an item gets updated. This could have still been done using GnomeDesktopThumbnailFactory, but that’s water under the bridge, and should possibly be fixed. Although, images don’t tend to get updated so often, which is probably why nobody notices it.

Related to the above point, currently the modification time of the original doesn’t get stored in the thumbnail. It slipped through the cracks while I was reading the sources of the various modules involved in creating thumbnails in GNOME. However, a versioned cache makes it possible to fix it.

[1] If you are reading between the lines, then you might be thinking that it is serializing and deserializing GeglOperations, and you’d be right.

[2] GEGL might be a generic image processing library with its set of built-in operations, but for various reasons, an application can end up carrying its own custom operations.

[3] The idea of an application storing its edits separately from the original can strike as unusual, but this is how most modern image editors work.

[4] Both Darktable and Shotwell have similar thumbnailing infrastructure. You can read about them here and here respectively.

Philip Chimento: Geek tip: g_object_new and constructors

Sun, 28/01/2018 - 9:09 PM

tl;dr Don’t put any code in your foo_label_new() function other than g_object_new(), and watch out with Vala.

From this GJS bug report I realized there’s a trap that GObject library writers can fall into,

Avoid code at your construction site.

that I don’t think is documented anywhere. So I’m writing a blog post about it. I hope readers from Planet GNOME can help figure out where it needs to be documented.

For an object (let’s call it FooLabel) that’s part of the public API of a library (let’s call it libfoo), creating the object via its foo_label_new() constructor function should be equivalent to creating it via g_object_new().

If foo_label_new() takes no arguments then it should literally be only this:

FooLabel *
foo_label_new(void)
{
    return g_object_new(FOO_TYPE_LABEL, NULL);
}

If it does take arguments, then they should correspond to construct properties, and they should get set in the g_object_new() call. (It’s customary to at least put all construct-only properties as arguments to the constructor function.) For example:

FooLabel *
foo_label_new(const char *text)
{
    return g_object_new(FOO_TYPE_LABEL,
                        "text", text,
                        NULL);
}

Do not put any other code in foo_label_new(). That is, don’t do this:

FooLabel *
foo_label_new(void)
{
    FooLabel *retval = g_object_new(FOO_TYPE_LABEL, NULL);
    retval->priv->some_variable = 5;  /* Don't do this! */
    return retval;
}

The reason for that is because callers of your library will expect to be able to create FooLabels using g_object_new() in many situations. This is done when creating a FooLabel in JS and Python, but also when creating one from a Glade file, and also in plain old C when you need to set construct properties. In all those situations, the private field some_variable will not get initialized to 5!

Instead, put the code in foo_label_init(). That way, it will be executed regardless of how the object is constructed. And if you need to write code in the constructor that depends on construct properties that have been set, use the constructed virtual function. There’s a code example here.
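As a minimal sketch of that last point (assuming FooLabel keeps some_variable in a private struct, and using the foo_label_parent_class symbol that G_DEFINE_TYPE generates), overriding constructed looks roughly like this:

static void
foo_label_constructed(GObject *object)
{
    FooLabel *self = FOO_LABEL(object);

    /* chain up so the parent class finishes its own construction first */
    G_OBJECT_CLASS(foo_label_parent_class)->constructed(object);

    /* all construct properties have been set by this point, and this runs
     * whether the object came from foo_label_new() or g_object_new() */
    self->priv->some_variable = 5;
}

static void
foo_label_class_init(FooLabelClass *klass)
{
    G_OBJECT_CLASS(klass)->constructed = foo_label_constructed;
}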

If you want more details about what function is called when, Allison Lortie has a really useful blog post.

This trap can be easy to fall into in Vala. Using a construct block is the right way to do it:

namespace Foo {
    public class Label : GLib.Object {
        private int some_variable;

        construct {
            some_variable = 5;
        }
    }
}

This is the wrong way to do it:

namespace Foo {
    public class Label : GLib.Object {
        private int some_variable;

        public Label() {
            some_variable = 5;  // Don't do this!
        }
    }
}

This is tricky because the wrong way seems like the most obvious way to me!

This has been a public service announcement for the GNOME community, but here’s where you come in! Please help figure out where this should be documented, and whether it’s possible to enforce it through automated tools.

For example, the Writing Bindable APIs page seems like a good place to warn about it, and I’ve already added it there. But this should probably go into Vala documentation in the appropriate place. I have no idea if this is a problem with Rust’s gobject_gen! macro, but if it is then it should be documented as well.

Documented pitfalls are better than undocumented pitfalls, but removing the pitfall altogether is better. Is there a way we can check this automatically?

Jo Shields: Hello PGO

Sun, 28/01/2018 - 12:58 AM

Assuming the Planet configuration change was correct, this should be my first post aggregated on Planet GNOME.

Hello!

I’m Jo.

I used to work on Free Software at Collabora, until I sold out, and now I work on Free Software at Microsoft. Specifically, I divide my time between administering various Xamarin engineering services (primarily the public Jenkins server and its build agents); developing and managing the release of the Mono framework on Windows/Linux and the MonoDevelop IDE on Linux; and occasionally working on internal proprietary projects which definitely don’t include Visual Studio Enterprise for Linux. I’m based in the Microsoft office in Cambridge, Mass, along with the Xamarin Release Engineering team, and most of the Xamarin engineering team.

Whilst it hasn’t had the highest profile in the GNOME community for a while, Mono is still out there, in its current niches – in 2018 that would primarily be on smartphones in a wider context, and for games (either via Unity3D or MonoGame/FNA) on the Linux desktop. But hey, it’s still there for desktop apps on Linux if you want it to be! I still use Smuxi as my IRC client. Totally still a thing. And there’s the MonoDevelop IDE, which nowadays I’m trying to release on Linux via Flatpak.

So, um, hi. You’ll see blog posts from me occasionally about Linux software releasing from an ISV perspective, packaging, etc. It’ll be fun for all concerned.

Umang Jain: No more hunched back

Sat, 27/01/2018 - 11:10 AM

I have been looking into laptop stands as I spend considerable hours sitting with my laptop, which has led to a hunched back. But when I opened Amazon, I wasn’t able to justify the cost of such a simple thing.

So, a little preface. I experimented with thick books and whatnot, so that the laptop base could be raised as an inclined plane, but they had some problems and I didn’t generally like it. Therefore, what did I do? Ask the DIY internet.

/me goes and watches videos on DIY laptop stands.

Now, these videos’ people have all sorts of fancy machines/tools/workshops to make stuff like this. Obviously, I didn’t have those, so the most I could do was find a carpenter and ask him to do it.

BUT even that sounds like a lot of work and, most importantly, will take time! :-/

I have some friends over at hardware labs with access to things like a laser cutter, CNC, and 3-D printer; that could work, but again, I wanted something fast and at almost no effort cost.

All this time, I had a voice in my mind going, “It can’t be this tricky, it has to be simple, it has to be simple!!”

Finally, I got it!! PVC pipe parts. You can get them very easily in India, they are really, really cheap and they give a sturdy construction. So, the total cost incurred was Rs. 120/- (1/6th of the Amazon thing, whatever that was) and I built it within half an hour. It’s very simple, so pictures will do the talking. So, no more hunched back problem.

This does a pretty good job given the cost incurred.

Tobias Bernard: Introducing the CSD Initiative

Fri, 26/01/2018 - 4:24 PM

tl;dr: Let’s get rid of title bars. Join the revolution!

Unless you’re one of a very lucky few, you probably use apps with title bars. In case you’ve never come across that term, title bars are the largely empty bars at the top of some application windows. They contain only the window title and a close button, and are completely separate from the window’s content. This makes them very inflexible, as they can not contain any additional UI elements, or integrate with the application window’s content.

Blender, with its badly integrated and pretty much useless title bar

Luckily, the GNOME ecosystem has been moving away from title bars in favor of header bars. This is a newer, more flexible pattern that allows putting window controls and other UI elements in the same bar. Header bars are client-side decorations (CSD), which means they are drawn by the app rather than the display server. This allows for better integration between application and window chrome.

 

GNOME Builder, an app that makes heavy use of the header bar for UI elements

All GNOME apps (except for Terminal) have moved to header bars over the past few years, and so have many third-party apps. However, there are still a few holdouts. Sadly, these include some of the most important productivity apps people rely on every day (e.g. LibreOffice, Inkscape, and Blender).

There are ways to hide title bars on maximized and tiled windows, but these do not (and will never) work on Wayland (Note: I’m talking about GNOME Shell on Wayland here, not other desktops). All window decorations are client-side on Wayland (even when they look like title bars), so there is no way to hide them at a window manager level.

The CSD Initiative

The only way to solve this problem long-term is to patch applications upstream to not use title bars. So this is what we’ll have to do.

That is why I’m hereby announcing the CSD Initiative, an effort to get as many applications as possible to drop title bars in favor of client-side decorations. This won’t be quick or easy, and will require work on many levels. However, with Wayland already being shipped as the default session by some distros, it’s high time we got started on this.

For a glimpse at what this kind of transition will look like in practice, we can look to Firefox and Chromium. Chromium has recently shipped GNOME-style client-side decorations in v63, and Firefox has them in experimental CSD builds. These are great examples for other apps to follow, as they show that apps don’t have to be 100% native GTK in order to use CSD effectively.

Chromium 63 with CSD

Chromium 63 with window buttons on the left

What is the goal?

This initiative doesn’t aim to make all apps look and act exactly like native GNOME apps. If an app uses GTK, we do of course want it to respect the GNOME HIG. However, it isn’t realistic to assume that apps like Blender or Telegram will ever look like perfectly native GNOME apps. In these cases, we’re aiming for functional, not visual consistency. For example, it’s fine if an Electron app has custom close/maximize/minimize icons, as long as they use the same metaphors as the native icons.

Thus, our goal is for as many apps as possible to have the following properties:

  • No title bar
  • Native-looking close/maximize/minimize icons
  • Respects the setting for showing/hiding minimize and maximize
  • Respects the setting for buttons to be on the left/right side of the window
Which apps are affected?

Basically, all applications not using GTK3 (and a few that do use GTK3). That includes GTK2, Qt, and Electron apps. There’s a list of some of the most popular affected apps on this initiative’s Wiki page.

The process will be different for each app, and the changes required will range from “can be done in a weekend” to “holy shit we have to redesign the entire app”. For example, GTK3 apps are relatively easy to port to header bars because they can just use the native GTK component. GTK2 apps first need to be ported to GTK3, which is a major undertaking in itself. Some apps will require major redesigns, because removing the title bar goes hand in hand with moving from old-style menu bars to more modern, contextual menus.
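To illustrate what “just use the native GTK component” means in practice, here is a minimal GTK3 sketch (the title and icon name are placeholders) that replaces a window’s title bar with a header bar via gtk_window_set_titlebar():

GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
GtkWidget *header = gtk_header_bar_new ();

gtk_header_bar_set_title (GTK_HEADER_BAR (header), "My App");
gtk_header_bar_set_show_close_button (GTK_HEADER_BAR (header), TRUE);

/* unlike a server-side title bar, the bar itself can hold extra UI */
gtk_header_bar_pack_start (GTK_HEADER_BAR (header),
                           gtk_button_new_from_icon_name ("document-open-symbolic",
                                                          GTK_ICON_SIZE_BUTTON));

/* the header bar replaces the title bar the window manager would otherwise draw */
gtk_window_set_titlebar (GTK_WINDOW (window), header);
gtk_widget_show_all (window);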

Many Electron apps might be low-hanging fruit, because they already use CSD on macOS. This means it should be possible to make this happen on GNU/Linux as well without major changes to the app. However, some underlying work in Electron to expose the necessary settings to apps might be required first.

Slack, like many Electron apps, uses CSD on macOS

The same Slack app on Ubuntu (with a title bar)

Apps with custom design languages will have to be evaluated on a case-by-case basis. For example, Telegram’s design should be easy to adapt to a header bar layout. Removing the title bar and adding window buttons in the toolbar would come very close to a native GNOME header bar functionally.

Telegram as it looks currently, with a title bar

Telegram mockup with no title bar

How can I help?

The first step will be making a list of all the apps affected by this initiative. You can add apps to the list on this Wiki page.

Then we’ll need to do the following things for each app:

  1. Talk to the maintainers and convince them that this is a good idea
  2. Do the design work of adapting the layout
  3. Figure out what is required at a technical level
  4. Implement the new layout and get it merged

In addition, we need to evaluate what we can do at the toolkit level to make it easier to implement CSD (e.g. in Electron or Qt apps). This will require lots of work from lots of people with different skills, and nothing will happen overnight. However, the sooner we start, the sooner we’ll live in an awesome CSD-only future.

And that’s where you come in! Are you a developer who needs help updating your app to a header bar layout? A designer who would like to help redesign apps? A web developer who’d like to help make CSD work seamlessly in Electron apps? Come to #gnome-design on IRC/Matrix and talk to us. We can do this!

Happy hacking!

 

Update:

There have been some misunderstandings about what I meant regarding server-side decorations on Wayland. As far as I know (and take this with a grain of salt), Wayland uses CSD by default, but it is possible to add SSD support via protocol extensions. KDE has proposed such a protocol, and support for this protocol has been contributed to GTK by the Sway developers. However, GNOME Shell does not support it and its developers have stated that they have no plans to support it at the moment.

This is what I was referring to by saying that “it will never work on Wayland”. I can see how this could be misinterpreted from the point of view of other desktop environments but that was not my intention, it was simply unfortunate wording. I have updated the relevant part of this post to clarify.

Also, some people seem to have taken from this that we plan on lobbying for removing title bar support from third-party apps in a way that affects other desktops. The goal of this initiative is for GNOME users to get a better experience by having fewer applications with badly integrated title bars on their systems. That doesn’t preclude applications from having title bars on different desktops, or having a preference for this (like Chromium does, for example).
