Planet GNOME

Planet GNOME - http://planet.gnome.org/

Hans de Goede: Testing Flicker Free Boot on Fedora 29

Thu, 28/02/2019 - 2:54pm
For those of you who want to give the new Flicker Free Boot enhancements for Fedora 30 a try on Fedora 29, this is possible now since the latest F29 bugfix update for plymouth also includes the new theme used in Fedora 30.

If you want to give this a try, add "plymouth.splash_delay=0 i915.fastboot=1" to your kernel commandline:

  1. Edit /etc/default/grub, add "plymouth.splash_delay=0 i915.fastboot=1" to GRUB_CMDLINE_LINUX (see the example after these steps)

  2. Run "sudo grub2-mkconfig -o /etc/grub2-efi.cfg"

Note that i915.fastboot=1 causes the backlight to not work on Haswell CPUs (e.g. i5-42xx CPUs), this is fixed in the 5.0 kernels which are currently available in rawhide/F30.

Run the following commands to get the updated plymouth and the new theme and to select the new theme:

  1. "sudo dnf update plymouth*"

  2. "sudo dnf install plymouth-theme-spinner"

  3. "sudo cp /usr/share/pixmaps/fedora-gdm-logo.png /usr/share/plymouth/themes/spinner/watermark.png"

  4. "sudo plymouth-set-default-theme -R bgrt"

Now on the next boot, or when installing offline updates, you should get the new theme.

Ludovico de Nittis: GNOME Security Internship - The end?

Thu, 28/02/2019 - 2:10pm

Here you can find the introduction, update 1, update 2, update 3, update 4 and update 5.

The end? Has the internship already ended?

Yes, incredibly these three months went by so fast. It was an awesome experience, so I’d like to thank the whole GNOME Foundation for giving me this opportunity.

What’s the project status?

The first part, protecting the system from potentially unwanted new USB devices, can be considered complete. It will probably now require just bug fixes and minor changes, if necessary. The required merge requests are up.

The second part, limiting the number of usable keys for untrusted keyboards, has reached a working stage. However, we are still evaluating the best way to achieve this: even though the current solution works, that doesn't mean it is the desirable way to do it.

What will happen from tomorrow?

Even though the internship has officially ended, I'm still committed to carrying these two features forward until they are merged upstream.

Right now I’m actively searching for a job (preferably in the open source world), so in the next few days I’ll still be able to spend a considerable amount of time in this project, at least until I find a full time job.

GNOME Control Center USB protection switch

(Screenshots: the old and the new USB protection switch in GNOME Control Center.)

As I mentioned in the last blog post, we realized that the always-on USB protection was marginally better (or worse) than the protection with the lock screen. So we decided to leave only an on/off switch in Control Center, which by default controls the USB lock-screen protection.

If necessary it’s still possible to enable the always on protection editing the usb-protection-level desktop schema.

This UI change allowed us to remove the ambiguous state where the protection switch was on but the drop-down protection level was set to "never block".

Now when you disable the protection from Control Center, we set USBGuard's InsertedDevicePolicy to apply-policy and add an allow-everything rule to the rules file. In this way we try to leave USBGuard in a clean state: from there, USBGuard will never block USB devices anymore. You are then free to leave it as is or, if you want a stricter or different protection behaviour, use a third-party GUI or even write your own script to handle USBGuard.
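As a rough sketch, the resulting USBGuard configuration amounts to something like the following (the file paths are the usual defaults and may differ on your distribution):

# /etc/usbguard/usbguard-daemon.conf
InsertedDevicePolicy=apply-policy

# /etc/usbguard/rules.conf
# a rule with no conditions matches, and therefore allows, every device
allow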

Limit untrusted USB keyboards

Finally, on this front we reached a first working implementation.

In the new Control Center tab we show an on/off switch and a list of currently plugged in keyboards. This list is automatically updated when keyboards get added or removed.

We store the authorization property with hwdb. In this way we can bind it to a specific device product and retrieve this information directly from libinput.
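To illustrate the mechanism (the property name and the device match below are made up for this example, they are not the ones used by the merge requests), an hwdb entry binding such a property to one keyboard model could look like the following; after editing, the hwdb needs to be rebuilt with systemd-hwdb update and the device re-triggered with udevadm trigger:

# /etc/udev/hwdb.d/99-usb-keyboard-trust.hwdb
evdev:input:b0003v046DpC31C*
 EXAMPLE_KEYBOARD_FULLY_TRUSTED=1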

When a new keyboard is added it is limited by default until you manually set it to be fully trusted.

While this implementation is working as expected, we are currently evaluating other alternatives.

For example, in mutter, every time we receive a new keystroke we check whether it is a dangerous key. As an alternative, we could use the kernel's EVIOCSMASK ioctl instead.

Another thing that we are evaluating is whether we should replace hwdb with another database created ad hoc for this purpose.

What to expect next and comments

In a couple of weeks I'll hopefully return with a few interesting updates regarding untrusted USB keyboards.

Feedback is welcome!

Sebastian Pölsterl: scikit-survival 0.7 released

Wed, 27/02/2019 - 10:51pm

This is a long overdue maintenance release of scikit-survival 0.7 that adds compatibility with Python 3.7 and scikit-learn 0.20. For a complete list of changes see the release notes.

Download

Pre-built conda packages are available for Linux, OSX and Windows:

conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

pip install -U scikit-survival

Federico Mena-Quintero: Rust build scripts vs. Meson

Wed, 27/02/2019 - 7:14pm

One of the pain points in trying to make the Meson build system work with Rust and Cargo is Cargo's use of build scripts, i.e. the build.rs that many Rust programs use for doing things before the main build. This post is about my exploration of what build.rs does.

Thanks to Nirbheek Chauhan for his comments and additions to a draft of this article!

TL;DR: build.rs is pretty ad-hoc and somewhat primitive, when compared to Meson's very nice, high-level patterns for build-time things.

I have the intuition that giving names to the things that are usually done in build.rs scripts, and creating abstractions for them, can make it easier later to implement those abstractions in terms of Meson. Maybe we can eliminate build.rs in most cases? Maybe Cargo can acquire higher-level concepts that plug well to Meson?

(That is... I think we can refactor our way out of this mess.)

What does build.rs do?

The first paragraph in the documentation for Cargo build scripts tells us this:

Some packages need to compile third-party non-Rust code, for example C libraries. Other packages need to link to C libraries which can either be located on the system or possibly need to be built from source. Others still need facilities for functionality such as code generation before building (think parser generators).

That is,

  • Compiling third-party non-Rust code. For example, maybe there is a C sub-library that the Rust crate needs.

  • Link to C libraries... located on the system... or built from source. For example, in gtk-rs, the sys crates link to libgtk-3.so, libcairo.so, etc. and need to find a way to locate those libraries with pkg-config.

  • Code generation. In the C world this could be generating a parser with yacc; in the Rust world there are many utilities to generate code that is later used in your actual program.

In the next sections I'll look briefly at each of these cases, but in a different order.

Code generation

Here is an example of how librsvg generates code for a couple of things that get autogenerated before compiling the main library:

  • A perfect hash function (PHF) of attributes and CSS property names.
  • A pair of lookup tables for SRGB linearization and un-linearization.

For example, this is main() in build.rs:

fn main() {
    generate_phf_of_svg_attributes();
    generate_srgb_tables();
}

And these are the first few lines of the first function:

fn generate_phf_of_svg_attributes() {
    let path = Path::new(&env::var("OUT_DIR").unwrap()).join("attributes-codegen.rs");
    let mut file = BufWriter::new(File::create(&path).unwrap());

    writeln!(&mut file, "#[repr(C)]").unwrap();
    // ... etc
}

Generate a path like $OUT_DIR/attributes-codegen.rs, create a file with that name, a BufWriter for the file, and start outputting code to it.

Similarly, the second function:

fn generate_srgb_tables() {
    let linearize_table = compute_table(linearize);
    let unlinearize_table = compute_table(unlinearize);

    let path = Path::new(&env::var("OUT_DIR").unwrap()).join("srgb-codegen.rs");
    let mut file = BufWriter::new(File::create(&path).unwrap());

    // ...

    print_table(&mut file, "LINEARIZE", &linearize_table);
    print_table(&mut file, "UNLINEARIZE", &unlinearize_table);
}

Compute two lookup tables, create a file named $OUT_DIR/srgb-codegen.rs, and write the lookup tables to the file.

Later in the actual librsvg code, the generated files get included into the source code using the include! macro. For example, here is where attributes-codegen.rs gets included:

// attributes.rs

extern crate phf; // crate for perfect hash function

// the generated file has the declaration for enum Attribute
include!(concat!(env!("OUT_DIR"), "/attributes-codegen.rs"));

One thing to note here is that the generated source files (attributes-codegen.rs, srgb-codegen.rs) get put in $OUT_DIR, a directory that Cargo creates for the compilation artifacts. The files do not get put into the original source directories with the rest of the library's code; the idea is to keep the source directories pristine.

At least in those terms, Meson and Cargo agree that source directories should be kept clean of autogenerated files.

The Code Generation section of Cargo's documentation agrees:

In general, build scripts should not modify any files outside of OUT_DIR. It may seem fine on the first blush, but it does cause problems when you use such crate as a dependency, because there's an implicit invariant that sources in .cargo/registry should be immutable. cargo won't allow such scripts when packaging.

Now, some things to note here:

  • Both the build.rs program and the actual library sources look at the $OUT_DIR environment variable for the location of the generated sources.

  • The Cargo docs say that if the code generator needs input files, it can look for them based on its current directory, which will be the toplevel of your source package (i.e. your toplevel Cargo.toml).

Meson hates this scheme of things. In particular, Meson is very systematic about where it finds input files and sources, and where things like code generators are allowed to place their output.

The way Meson communicates these paths to code generators is via command-line arguments to "custom targets". Here is an example that is easier to read than the documentation:

gen = find_program('generator.py')

outputs = custom_target('generated',
  output : ['foo.h', 'foo.c'],
  command : [gen, '@OUTDIR@'],
  ...
)

This defines a target named 'generated', which will use the generator.py program to output two files, foo.h and foo.c. That Python program will get called with @OUTDIR@ as a command-line argument; in effect, meson will call /full/path/to/generator.py @OUTDIR@ explicitly, without any magic passed through environment variables.

If this looks similar to what Cargo does above with build.rs, it's because it is similar. It's just that Meson gives a name to the concept of generating code at build time (Meson's name for this is a custom target), and provides a mechanism to say which program is the generator, which files it is expected to generate, and how to call the program with appropriate arguments to put files in the right place.

In contrast, Cargo assumes that all of that information can be inferred from an environment variable.

In addition, if the custom target takes other files as input (say, so it can call yacc my-grammar.y), the custom_target() command can take an input: argument. This way, Meson can add a dependency on those input files, so that the appropriate things will be rebuilt if the input files change.
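A sketch of what that might look like, using a hypothetical generate_parser.py wrapper script (the file names and the script are made up for the example):

gen_parser = find_program('generate_parser.py')

parser = custom_target('parser',
  input : 'my-grammar.y',
  output : ['parser.c', 'parser.h'],
  command : [gen_parser, '@INPUT@', '@OUTDIR@'],
)

Because the grammar file is listed as input, Meson will re-run the generator whenever my-grammar.y changes.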

Now, Cargo could very well provide a small utility crate that build scripts could use to figure out all that information. Meson would tell Cargo to use its scheme of things, and pass it down to build scripts via that utility crate. I.e. to have

// build.rs

extern crate cargo_high_level;

let output = Path::new(cargo_high_level::get_output_path()).join("codegen.rs");
//                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ this, instead of:

let output = Path::new(&env::var("OUT_DIR").unwrap()).join("codegen.rs");

// let the build system know about generated dependencies
cargo_high_level::add_output(output);

A similar mechanism could be used for the way Meson likes to pass command-line arguments to the programs that deal with custom targets.

Linking to C libraries on the system

Some Rust crates need to link to lower-level C libraries that actually do the work. For example, in gtk-rs, there are high-level binding crates called gtk, gdk, cairo, etc. These use low-level crates called gtk-sys, gdk-sys, cairo-sys. Those -sys crates are just direct wrappers on top of the C functions of the respective system libraries: gtk-sys makes almost every function in libgtk-3.so available as a Rust-callable function.

System libraries sometimes live in a well-known part of the filesystem (/usr/lib64, for example); other times, like in Windows and MacOS, they could be anywhere. To find that location plus other related metadata (include paths for C header files, library version), many system libraries use pkg-config. At the highest level, one can run pkg-config on the command line, or from build scripts, to query some things about libraries. For example:

# what's the system's installed version of GTK?
$ pkg-config --modversion gtk+-3.0
3.24.4

# what compiler flags would a C compiler need for GTK?
$ pkg-config --cflags gtk+-3.0
-pthread -I/usr/include/gtk-3.0 -I/usr/include/at-spi2-atk/2.0 -I/usr/include/at-spi-2.0 -I/usr/include/dbus-1.0 -I/usr/lib64/dbus-1.0/include -I/usr/include/gtk-3.0 -I/usr/include/gio-unix-2.0/ -I/usr/include/libxkbcommon -I/usr/include/wayland -I/usr/include/cairo -I/usr/include/pango-1.0 -I/usr/include/harfbuzz -I/usr/include/pango-1.0 -I/usr/include/fribidi -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libdrm -I/usr/include/libpng16 -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/uuid -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include

# and which libraries?
$ pkg-config --libs gtk+-3.0
-lgtk-3 -lgdk-3 -lpangocairo-1.0 -lpango-1.0 -latk-1.0 -lcairo-gobject -lcairo -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

There is a pkg-config crate which build.rs can use to call this, and communicate that information to Cargo. The example in the crate's documentation is for asking pkg-config for the foo package, with version at least 1.2.3:

extern crate pkg_config;

fn main() {
    pkg_config::Config::new().atleast_version("1.2.3").probe("foo").unwrap();
}

And the documentation says,

After running pkg-config all appropriate Cargo metadata will be printed on stdout if the search was successful.

Wait, what?

Indeed, printing specially-formatted stuff on stdout is how build.rs scripts communicate back to Cargo about their findings. To quote Cargo's docs on build scripts (the following is talking about the stdout of build.rs):

Any line that starts with cargo: is interpreted directly by Cargo. This line must be of the form cargo:key=value, like the examples below:

# specially recognized by Cargo
cargo:rustc-link-lib=static=foo
cargo:rustc-link-search=native=/path/to/foo
cargo:rustc-cfg=foo
cargo:rustc-env=FOO=bar

# arbitrary user-defined metadata
cargo:root=/path/to/foo
cargo:libdir=/path/to/foo/lib
cargo:include=/path/to/foo/include

One can use the stdout of a build.rs program to add additional command-line options for rustc, or set environment variables for it, or add library paths, or specific libraries.
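In practice that just means the build script prints those lines; a minimal (made-up) build.rs that tells Cargo where to find and how to link a bundled static library could look like this:

// build.rs
fn main() {
    // tell rustc where to look for the library...
    println!("cargo:rustc-link-search=native=/path/to/foo");
    // ...and which library to link statically
    println!("cargo:rustc-link-lib=static=foo");
}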

Meson hates this scheme of things. I suppose it would prefer to do the pkg-config calls itself, and then pass that information down to Cargo, you guessed it, via command-line options or something well-defined like that. Again, the example cargo_high_level crate I proposed above could be used to communicate this information from Meson to Cargo scripts. Meson also doesn't like this because it would prefer to know about pkg-config-based libraries in a declarative fashion, without having to run a random script like build.rs.

Building C code from Rust

Finally, some Rust crates build a bit of C code and then link that into the compiled Rust code. I have no experience with that, but the respective build scripts generally use the cc crate to call a C compiler and pass options to it conveniently. I suppose Meson would prefer to do this instead, or at least to have a high-level way of passing down information to Cargo.

In effect, Meson has to be in charge of picking the C compiler. Having the thing-to-be-built pick it on its own has caused big problems in the past: GObject-Introspection made the same mistake years ago when it decided to use distutils to detect the C compiler; gtk-doc did as well. When those tools are used, we still deal with problems with cross-compilation and with systems that have more than one C compiler installed.

Snarky comments about the Unix philosophy

If part of the Unix philosophy is that shit can be glued together with environment variables and stringly-typed stdout... it's a pretty bad philosophy. All the cases above boil down to having a well-defined, more or less strongly-typed way to pass information between programs, instead of shaking the proverbial tree of the filesystem and the environment and seeing if something usable falls down.

Would we really have to modify all build.rs scripts for this?

Probably. Why not? Meson already has a lot of very well-structured knowledge of how to deal with multi-platform compilation and installation. Re-creating this knowledge in ad-hoc ways in build.rs is not very pleasant or maintainable.

Related work

Christian Kellner: Thunderclap and Linux

Wed, 27/02/2019 - 7:05pm

Thunderbolt security has been in the news recently: researchers presented a set of new vulnerabilities involving Thunderbolt which they named Thunderclap[1]. The authors built a "fake" network card[2], performed various DMA attacks, and were able to tamper with memory regions that their network card should have had no access to whatsoever. In some ways this is not all that surprising, because the foundation of Thunderbolt is PCIe tunnels to external hardware, and one of the reasons PCIe is fast is that it can do direct memory access (DMA).

Use boltctl domains to inspect the current security level.

The current primary defense against DMA attacks for Thunderbolt 3 are the security levels: if enabled (the default on most systems), they give the software the ability to decide on a per-device level whether to allow or deny PCIe tunnels (and with that, potentially, access to all the memory via DMA)[3]. While not protecting from DMA attacks per se, this protects from some — maybe the most — prominent threat scenarios[4]: 1) somebody plugging an evil device into your computer while you are away, or 2) you having to plug a device you don't trust into your computer, e.g. a projector at a conference. On GNU/Linux, boltd will authorize a plugged-in device only if an admin user is logged in and the screen is unlocked. For untrusted environments, the authorization by boltd can be disabled entirely via the GNOME settings panel, e.g. when you go to a conference. The toggle is called "Direct Access" (see screenshot below).

The panel has a global switch to toggle all authorization performed by bolt.

This is not the full story though, because there is a second way to prevent DMA attacks: utilizing the input–output memory management unit (IOMMU). The general idea is to assign a specific memory region to a device, which is then the only area the device can directly access. Mika Westerberg from Intel has worked on kernel patches (they will be in 5.0) to use firmware-supported IOMMU virtualization (Intel calls this VT-d), which should make DMA attacks harder. Having said that, Mika pointed out (thanks!) that there are still two possible issues mentioned in the paper that are not yet addressed (I paraphrase Mika): 1) the IOMMU window size granularity is such that it may open a "too big" IOMMU window, and 2) IOMMU mappings are not immediately torn down, leaving memory exposed for some time. Intel (Lu Baolu) is working on patches to mitigate those issues. On the bolt side of things, I have recently merged MR!137 which adds the necessary IOMMU plumbing. The big caveat for this is that it needs hardware/firmware support, and for most (all?) currently shipping systems we are out of luck.

Security levels and IOMMU-based DMA protection are orthogonal, and the way support is implemented in Linux 5.0, userspace (bolt) is still required to authorize devices (if the security level demands it, i.e. user or secure). This means that you still have the ability to globally disable authorization of any device, e.g. for when you go to a conference.

Thunderbolt 3 Security Levels in the BIOS: Enable at least "User Authorization"

To sum it all up: for now, make sure you have Thunderbolt 3 security enabled in the BIOS, and whenever you are in an untrusted environment (e.g. a conference), disable device authorization completely (via the Settings panel). In the future, with Linux 5.0 and new hardware, Linux will also get IOMMU-based DMA protection that should greatly reduce the risk of DMA attacks, and work is underway to plug the remaining known issues.

Footnotes:
  1. Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals A. Theodore Markettos, Colin Rothwell, Brett F. Gutstein, Allison Pearce, Peter G. Neumann, Simon W. Moore, Robert N. M. Watson. Proceedings of the Network and Distributed Systems Security Symposium (NDSS), 24-27 February 2019, San Diego, USA.
  2. How they did it is pretty cool: "[W]e extracted a software model of an Intel E1000 from the QEMU full-system emulator and ran it on an FPGA". More details in the paper on page 5. The whole platform is also available at github.com/thunderclap-io
  3. The paper states that the situation for Linux is: "patches for approval of hotplug devices have been produced by Intel and distributions are beginning to implement user interfaces." So this is not super up to date (long paper review process, I guess). The current situation is: kernel-level support landed in 4.13; bolt was included in Fedora 27, RHEL 7.6 and is now included in many distributions. IOMMU support will land in 5.0 (but see the text for remaining issues).
  4. Assuming that you trust manufacturers of hardware (attacks can also happen at the "supply chain").

Alexandru Băluț: FOSDEM impressions

Wed, 27/02/2019 - 10:50am
Earlier this month I went to FOSDEM in Brussels for the first time. Below is how I remember it. Overall it was great.

I quickly learned to look for a different room when there was a queue at the door. But this once I decided to wait in the queue, hoping somebody would get out so I could enter. The person in front of me and I got close to the door, but unfortunately we did not get in. Luckily the next speaker and his colleague were also waiting, so we got a 1-on-1 (or more accurately 2-on-2) on quantum computing. That was quite cool.

I was hoping to talk more with the GNOME devs, about Pitivi and things. At the GNOME stand it might have been possible, but it was not ideal. The stand table was between two others and it was pretty crowded. I think with a bit of space between the stands, the people manning them would be able to step in front of the table to talk with interested visitors, which would be nicer. These over-the-table discussions are not very inspiring.

It was much better for KDE, who had their stand at the end of the row, with space between their table and the neighboring table to fit a person demoing stuff. Speaking of which, I stumbled upon the Kdenlive demo; it seems the team also cares about stability, like we do.

At the GSoC stand I got to talk with Stephanie about applying. BTW, we just found out we're accepted, so yay! We were actually in quite a good position, since in case we were not accepted, we had a second chance by taking part under GNOME's umbrella. If your project is using GNOME technologies, make sure to talk with the GNOME GSoC admins about taking part in GSoC under them!

Saturday evening I went first to the GSoC meetup, where I was hoping to talk with many people, but I only got to talk with four. Always great to discuss what people are hacking on! In the following days I got an email with the attendee list and noticed two other GNOME people, but unfortunately I did not get to meet them.

The GNOME beers event later was the best, since I got to meet real contributors. That is, after I drank a long beer with some GNOME fans, which was cool, but just not what I had planned for, as I was expecting to meet only developers there. Then, after many lovely cat stories (or was it a single one?), I called it a Caturday!

Sunday was a bit more relaxed.

I don't think I'll go to FOSDEM again very soon; it's very inefficient as I see it. Still, it was very cool to attend one. Looking forward to attending the much smaller and more focused GUADEC.

Jussi Pakkanen: Could Linux be made "the best development environment" for games?

Tue, 26/02/2019 - 4:12pm
It is fairly well established that Linux is not the #1 game development platform at this point in time. It is also unlikely to be the most popular one any time soon due to reasons of economics and inertia. A more interesting question, then, would be can it be made the best one? The one developers favour over all others? The one that is so smooth and where things work so well together that developers feel comfortable and productive in it? So productive that whenever they use some other environment, they get aggravated by being forced to use such a substandard platform?

Maybe.
The requirements

In this context we use "game development" as a catch-all term for software development that has the following properties:
  1. The code base is huge, millions or tens of millions lines of code.
  2. Non-code assets are up to tens of gigabytes in size
  3. A program, once built, needs to be tested on a bunch of various devices (for games, using different 3D graphics cards, amounts of memory, processors etc).
What we already have

A combination of regular Linux userland and Flatpak already provides a lot. Perhaps the most unique feature is that you can get the full source code of everything in the system, all the way down to the graphics cards' device drivers (certain proprietary hardware vendors notwithstanding). There is no need to guess what is happening inside the graphics stack; you can just add breakpoints and step inside it with full debug info.
Linux as a platform is also faster than competing game development systems at most things. Process invocation is faster, file systems are faster, compilations are faster, installing dependencies is faster. These are the sorts of small gains that translate to better productivity and developer happiness.
Flatpak as a deployment mechanism is also really nice.

What needs to be improved?

Many tools in the Linux development chain assume (directly or indirectly) that doing something from scratch for every change is "good enough". Once you get to a large enough scale this no longer works. As an example, flatpak-builder builds its packages by copying the entire source tree inside the build container. If your repository is in the gigabyte range this does not work; instead something like bind mounting should be used. (AFAIK GNOME Builder does something like this already.) Basically every operation needs to be O(delta) instead of O(whole_shebang):
  • Any built program that runs on the developer's machine must be immediately deployable on test machines without needing to do a rebuild on a centralised build server.
  • Code rebuilds must be minimal.
  • Installs must skip files that have not changed since the last install.
  • Package creation must only account for changed files.
  • All file transfers must be delta based. Flatpak already does this for package downloads but building the repo seems to take a while.
A simple rule of thumb for this is that changing one texture in a game and deploying the result on a remote machine should not take more than 2x the amount of time it would take to transfer the file directly over with scp.

Other tooling support

Obviously there needs to be native support for distributed builds: either distcc, IceCream or something fancier. But even more important is great debugging support.
By default the system should store full debug info and corresponding source code. It should also log all core dumps. Pressing one button should then open up the core file in an IDE with up to date source code available and ready to debug. This same functionality should also be usable for debugging crashes in the field. No crash should go unstored (assuming that there are no privacy issues at play).
Perhaps the hardest part is the tooling for non-coders. It should be possible to create new game bundles with new assets without needing to touch any "dev" tools, even when running a proprietary OS. For example there could be a web service where you could do things like "create a new game install on machine X and replace model file Y with this uploaded file Z". Preferably this should be doable directly from the graphics application via some sort of plugin. Does something like this already exist?

Other platforms have some of these things built in, and some can be added with third-party products. There are probably various implementations of these ideas behind the closed doors of many current game development studios. AFAICT there does not exist a fully open product that would combine all of these in a single integrated whole. Creating that would take a fair bit of work, but once done we could say that the simplest way to set up the infrastructure to run a game studio is to get a bunch of hardware, open a terminal and type:
sudo apt install gamestudio

Andres Gomez: Review of Igalia’s Graphics activities (2018)

Mon, 25/02/2019 - 3:51pm

This is the first report about Igalia’s activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

GL_ARB_gl_spirv and GL_ARB_spirv_extensions

GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. In the case of GL_ARB_spirv_extensions, it provides a mechanism by which an OpenGL implementation would be able to announce which particular SPIR-V extensions it supports, which is a nice complement to GL_ARB_gl_spirv.

As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

Although Igalia picked up the already started implementation of these extensions in Mesa back in 2017, 2018 was the year in which we put in a great deal of work to provide the needed push to have all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel's i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa during the beginning of 2019.

Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:

ETC2/EAC

The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations of version 4.3 and upwards, and OpenGL ES implementations of version 3.0 and upwards, must support them in order to be conformant with the standard.

Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don’t have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel's i965 driver for gen7+, while gen8+ was already complying through its HW support. As we were writing this entry, the work finally landed.

VK_KHR_16bit_storage

Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel’s Anvil driver.

This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.

shaderInt16

Igalia added Vulkan's optional feature shaderInt16 to Intel's Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don't need a full 32-bit range. However, not all HW platforms have native support for it, still needing to run in 32-bit and, hence, not benefiting from this feature. Such is the case for operations associated with integer division on Intel platforms.

shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

SPV_KHR_8bit_storage and VK_KHR_8bit_storage

SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

Igalia implemented its support into Intel’s Anvil driver.

VK_KHR_shader_float16_int8

Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel’s Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

In theory, applications that do not need the range and precision of regular 32-bit floating point and integers can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

The review process for the implementation is still ongoing and is on its way to land in Mesa.

VK_KHR_shader_float_controls

VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation’s default floating point behavior for rounding modes, denormals, signed zero and infinity.

Igalia has coded its support into Intel’s Anvil driver and it is currently under review before being merged into Mesa.

VkRunner

VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it feasible to test scripts as similar as possible to Piglit’s shader_test format.

Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. Soon it was clear that it was useful way beyond the implementation of this specific extension, as a generic way of testing SPIR-V shaders.

Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

Neil Roberts introduced VkRunner at XDC 2018. This is his talk:

freedreno

During 2018, Igalia has also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we have tackled multiple bugs identified through the usual testing suites used in graphics driver development: Piglit and VK-GL-CTS.

Khronos Conformance

The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do, and do it consistently across implementations from the same or different vendors.

This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

In 2018, Igalia continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered, we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis, or even brought a discussion forward when the source of the problem was incomplete, ambiguous or incorrect spec language.

The most important result out of this significant dedication has been successfully passing conformance applications.

OpenGL 4.6

Igalia helped make Intel's i965 driver conformant with OpenGL 4.6 from day zero. This was a significant achievement since, besides Intel's Mesa driver, only nVIDIA managed to do this.

Igalia specifically contributed to achieving the OpenGL 4.6 milestone by providing the GL_ARB_gl_spirv implementation.

Vulkan 1.1

Igalia also helped make Intel's Anvil driver conformant with Vulkan 1.1 from day zero.

Igalia specifically contributed to achieving the Vulkan 1.1 milestone by providing the VK_KHR_16bit_storage implementation.

Mesa Releases

Igalia continued the work it had already been carrying out in Mesa's Release Team throughout 2018. This effort involved continuous dedication to tracking the general status of Mesa against the usual test suites and benchmarks, but also reacting quickly to detected regressions, especially coordinating with the Mesa developers and the distribution packagers.

The work was obviously visible in the multiple bugfix releases, as well as in doing the branching and creating a feature release.

CI

Continuous Integration is a must in any serious SW project. In the case of API implementations it is even critical since there are many important variables that need to be controlled to avoid regressions and track the progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

Igalia has kept up a sustained effort to keep the Mesa (and Piglit) CI integrations in good health, with an eye on reported regressions so we could act on them immediately. This has been a key tool for our work around Mesa releases, and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start their migration to GitLab.

This work, along with the one done with the Mesa releases, led to a shared presentation, given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

XDC 2018

2018 was the year that saw A Coruña hosting the X.Org Developer’s Conference (XDC) and Igalia as Platinum Sponsor.

The conference was organized by GPUL (Galician Linux User and Developer Group) together with University of A Coruña, Igalia and, of course, the X.Org Foundation.

Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which benefited greatly from our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

Other activities

Conferences

As usual, Igalia was present in many graphics related conferences during the year:

New Igalians in the team

Igalia’s graphics team kept growing. Two new developers joined us in 2018:

  • Hyunjun Ko is an experienced Igalian with a strong background in multimedia, specifically GStreamer and Intel's VAAPI. He is now contributing his impressive expertise to our Graphics team.
  • Arcady Goldmints-Orlov is the latest addition to the team. His previous expertise as a graphics developer working with nVIDIA GPUs fits perfectly with the kind of work we are currently pushing at Igalia.
Conclusion

Thank you for reading this blog post and we look forward to more work on graphics in 2019!

Jussi Pakkanen: What if everything you know is wrong?

Sat, 23/02/2019 - 2:33pm
An interesting intellectual design challenge is to take a working thing (a library, an architecture, etc.) and see what would happen if you reimplemented it in the exact opposite way. Not because you'd use the end result anywhere, but just to see if you can learn something new. Or, in other words:
As an example, let's apply this approach to C++'s fixed-size array object, std::array. Some of its core design principles include:
  1. Indexing is by default unsafe, user is responsible for providing valid indexes.
  2. Errors are reported via exceptions.
  3. Iterators may be invalid and invoking invalid iterators is allowed but UB.
  4. Indexing must be exactly as fast as accessing a C array.
Inverting all of these gives us the following design principles:
  1. Out of bound accesses must be impossible.
  2. No exceptions (assuming contained objects do not throw exceptions).
  3. All iterator dereferences are guarded and may never lead to bad accesses.
  4. It's ok to use some (but not much) processor time to ensure safety. Aim for zero overhead when possible.
So what does it look like?

An experimental PoC implementation can be found in this Github repo. Note that the code is intentionally unpolished and there are silly choices made. That is totally fine; it's not meant for actual use, only to explore the problem space.
The most important operation for an array type is indexing. It must work for all index values, even for those out of bounds. As no exceptions are allowed, the natural way to make this work is to return a Maybe type. This could be a std::optional<std::reference_wrapper<T>>, but for reasons that will become apparent later, we use a custom type. The outcome for this is kind of nice, allowing you to do things like (assuming an array of size 20):
assert(x[0]);       // check if index is valid
assert(!x[20]);     // OOB indexes are invalid
*x[0] = 5;          // assignment
assert(*x[0] == 5); // dereference
int i = *x[20];     // aborts program
The overall usability is kind of nice. This is similar to languages that have int? variables. The biggest problem here is that there is no way to prevent dereferencing an invalid maybe object, leading to termination. A typical bug would look like this:
if(maybe) {
  *maybe = 3;
} else {
  *maybe = 4; // Legal code but should not be.
}
There are at least three possible architectural ways to solve this:
  1. Pattern matching (a switch/case on the object type) with language enforcement.
  2. A language construct for "if attempting to use an invalid maybe object, exit the current function with an error". There have been talks of a try statement that would do something like this.
  3. Maybes can not be dereferenced, only called with a method like visit(if_valid_function, if_not_valid_function).
#3 can be done today but is tedious and still does not permit automatically returning an error from the current function block.

Iteration

Creating a safe iterator is fairly simple. This iterator has a pointer to the original object and an integer offset. Dereferencing it calls the indexing operation and returns the maybe to the caller. This works fine until you test it with std::sort and after a lot of debugging find out that the implementation has a code block looking like this:
T tmp = std::move(a);
a = std::move(b);
b = std::move(tmp);
Even if you have swap defined for your objects, it will call this at some point. The problem here is that since the temporary does not point to any existing object, it can not store the value of the first move. There does not seem to be a good solution for this. Swap-only types might work, but such a type can not be defined with C++'s type system (or at least std::sort can not handle such types). The solution used in this example is that if a maybe object is created so that it does not point to any existing object, it stores objects inside itself. This works (for this use case) but feels a bit iffy.
Another problem this exposes is that the STL does not really work with these kinds of types. The comparison function returns a boolean, but if you compare (for whatever reason) invalid iterators and don't use exceptions there should be three different return values: true, false and erroneous_comparison. That is, an std::optional<bool>. Fixing this properly would mean changing std::sort to handle failing comparisons.

But what about performance?

For something like sorting, checking every access might cause a noticeable performance hit. If you do something like this:
std::sort(coll.begin(), coll.end())
it is fairly easy to verify that the range is valid and thus all accesses will be valid (assuming no bugs in the sort implementation). It would be nice to be able to opt out of range checks in these cases. Here's one way of doing it:
std::sort(coll.unsafe().begin(), coll.unsafe().end())
Here unsafe is a helper class that returns raw pointers to the underlying object. In this way individual accesses (which are bug prone) are guarded but larger batch operations (which are safer) can optionally request a faster code path. The unsafe request is immediately apparent and easily greppable.

Is this fully safe?

No. There are at least three different ways of getting an invalid reference.
  1. Deleting the main object while iterators/maybes are outstanding. Would require tracking all outstanding objects. Probably too heavyweight. In Rust this is handled by the borrow checker at compile time.
  2. Unallocating the memory backing the object without running its destructor. This is actually legal. Can't be guarded against.
  3. Changing the object's backing memory to read only with a syscall. Can't be guarded against.

Tim Janik: GSlice considerations and possible improvements

Wed, 20/02/2019 - 1:50pm
The paper Mesh: Compacting Memory Management for C/C++ Applications is about moving memory allocations for compaction, even though the memory pointers are exposed. The idea is to merge allocation blocks from different pages that are not overlapping at page offsets, and then letting multiple virtual…