Planet GNOME

Sebastian Dröge: Improving GStreamer performance on a high number of network streams by sharing threads between elements with Rust’s tokio crate

Thu, 05/04/2018 - 5:22 PM

For one of our customers at Centricular we were working on a quite interesting project. Their use-case was basically to receive an as-high-as-possible number of audio RTP streams over UDP, transcode them, and then send them out via UDP again. Due to how GStreamer usually works, they were running into some performance issues.

This blog post will describe the first set of improvements that were implemented for this use-case, together with a minimal benchmark and the results. My colleague Mathieu will follow up with one or two other blog posts with the other improvements and a more full-featured benchmark.

The short version is that CPU usage decreased by about 65-75%, allowing 3-4x more streams at the same CPU usage. Parallelization also works better and the use of different CPU cores is more controllable, allowing for better scalability. And a fixed but configurable number of threads is used, independent of the number of streams.

The code for this blog post can be found here.

Table of Contents
  1. GStreamer & Threads
  2. Thread-Sharing GStreamer Elements
  3. Available Elements
  4. Little Benchmark
  5. Conclusion
GStreamer & Threads

In GStreamer, by default each source runs in its own OS thread. Additionally, for receiving/sending RTP, there will be another thread in the RTP jitterbuffer, yet another thread for receiving RTCP (another source) and a last thread for sending RTCP at the right times. And RTCP has to be received and sent for both the receiver and sender side parts of the pipeline, so the number of threads doubles. In sum this gives at least 1 + 1 + (1 + 1) * 2 = 6 threads per RTP stream in this scenario. In a normal audio scenario, there will be one packet received/sent e.g. every 20ms on each stream, and every now and then an RTCP packet. So most of the time all these threads are only waiting.

Apart from the obvious waste of OS resources (1000 streams would mean 6000 threads), this also brings down performance because threads are constantly being woken up, which means that context switches have to happen basically all the time.

To solve this we implemented a mechanism to share threads, and as a result we now have a fixed but configurable number of threads that is independent of the number of streams. We can run e.g. 500 streams just fine on a single thread on a single core, which was completely impossible before. In addition we also did some work to reduce the number of allocations for each packet, so that after startup no additional allocations happen per packet anymore for buffers. See Mathieu’s upcoming blog post for details.

In this blog post, I’m going to write about a generic mechanism for sources, queues and similar elements to share their threads between each other. For the RTP related bits (RTP jitterbuffer and RTCP timer) this was not used due to reuse of existing C codebases.

Thread-Sharing GStreamer Elements

The code in question can be found here; a small benchmark is in the examples directory and it is going to be used for the results later. A full-featured benchmark will come in Mathieu’s blog post.

This is a new GStreamer plugin, written in Rust around the Tokio crate, which provides asynchronous IO and, more generally, a task scheduler.

While this could certainly also have been written in C around something like libuv, doing this kind of work in Rust is simply more productive and fun due to its safety guarantees and the strong type system, which definitely reduced the amount of debugging a lot. In addition, “modern” language features like closures make working with futures much more ergonomic.

When using these elements it is important to have full control over the pipeline and its elements, and the dataflow inside the pipeline has to be carefully considered to properly configure how to share threads. For example the following two restrictions should be kept in mind all the time:

  1. Downstream of such an element, the streaming thread must never ever block for considerable amounts of time. Otherwise all other elements inside the same thread-group would be blocked too, even if they could do work right now
  2. This generally all works better in live pipelines, where media is produced in real-time and not as fast as possible
Available Elements

So this repository currently contains the generic infrastructure (see the src/iocontext.rs source file) and a couple of elements:

  • a UDP source: ts-udpsrc, a replacement for udpsrc
  • an app source: ts-appsrc, a replacement for appsrc to inject packets into the pipeline from the application
  • a queue: ts-queue, a replacement for queue that is useful for adding buffering to a pipeline part. The upstream side of the queue will block if not called from another thread-sharing element, but if called from another thread-sharing element it will pause the current task asynchronously, that is, stop the upstream task from producing more data.
  • a proxysink/src element: ts-proxysrc, ts-proxysink, replacements for proxysink/proxysrc for connecting two pipelines with each other. This basically works like the queue, but split into two elements.
  • a tone generator source around spandsp: ts-tonesrc, a replacement for tonegeneratesrc. This also contains some minimal FFI bindings for that part of the spandsp C library.

All these elements have more or less the same API as their non-thread-sharing counterparts.

API-wise, each of these elements has a set of properties for controlling how it is sharing threads with other elements, and with which elements:

  • context: A string that defines in which group this element is. All elements with the same context are running on the same thread or group of threads (see the example command after this list)
  • context-threads: Number of threads to use in this context. -1 means exactly one thread, 1 and above uses N+1 threads (1 thread for polling fds, N worker threads) and 0 sets N to the number of available CPU cores. As long as no considerable work is done in these threads, -1 has proven to be the most efficient. See also this tokio GitHub issue
  • context-wait: Number of milliseconds that the threads will wait on each iteration. This allows reducing CPU usage even further by handling all events/packets that arrived during that timespan at once, instead of waking up the thread every time a little event happens, thus reducing context switches again
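
As a rough illustration of how these properties fit together, here is a minimal sketch of a receive pipeline (not taken from the repository: the port property and the context name "rtp-recv" are assumptions, only the thread-sharing properties are the ones described above). Any other element configured with the same context string would share the same thread:

gst-launch-1.0 ts-udpsrc port=5004 context=rtp-recv context-threads=-1 context-wait=20 ! fakesink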

The elements all push data downstream from a tokio thread whenever data is available, assuming that downstream does not block. If downstream is another thread-sharing element and it would have to block (e.g. a full queue), it instead returns a new future to upstream so that upstream can asynchronously wait on that future before producing more output. This way, back-pressure is implemented between different GStreamer elements without ever blocking any of the tokio threads. All this is implemented around the normal GStreamer data-flow mechanisms; there is no “tokio fast-path” between elements.

Little Benchmark

As mentioned above, there’s a small benchmark application in the examples directory. This basically sets up a configurable number of streams and directly connects them to a fakesink, throwing away all packets. Additionally there is another thread that is sending all these packets. As such, this is really the most basic benchmark and not very realistic but nonetheless it shows the same performance improvement as the real application. Again, see Mathieu’s upcoming blog post for a more realistic and complete benchmark.

When running it, make sure that your user can create enough fds. The benchmark will just abort if not enough fds can be allocated. You can control this with ulimit -n SOME_NUMBER, and allowing a couple of thousand is generally a good idea. The benchmarks below were run with 10000.
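
For example, to match the setting used below, raise the limit in the shell that will run the benchmark (the change only applies to that shell session and its children):

ulimit -n 10000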

After running cargo build --release to build the plugin itself, you can run the benchmark with:

cargo run --release --example udpsrc-benchmark -- 1000 ts-udpsrc -1 1 20

and in another shell the UDP sender with

cargo run --release --example udpsrc-benchmark-sender -- 1000

This runs 1000 streams, uses ts-udpsrc (the alternative would be udpsrc), configures exactly one thread (-1), 1 context, and a wait time of 20ms. See above for what these settings mean. You can check CPU usage with e.g. top. Testing was done on an Intel i7-4790K, with Rust 1.25 and GStreamer 1.14. One packet is sent every 20ms for each stream.
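
For comparison with the old element, the same benchmark can presumably be run by swapping in the plain element name, as in the first results table below (the thread, context and wait arguments should not matter in that case, since udpsrc spawns one thread per stream anyway):

cargo run --release --example udpsrc-benchmark -- 1000 udpsrc -1 1 20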

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     1000     1000     x         x     44%
ts-udpsrc  1000     -1       1         0     18%
ts-udpsrc  1000     -1       1         20    13%
ts-udpsrc  1000     -1       2         20    15%
ts-udpsrc  1000     2        1         20    16%
ts-udpsrc  1000     2        2         20    27%

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     2000     2000     x         x     95%
ts-udpsrc  2000     -1       1         20    29%
ts-udpsrc  2000     -1       2         20    31%

Source     Streams  Threads  Contexts  Wait  CPU
ts-udpsrc  3000     -1       1         20    36%
ts-udpsrc  3000     -1       2         20    47%

Results for 3000 streams with the old udpsrc are not included, as starting up that many threads takes too long.

The best configuration is apparently a single thread per context (see this tokio GitHub issue) and waiting 20ms on every iteration. Compared to the old udpsrc, CPU usage is about one third in that setting, and generally it seems to parallelize well. It’s not clear to me why the last test needs 11% more CPU with two contexts, while in every other test the number of contexts does not really make a difference, nor does it for that many streams in the real test case.

The waiting does not reduce CPU usage a lot in this benchmark, but on the real test-case it does. The reason is most likely that this benchmark basically sends all packets at once, then waits for the remaining time, then sends the next packets.

Take these numbers with caution; the real test case in Mathieu’s blog post will show the improvements in the bigger picture, where CPU usage was generally about a quarter and parallelization was almost perfect when increasing the number of contexts.

Conclusion

Generally this was a fun exercise and we’re quite happy with the results, especially the real results. It took me some time to understand how tokio works internally so that I could implement all kinds of customizations on top of it, but for normal usage of tokio that should not be required, and the overall design makes a lot of sense to me, as does the way futures are implemented in Rust. It requires some learning and understanding of how exactly the API can be used and behaves, but once that point is reached it seems like a very productive and performant solution for asynchronous IO. And modelling asynchronous IO problems with Rust-style futures seems a nice and intuitive fit.

The performance measurements also showed that GStreamer’s default usage of threads is not always optimal, and a model like the one in upipe or pipewire (or rather SPA) can provide better performance. But as this also shows, it is possible to implement something like this on top of GStreamer, and for the common case, using threads the way GStreamer does reduces the cognitive load on the developer a lot.

For a future version of GStreamer, I don’t think we should make the threading “manual” like in these two other projects, but instead provide some API additions that make it nicer to implement thread-sharing elements and to add ways in the GStreamer core to make streaming threads non-blocking. All this can be implemented already, but it could be nicer.

All this “only” improved the number of threads, and thus the threading and context switching overhead. Many other optimizations in other areas are still possible on top of this, for example optimizing receive performance and reducing the number of memory copies inside the pipeline even further. If that’s something you would be interested in, feel free to get in touch.

And with that: Read Mathieu’s upcoming blog posts about the other parts, RTP jitterbuffer / RTCP timer thread sharing, and no allocations, and the full benchmark.

Miguel de Icaza: Fixing Screenshots in MacOS

Wed, 04/04/2018 - 7:50 PM

This was driving me insane. For years, I have been using Command-Shift-4 to take screenshots on my Mac. When you hit that keypress, you get to select a region of the screen, and the result gets placed in your ~/Desktop directory.

Recently, the feature stopped working.

I first blamed Dropbox settings, but that was not it.

I read every article on the internet about changing the default location and restarting the SystemUIServer.

The screencapture command line tool worked, but not the hotkey.
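
For reference, the kind of direct invocation I mean is something like the following (interactive region selection; the output path is just an example), and that kept producing the file just fine:

screencapture -i ~/Desktop/shot.png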

Many reboots later, I disabled System Integrity Protection so I could use iosnoop and dtruss to figure out why screencapture was not logging. I was looking at the logs right there, and saw where things were different, but could not figure out what was wrong.

Then another one of my Macs got infected. So now I had two Mac laptops that could not take screenshots.

And then I realized what was going on.

When you trigger Command-Shift-4, the TouchBar lights up and lets you customize how you take the screenshot, like this:

And if you scroll it, you get these other options:

And I had recently used these settings.

If you change your default here, it will be preserved, so even if the shortcut is Command-Shift-4 for take-screenshot-and-save-in-file, if you use the TouchBar to make a change, this will override any future uses of the command.

Neil McGovern: ED Update – Week 14

Tue, 03/04/2018 - 3:22 PM

Last weekend, I was at LibrePlanet 2018, the FSF’s annual conference where I gave a talk about Free Software desktops and their continued importance. The videos are currently being uploaded, and there were some really interesting talks on a wide range of subjects. One particular highlight for me was that Karen Sandler (Software Freedom Conservancy ED, and former GNOME ED) won the Award for the Advancement of Free Software, which was very highly deserved. Additionally, the Award for Projects of Social Benefit went to Public Lab, who had a very interesting talk on attracting newcomers and building communities. They advocated the use of first-timers-only as a way to help introduce people to the project. It was good to catch up with GNOMErs and various people in the wider community.

I arrived a day early into Boston, as Deb Nicholson had kindly helped organise a SpinachCon. The idea behind these is to do some user testing and see actual people using GNOME. We were also accompanied by Dataverse and Debian. It was interesting to watch people try and accomplish some tasks (like “Set a wallpaper” and “start a screen recording”) and see what happens. This is probably worth a blog post all on its own, so I’ll write that up separately. For those who want a sneak peek, it wasn’t just usability improvements that could come out of it, but we discovered a couple of bugs as well.

Apart from that, both Sriram Ramkrishna and I have been added as mods of reddit.com/r/gnome to help out there, and I gave a wide-ranging interview for Destination Linux!

Richard Hughes: The LVFS CDN will change soon

Tue, 03/04/2018 - 2:50 PM

tl;dr: If you have https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz in /etc/fwupd/remotes.d/lvfs.conf then you need to nag your distribution to update the fwupd package to 1.0.6.

Slightly longer version:

The current CDN (~$100/month) is kindly sponsored by Amazon, but that won’t last forever and the donations I get for the LVFS service don’t cover the cost of using S3. Long term we are switching to a ‘dumb’ provider (currently BunnyCDN) which works out to about 1/10th of the cost — which is covered by the existing kind donations. We’ve switched to use a new CNAME to make switching CDN providers easy in the future, which means this should be the only time this changes client-side.

If you want to patch older versions of fwupd, you can apply this commit.

Jordan Petridis: Flatpak builds in the CI

Mon, 02/04/2018 - 7:08 PM

This is a follow-up to Carlos Soriano’s blog post about the new GNOME workflow that has emerged following the transition to gitlab.gnome.org. The post is pretty damn good and if you haven’t read it already you should. In this post I will walk through setting up the Flatpak build and test job that runs on the nautilus CI. The majority of the work on this was done by Carlos Soriano and Ernestas Kulik.

Let’s start by defining what we want to accomplish. First of all, we want to ensure that every commit will be buildable in a clean environment and against a Flatpak runtime. Second, we want to ensure that each project’s test suite will be run and pass. After these succeed we want to be able to export the resulting Flatpak to install and/or test it locally. Lastly, we don’t want to waste the precious time of the shared CI runners of other projects, so we want to utilize Flatpak’s ostree artifacts for caching.

To summarize we want to achieve the following:

  1. Build the project
  2. Run the Test-Suite
  3. Create a Flatpak package/bundle and export it
  4. Use a caching mechanism to reduce build times
Building the project

If your Flatpak manifest targets the gnome-3.26/3.28 or Freedesktop-1.6 runtime you can use these container images, provided by the Flatpak project directly. They are a good fit for your stable release branches too. The nautilus master branch though, as most GNOME projects, targets the GNOME Nightly runtime. I created a custom image using the same process with which the stable 3.26/3.28 images are built. It will be updated every day, an hour after the new Nightly runtime is composed. You can use it by changing the image: key in the .gitlab-ci.yml to point to registry.gitlab.com/alatiera/gnome-nightly-oci/gnome-master:latest. Currently we invoke flatpak-builder manually to build the resulting Flatpak; the reason is that the manifest always points at the GNOME/nautilus master branch, which would ignore the fork/branch we actually want to build. That’s why we do the following in nautilus to sidestep that.

script:
  - flatpak-builder --stop-at=nautilus app build-aux/flatpak/org.gnome.Nautilus.json
  # Make sure to keep this in sync with the Flatpak manifest, all arguments
  # are passed except the config-args because we build it ourselves
  - flatpak build app meson --prefix=/app --libdir=/app/lib -Dprofile=development -Dtests=all _build
  - flatpak build app ninja -C _build install
  - flatpak-builder --finish-only --repo=repo app build-aux/flatpak/org.gnome.Nautilus.json

This builds all the modules up to nautilus. Then we take over and build nautilus ourselves from the local checkout. Finally we invoke the manifest again to finish the build.

It works with SDK extensions too

There’s a small caveat here: I was only able to use SDK extensions with flatpak-builder. It’s probably possible to use flatpak build too, but my knowledge of Flatpak is limited. If you know of a better way to do it, please let me know!

I’ve created a Rust SDK image for my own use. Here is an example of how it’s used. If you need any other SDK extension, like C#, open an issue in this repo or, even better, make an MR!

Running tests inside the Flatpak environment

In order to run the nautilus test suite we will add the following line:

flatpak build app ninja -C _build test

If your test suite requires a display, you can use Xvfb. Since that’s quite common for GNOME apps, I’ve included it in the gnome-nightly container image directly, so you won’t have to install it. You can just prefix the above command with xvfb-run and its arguments:

xvfb-run -a -s "-screen 0 1024x768x24" flatpak build app ninja -C _build test

Thanks to Emmanuele Bassi for showing me how to get the display tests up and running with Xvfb.

Retrieving a Flatpak package

To export a Flatpak bundle, named nautilus-dev.flatpak, we will add the following line:

flatpak build-bundle repo nautilus-dev.flatpak --runtime-repo=https://sdk.gnome.org/gnome-nightly.flatpakrepo org.gnome.NautilusDevel

Then we will add an artifacts: block in order to export our Flatpak package and be able to download it locally.

artifacts:
  paths:
    - nautilus-dev.flatpak
  expire_in: 2 days

After that there should be a “Browse” and a “Download” button in the job’s logs, from where you can download the Flatpak bundle. You can either get a zip with all the artifacts by clicking “Download”, or get the individual nautilus-dev.flatpak from “Browse”. To install it you can either open it with gnome-software (probably KDE Discover too) or use the following command:

flatpak install --bundle nautilus-dev.flatpak

Caching in-between builds

In order to introduce caching in between CI runs we just need to add the following lines. In principle this should work, but there seem to be frequent cache misses that I am still investigating. If anyone is able to reduce the misses somehow please let me know.

cache:
  paths:
    - .flatpak-builder/cache

Complete config

Here is what the nautilus .gitlab-ci.yml config for the Flatpak job looks like right now. It might have changed slightly depending on when you read this.

flatpak:
  image: registry.gitlab.com/alatiera/gnome-nightly-oci/gnome-master:latest
  stage: test
  script:
    - flatpak-builder --stop-at=nautilus app build-aux/flatpak/org.gnome.Nautilus.json
    # Make sure to keep this in sync with the Flatpak manifest, all arguments
    # are passed except the config-args because we build it ourselves
    - flatpak build app meson --prefix=/app --libdir=/app/lib -Dprofile=development -Dtests=all _build
    - flatpak build app ninja -C _build install
    - flatpak-builder --finish-only --repo=repo app build-aux/flatpak/org.gnome.Nautilus.json
    # Make a Flatpak Nautilus bundle for people to test
    - flatpak build-bundle repo nautilus-dev.flatpak --runtime-repo=https://sdk.gnome.org/gnome-nightly.flatpakrepo org.gnome.NautilusDevel
    # Run automatic tests inside the Flatpak env
    - xvfb-run -a -s "-screen 0 1024x768x24" flatpak build app ninja -C _build test
  artifacts:
    paths:
      - nautilus-dev.flatpak
      - _build/meson-logs/meson-log.txt
      - _build/meson-logs/testlog.txt
    expire_in: 2 days
  cache:
    paths:
      - .flatpak-builder/cache

Going forward

The current config is not ideal yet. We have to keep it in sync with various parts of the Flatpak manifest and essentially replicate half of the functionality that’s specified in it. It would be nice if we could use the usual oneliner to build the Flatpak instead.

Also, the cache-miss issues mentioned above are driving me mad: when setting up gnome-builder’s CI, it would spend 11-12 minutes rebuilding all the dependencies and half a minute building gnome-builder itself. What seems to happen is that it hits the ostree cache points for gnome-builder, but rejects them for the modules/dependencies. I never figured out why, sadly.

The Wikipedia page of Xvfb states that it has been replaced by xf86-video-dummy since X.org 7.8 and someone should probably look into using that for the tests instead.

But this setup should be good enough to get you started. If you are an app maintainer and want to set this up but don’t have time, or you are having trouble with something, I want to hear from you! Feel free to ping me anytime in #gnome-hackers or send me an email.

Zeeshan Ali: Joining Collabora

Mon, 02/04/2018 - 12:35 PM
Last Thursday was my last day at Kinvolk. While Kinvolk is a great company run by very nice and talented folks and I really hoped to work there for a very long time, unfortunately it turned out not to be the best fit for me.

While I am sad to leave, I am also very excited to join Collabora in their multimedia team. Today is my first day there. Since Collabora does not exist in Germany, I will be working for them as a consultant and had to register my own one-man company. Yes, I will be staying in Berlin and working from home.

I have known Collabora from its very early days, when it was just a few developers with a vision and passion for Open Source. I also worked very closely with Collabora over the years during the good old Nokia Maemo/Meego times. I always had great appreciation for their work and commitment to Open Source, which is a huge challenge for a consulting company.

While I do not yet know which specific projects I will be involved in at Collabora, I'm most likely going to be working on/with GStreamer again and I'm especially excited about that. Also exciting for me is the fact that people at Collabora share my appreciation for the Rust programming language.

Umang Jain: Recipes hackfest and joining Endless

Sat, 31/03/2018 - 5:10 PM

The Recipes hackfest took place at AMIKOM University in Yogyakarta, Indonesia last month, and I was fortunate to be a part of it. We planned an outreach event for the students at the university centered around what open source is, getting involved, and what we do as a whole in the GNOME community. The event was terrific and was followed by a great kick-off for the hackfest.

Recipes is a content-driven application. We initially brainstormed ideas to get a better hold of the current state of things and create a roadmap for the different areas involved: design, collecting/generating recipes from users, the database layer, and localization and translation.

Regarding the storage layer, there was quite a bit of discussion about leveraging the Endless framework in the form of content shards mapped to a custom data model. The data model is then bound to the concept of UI cards, which helps generate the view. There are more details on Emmanuele Bassi’s blog.

I was very interested in solving the problem of localization and translation of recipes. Both come with a set of challenges when it comes to providing a good user experience. We tried to list the use cases of the application and managed to talk to some students (who were in local chef schools) to get a better understanding. It’s natural to keep the original language in which a recipe was written by the user as its default language. However, translations are also needed so that anyone referring to the recipe has the option to read it in other languages. So there will be a need for contributors who know the source language well enough to translate it adequately into English at least, and perhaps from there into other languages.

Similarly, localization of recipes is also an interesting case, in which we need to analyse what exactly users are looking for in the application. Are they interested in their local recipes, trying to find recipes from a certain cuisine, or both? Certain interesting cases, like the availability of ingredients in one’s country/area and possible substitutes, are open-ended questions that we tried to address.

To sum up, it was a valuable experience for me to connect with some of the pioneers of the community. Thanks to AMIKOM University for hosting us. I would also like to thank the GNOME community for sponsoring my travel for the hackfest.

Joining Endless

On a side note, this was my first week at Endless. The onboarding experience is great and I am very excited about Endless in general. Special thanks to Cosimo Cecchi who guided me all through the process. Delighted to start my career at a great FLOSS-oriented company!

Thank you for reading and happy hacking!

Marcus Lundblad: Maps, Gitlab, and Meson

Wed, 28/03/2018 - 10:28 PM
It's been a couple of weeks now since GNOME 3.28 was released. I've already written a bit about the new features in Maps for 3.28, but there is already some exciting news looking forward. The first item is not related to code or features of Maps itself: the project has been migrated to GNOME's GitLab instance (along with other projects, now that the mass migration of the remaining projects from cgit and Bugzilla is going on). I think this will simplify newcomer contributions and bug reporting quite a bit. Also, the code review interface for merge requests looks pretty nice.

The other big news is that Maps is now built using Meson. Even though the amount of compiled code (the private C library we use for interfacing with things like libxml2 and libfolks, which doesn't natively support GIR bindings) is quite small, I still think build times are noticeably quicker now. I decided to remove support for building with Autotools right away, since we had some shell-ish magic going on when installing our icons, where a shell subprocess ran cut to parse file names into destination paths based on splitting on underscore characters. So I took the opportunity to clean this up and move the icons into suitable directory structures directly in the repo. I didn't think it was worth the effort to "back-port" this to the Autotools build system, so from now on master can only be built with Meson (of course, on the stable "gnome_3_28" branch building is still done the old way using Autotools).

Unfortunately we were without tile access from Mapbox for a little over a week recently, but a couple of weeks ago the GNOME Foundation board voted to set aside a budget for tiles, so things should be good now.

And since it's boring to write blog posts without any screenshots, and in three weeks I'm going to San Francisco for work, I loaded transit data from MUNI into my OpenTripPlanner server instance and did some cable car "browsing". Notice the nice little icons (made by Andreas Nilsson):

Jiri Eischmann: Why I use Flatpak for 3rd party apps

Wed, 28/03/2018 - 5:22 PM

There are several reasons to run applications as Flatpaks. Some people want to have the latest versions as soon as possible; for me, as a user of Fedora, which provides up-to-date versions of apps, this is not a big motivation. Some people want to run apps more securely; again, I usually trust software provided by Fedora, and the Flatpak sandbox is still not as strictly enforced as it ideally should be.

But where I really prefer using Flatpak to RPM packages are 3rd party applications. I’m usually running development versions of Fedora. Pre-releases on my work machine and Rawhide on my home laptop. They have been pretty stable for me, including applications in Fedora repositories. Unfortunately it’s not the case for 3rd party applications. Their authors usually don’t follow distro development closely and although many of them bundle as much as possible to avoid problems with changing dependencies, things break.

I used to use the Spotify client as a package from Negativo17 repos. But when I upgraded to F27, something broke and it stopped working. I’m pretty sure it was fixed later on after people started reporting it, but I didn’t want to stop using Spotify for the time being and didn’t have time to debug and report the issue. So I switched to Spotify flatpak and it has worked for me ever since.

The problem I had with very early stages of pre-release Fedora and Rawhide is that dependencies of GStreamer plugins in RPM Fusion were usually broken, so I ended up without system multimedia codecs. It was often the case for VLC from the same repository, too. Then I switched to the VLC and GNOME MPV flatpaks. Problem solved.

The last example is Telegram. Until recently I was using the official version. It’s not even provided as an RPM package: you have to download an archive, unpack its content to your home directory, run the binary which creates a desktop file… not very elegant in 2018, but once you do it, it just works. Well… until it doesn’t. I upgraded to F28 and Telegram suddenly took a lot of time to start up. It hung on some fontconfig error until it timed out and finally started; it easily took a minute. So I switched to the Telegram flatpak as well. Works like a charm.
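
For reference, a minimal sketch of switching to the Flatpak versions of these apps (assuming the Flathub remote is already configured, and using the application IDs as published on Flathub at the time of writing):

flatpak install flathub com.spotify.Client
flatpak install flathub org.telegram.desktop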

So what I really appreciate about Flatpak is that the apps don’t rely on the underlying system, so if the system changes, e.g. due to an upgrade to a newer major version, the apps don’t break. As I said, it’s not such a major issue for distro-provided apps, but it’s certainly an issue for 3rd party apps, and Flatpak solved it for me.

Andre Klapper: Statistics, Google Code-in, Gitlab, Bugzilla

Sat, 24/03/2018 - 6:50 PM

Richard Hughes: LVFS Mailing List

Fri, 23/03/2018 - 8:35 PM

I have created a new low-volume lvfs-announce mailing list for the Linux Vendor Firmware Service, which will only be used to make announcements about new features and planned downtime. If you are interested in what’s happening with the LVFS you can subscribe here. If you need to contact me about anything LVFS-related, please continue to email me (not the mailing list) as normal.