
Planet GNOME

Updated: 3 days 16 hours ago

Will Thompson: Using Vundle from the Vim Flatpak

Tue, 07/08/2018 - 6:21 pm

I (mostly) use Vim, and it’s available on Flathub. However: my plugins are managed with Vundle, which shells out to git to download and update them. But git is not in the org.freedesktop.Platform runtime that the Vim Flatpak uses, so that’s not going to work!

If you’ve read any of my recent posts about Flatpak, you’ll know my favourite hammer. I allowed Vim to use flatpak-spawn by launching it as:

flatpak run --talk-name=org.freedesktop.Flatpak org.vim.Vim

I saved the following file to /tmp/git:

#!/bin/sh
exec flatpak-spawn --host git "$@"

then ran the following Vim commands to make it executable, add it to the path, then fire up Vundle:

:r !chmod +x /tmp/git
:let $PATH = '/tmp:/app/bin:/usr/bin'
:VundleInstall

This tricks Vundle into running git outside the sandbox. It worked!

I’m posting this partly as a note to self for next time I want to do this, and partly to say “can we do better?”. In this specific case, the Vim Flatpak could use org.freedesktop.Sdk as its runtime, like many other editors do. But this only solves the problem for tools like git which are included in the relevant SDK. What if I’m writing Python and want to use pyflakes, which is not in the SDK?
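For tools like pyflakes that are in no runtime or SDK, the same flatpak-spawn trick generalizes to one tiny wrapper per host command. A rough sketch, assuming the app was granted --talk-name=org.freedesktop.Flatpak as above (the pyflakes name and the /tmp location are only illustrative):

```shell
# Generate a wrapper that forwards an arbitrary command to the host.
# "pyflakes" is just an example; any host tool name works.
tool=pyflakes
cat > "/tmp/$tool" <<EOF
#!/bin/sh
exec flatpak-spawn --host $tool "\$@"
EOF
chmod +x "/tmp/$tool"
# Put the wrapper directory on PATH so the sandboxed app finds it first
export PATH="/tmp:$PATH"
cat "/tmp/$tool"
```

This is exactly the /tmp/git trick above, parameterized; it still relies on the host actually having the tool installed.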

Umang Jain: DevConf India 2018

Tue, 07/08/2018 - 1:10 pm

DevConf IN was organized at Christ University, Bangalore, on 04-05 August. It turned out to be a totally fun-packed, exciting weekend for me. I really had a great time meeting people from various other open source communities in India. I also delivered a talk on Flatpak, mainly focusing on the overall architecture and its benefits for users and developers.

Honestly speaking, I didn’t expect much from DevConf India, but the event was flawless in every sense. Even though it was sponsored and organized by Red Hat, 48% of the speakers were non-Red-Hatters coming from other organizations, academia and hobbyists. It really brought out the vibe of the “community”.

The most popular track, in my opinion, was the “Cloud and Containers” track. There were other tracks like “Testing”, “Community” and “Design”, which are often neglected at other conferences but are a big part of the SDLC. The keynotes by Ric Wheeler and Karanbir Singh were full of inspiration. Approximately 1300 participants attended the conference.

My talk on Flatpak went smoothly and I managed to keep it within the designated time slot \o/. I also got some questions and follow-ups, which I think was a good sign. During the first couple of minutes I also plugged EndlessOS and the problems it is trying to address using open source. Adrian also delivered a talk on Flatpak-ing apps.

I particularly engaged with folks (Sayan and Sinny) from Team Silverblue and FSMK. I also received this coffee mug swag from Team Silverblue :)

I am amazed and glad that the DevConf organizers pulled off an event at such a scale. They put in a lot of hard work and it paid off. Thank you organizers, volunteers, speakers and participants for making this such an amazing experience.

Next, I am leaving for GNOME.Asia in Taipei in a couple of days; another fun-packed weekend. I am really excited to meet people from the GNOME and openSUSE communities. I will also be visiting Endless’s Taipei office to meet my colleagues from the kernel team :)

Thank you for reading and happy hacking!

Rohit Kaushik: Improving todo.txt & Todoist plugin

Tue, 07/08/2018 - 12:08 am

The GSoC coding period just ended. I would first like to apologize for not posting updates about my work. I have been working on improving the Todo.txt and Todoist integrations in GNOME To Do. During the coding period, a lot of improvements landed in both plugins, and in this blog post I write about my journey and describe the implementation details.

Todo.txt & Todoist Updates

For those not familiar with todo.txt, it is a text-based format for storing tasks, and Todoist is a quite popular task manager application that can be used on smartphones as well as personal computers.

My aim for Todo.txt was to improve the current code, bring the plugin to feature parity with To Do, and document the syntax of todo.txt. I was able to complete all of these tasks.

  1. The first task was to implement support for notes in the todo.txt plugin. Notes are basically a lengthier description of the task. Since the todo.txt format allows adding custom key:value pairs to a task description, I introduced a new key, “note”, whose value follows the key in quotes “”. The only major challenge was handling quotes and special characters such as newlines in the description. This was done by storing the notes in escaped form (i.e. adding an extra escape character before the special character and removing it during the parsing phase).
  2. Adding support for creation and completion dates – As per the todo.txt format, the completion date comes just after the priority and the creation date comes after the completion date, i.e. x (priority) completion-date creation-date. Parsing the completion and creation dates was easy using the last seen token. I had to subclass GtdTask into GtdTodoTxtTask because we need support for setting the creation date. As the tasks are stored as text, the creation date needs to be cached and set at first load. This was done by subclassing GtdTask and adding a setter for that date.
  3. Adding support for list background – I noticed that the list background color was not being saved to todo.txt, and hence the color was lost on exit and restart of To Do. After discussion with my mentor, we decided to add two hidden custom lines to todo.txt; here, hidden just means a marker “h:1” at the start of the line, and such lines are used to denote To Do-specific features: one custom line for storing list background colors and the other for storing lists. Below is an example. For this task, I had to change the parser a bit. It was slightly confusing at first because I didn’t have a clear understanding of function pointers (needed for the parser vtable implementation), but feaneron helped me with it.
      h:1 Lists @test @test2
      h:1 Colors test:#729fcf test2:#5c3566
  4. And now the most important task: implementing the subtask feature. Subtasks were supported in an early stage of the plugin but were removed because of instability. It wasn’t very clear to me how to go about it, but eventually we decided to use indentation to denote a parent-child relation between tasks described on consecutive lines. An indentation of 4 spaces means the task is a child of the previous task. The algorithm works by creating all the tasks and keeping them in a hash table keyed by task list. Once all the tasks are created, the subtask relations are determined using the indentation property.
  5. Finally, I documented the syntax of todo.txt so that new users find it easier to use the plugin. The documentation is present here.
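Putting these extensions together, a todo.txt file written by the plugin might look something like the following (a made-up example; the exact lines To Do emits may differ):

```
h:1 Lists @gsoc
h:1 Colors gsoc:#729fcf
x (A) 2018-08-05 2018-08-01 Write final report @gsoc note:"Link all merged commits"
    2018-08-02 Proof-read the report @gsoc
```

The two h:1 lines carry the To Do-specific list and color data, the completed task shows the priority / completion-date / creation-date ordering, and the 4-space indent marks the last task as a subtask of the previous one.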

With these changes, Todo.txt is much more stable and supports all the features that To Do does. Todoist had 3 major pieces of work to be done: fixing the network issue, auto-syncing tasks, and removing GOA so that the Todoist plugin handles accounts on its own.

  • The network connectivity issue was handled by adding a network monitor and listening for network changes. In case To Do cannot connect to Todoist, the provider is removed, and hence no data loss happens due to user changes. The screenshot below shows that wifi connectivity loss/gain is now handled by Todoist.
  • The autosync patch is not merged yet but is very close to being merged into master. To reduce the number of sync requests to Todoist, we manage the interval at which to sync based on window focus. When the user focuses To Do for the first time, a sync request is generated immediately and the timeout is set to 30 seconds. But if the focus is not on To Do, the timeout is set to 10 minutes.
  • The final piece of work is making the Todoist plugin handle its own accounts, using the keyring to store tokens. This work is still in progress, and unfortunately I wasn’t able to get it merged. I have added patches for the changes to the preferences panel and the keyring implementation, but I still have to integrate these two changes and also modify the plugins and providers. I will continue working on this and finish the implementation.

Finally, I would really like to thank my mentor Feaneron for guiding me and helping me whenever I was stuck, reviewing my code which was filled with code style errors and being patient enough to keep reminding me about my mistakes. I would also like to thank GNOME for giving me this great opportunity.

Tobias Mueller: Talking at GUADEC 2018 in Almería, Spain

Mon, 06/08/2018 - 6:07 pm

I’ve more or less just returned from this year’s GUADEC in Almería, Spain, where I got to talk about assessing and improving the security of our apps. My main point was to make people use ASan, which I think Michael liked. Secondarily, I wanted to raise awareness of the security sensitivity of some seemingly minor bugs, and argue that the importance of getting fixes out to users should outweigh blame-shifting games.

I presented a three-stage approach to assessing and improving the security of your app: compile time, runtime, and fuzzing. First, you use some hardening flags to compile your app. Then you can use amazing tools such as ASan or Valgrind. Finally, you can combine this with afl to find bugs in your code. Bonus points if you do that as part of your CI.
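As a toy illustration of the first two stages (this example is mine, not from the talk, and the flag set is a common baseline rather than a fixed recipe):

```shell
# A deliberately buggy program: one-byte heap buffer overflow
cat > /tmp/overflow.c <<'EOF'
#include <stdlib.h>
int main(void) {
    char *buf = malloc(8);
    buf[8] = 'x';   /* out-of-bounds write: ASan aborts here */
    free(buf);
    return 0;
}
EOF
# Stage 1: compile-time hardening flags, plus ASan instrumentation
gcc -O1 -g -fstack-protector-strong -fsanitize=address \
    -o /tmp/overflow /tmp/overflow.c
# Stage 2: at runtime, ASan prints a heap-buffer-overflow report and aborts
/tmp/overflow || echo "ASan caught the overflow"
```

The same instrumented binary is then what you would feed afl test cases for the third stage.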

I encountered a few problems when going that route with Flatpak. For example, the ASan runtime library is not in the Platform image, so you have to use an extension to have it loaded. It’s better than it used to be, though: when I tried to compile loads of apps with ASan in the past, I needed to compile a custom GCC. And then mind the circular dependencies, e.g. libmpfr is needed by GCC; if I then compile libmpfr with ASan, GCC stops working, because gcc itself is not linked against ASan. It seems silly to have those annoyances in the stack. And it is. I hope that by making people play around with these technologies a bit more, we can get to a point where we no longer have to catch these time-consuming bugs.

Panorama in Frigiliana

The organisation around the presentation was a bit confusing, as the projector didn’t work for the first ten minutes, and it was a bit unclear who was responsible for making it work. The audio in that room was also wonky. I hope it went alright after all.

Tobias Mueller: GNOME Keysign 0.9.8 released

Mon, 06/08/2018 - 6:07 pm

It’s been a while since my last post. This time, we have many exciting news items to share. For one, we have a new release of GNOME Keysign which fixes a few bugs here and there and introduces Bluetooth support. That is, you can transfer your key to your buddy via Bluetooth, without needing a network connection. In fact, it is becoming more and more popular for WiFi networks to block clients from talking to each other. A design goal is (or rather: was, see below) to not require an Internet connection, simply because it opens up a can of worms of potential failures and attacks. Now you can transfer the key even if your WiFi doesn’t let you communicate with the other machine. Of course, both of you need to have Bluetooth hardware and have it enabled.

The other exciting news is the app being on Flathub. Now it’s easier than ever to install the app. Simply go to Flathub and install it from there. This is a big step towards getting the app into users’ hands. And the sandbox makes the app a bit more trustworthy, I hope.

flatpak remote-add --if-not-exists flathub
flatpak install flathub org.gnome.Keysign

The future brings cool changes. We already have patches lined up that bring an Internet transport to the app. Yeah, that’s contrary to what I said a few paragraphs above. And it does cause some issues in the UI, because we do not necessarily want the user to use the Internet if the local transport just works. But that “if” is unfortunately getting bigger and bigger. So I’m happy to have a mix of transports now. I’m wondering what the best way is to expose that information to the user, though. Do we add a button for the potentially privacy-invading act of connecting to the Internet? If we do, then why do we not offer buttons for the other transports like Bluetooth or the local network?

Anyway, stay tuned for future updates.

Eisha Chen-yen-su: Add a message context menu for Fractal

Mon, 06/08/2018 - 5:53 pm

Fractal is a Matrix client for GNOME and is written in Rust. Matrix is an open network for secure, decentralized communication.

As I promised in the previous article, I’m going to talk about my implementation of the message context menu. My work started from this issue, which asked for a “right-click” menu to interact with the messages. Pressing the secondary click would make a popover appear with a menu from which you could:

  • Reply to a message, or more exactly, insert a quote of the message at the beginning of the message input (there is a new reply feature in Matrix, but this is not about that).
  • Copy the body of a message.
  • View the JSON source of a message. This may be removed in the future, as it is mainly for debugging purposes.
  • Request the redaction of a message. Please note that messages cannot be deleted on Matrix homeservers, as all the events of a room contain important information about its structure (more information here). However, they can be “redacted”: all the information that is not essential to the room structure (so not the event ID, the sender ID, the date, etc.) can be removed from the event (such as its body).

Here was the original design for this menu:

The message menu popover

First implementation

I will first talk about how I implemented the popover for the context menu. I used Glade to design the popover menu, and I added a “View Source” button to it (see this commit). Then I had to figure out how to make the menu pop up.

The room history is made with a GtkListBox, and each message is a GtkListBoxRow. Inside this GtkListBoxRow, there are various nested GtkBoxes that compose the layout of the message, with the sender’s avatar, their display name, the date the message was sent and the text of the message (or the image, when it is an image). Here is what it looks like:

What I wanted to do first was to make the menu pop up right above the GtkListBoxRow. My first attempt was to individually connect the popup function to the “button-press-event” signal (and only for the secondary click) of each widget composing the message, but there were two major problems with this method:

  • You had to click exactly on the GtkEventBox enclosing the user’s display name, or on one of the GtkLabels composing the message body, to have the popover actually appear. The avatar, the date widget and the GtkBoxes enclosing the message’s elements weren’t receiving the “button-press-event” signal; I couldn’t figure out why.
  • The popover appeared right above the widget that received the “button-press-event” signal, not above the GtkListBoxRow of the message.

As a workaround, instead of what was done in the first place, I enclosed the GtkBoxes of the message layout in a GtkEventBox and connected the “button-press-event” signal to it. With that, I got the expected result. See this commit for more details.


I was asked to make the popover appear right where the pointer is (and positioned downwards by default) instead of at the top of the GtkEventBox. I had some difficulties with it, but I managed to do it in this MR and in this one.

The “View Source” dialog

First implementation

In order to implement this functionality, I first added a source field to the Message structure, and I used the function serde_json::to_string_pretty in order to have well-formatted JSON source for the messages. More details in this commit.

Then I made the interface of the GtkMessageDialog for displaying the message’s source (see it here), and simply connected the “View Source” button to showing the dialog and updating its content (in this commit).


There was a problem with the “View Source” dialog: when opening the source of a very long message, the GtkMessageDialog became very large. It would also be good to have JSON syntax highlighting for the message source. So I moved the place where the code is shown to a GtkSourceView wrapped in a GtkScrolledWindow and activated JSON syntax highlighting. See this MR for more details.

I also modified the message source view in this MR to be a modal window instead of a dialog. Finally, I made some visual fixes in order to have better JSON syntax highlighting: the “classic” style scheme was making the text pink, and that was the only color in the highlighting. So I changed the style scheme to “kate”, which has proper JSON syntax highlighting (have a look at this MR for more details).

The “Copy Text” action

I simply connected the “Copy Text” button to a function that copies the content of the message’s body to the clipboard (the details are here).

The “Reply” action

I connected the “Reply” button to a function that inserts the message’s body at the beginning of the message input; it also adds “> ” at the beginning of each line and ensures that there are exactly two line returns at the end of the body (more details here).

The “Delete Message” action

To implement the redaction of messages, I had to add a command to the backend so that it can request the redaction of messages from homeservers. The redaction can be done with a PUT HTTP request to the server with this path at the end of the URL: /_matrix/client/r0/rooms/{roomId}/redact/{eventId}/{txnId} (where roomId and eventId identify the event whose redaction is being requested, and txnId is the transaction ID, a randomly generated string; more details here). You can see the commit which implements this command here.
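For illustration, the redaction endpoint can be sketched from a shell (the homeserver, room and event IDs below are hypothetical; Fractal builds the same URL in Rust, and real IDs would additionally need URL-encoding):

```shell
# Hypothetical identifiers; a real client uses its own homeserver and IDs
HOMESERVER="https://matrix.example.org"
ROOM_ID="!room:example.org"
EVENT_ID="event1:example.org"
TXN_ID="txn$(date +%s)"   # any client-unique string works as transaction ID
URL="$HOMESERVER/_matrix/client/r0/rooms/$ROOM_ID/redact/$EVENT_ID/$TXN_ID"
echo "$URL"
# The actual request (requires a valid access token):
# curl -X PUT -H "Authorization: Bearer $TOKEN" -d '{}' "$URL"
```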

Then I connected the “Delete Message” button to a function that calls the backend command I previously introduced (more details here).

Final result

This is what the popover finally looks like:

And for the message source view:

Daniel Espinosa: Vala+GDA: An experiment

Mon, 06/08/2018 - 3:53 pm

I’m working on GNOME Data Access, now on its GTK+ library, especially on its Browser, a GTK+ application that is more of a demo than a real database access tool.

GDA’s Browser shows off notable features in GDA’s non-GUI library; for example, support for creating a single connection binding two or more connections to different databases from different providers (SQLite, PostgreSQL, MySQL), so you can write a single query and get a table result combining data from different sources. More details in another post.

The Experiment

Browser is getting back to life, but in the process I found crashes here and there. Fixing them requires walking through the code, and I found lots of opportunities to port it to newly available techniques for defining GObject/GInterface APIs. Doing that work is probably too much for a single person.

So, what about using Vala to produce C code faster, in order to rewrite existing objects or write new ones?

This experiment will be for the 7.0 development cycle. For the next 6.0 release, I would like to fix bugs as far as possible.

This experiment may help answer another question: is it possible to port Vala code to C in a form accepted by upstream projects, like GXml’s DOM4 API into GLib?

Aditya Manglik: The CPU (Consuming Power Unlimited)

Sun, 05/08/2018 - 11:08 pm

The CPU forms the core of any modern computing machine. To be able to perform operations at clock speeds as high as 4.5 GHz, CPUs need enormous power. In the electronics world, the (dynamic) power consumed by a component is estimated as:

Power = Capacitance × Voltage² × Frequency

As we can observe, an increase in clock speed directly translates to increased power draw. An Ivy Bridge i7 die of 177 mm² dissipating 100 W implies a power density of 565 kW/m², while the power density of the sun’s radiation at the surface of the earth is approximately 1.4 kW/m² (which is why overclocking burns the chip). Modern CPUs consume more than 200 W of power at peak performance, and often represent the biggest drain on your laptop’s battery, followed by the GPU and the display.
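The power-density figure checks out; a quick back-of-the-envelope computation with the numbers above:

```shell
# Power density = dissipated power / die area
awk 'BEGIN {
    power_w = 100        # W dissipated by the die
    area_m2 = 177e-6     # 177 mm^2 expressed in m^2
    printf "%.0f kW/m^2\n", power_w / area_m2 / 1000
}'
# prints: 565 kW/m^2
```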

To save power, manufacturers developed the concept of P-states and C-states, which basically shut down some parts of the CPU according to the current load. Any modern CPU (assuming a recent manufacturing process, <45 nm) should consume less than 10 W in the idle state. An interesting fact to note here is the power consumption of the fans that keep the system cool. A single fan can eat up to 5 W, and a group of them (a common use case) may together consume as much as 20 W, which can overshadow the motherboard, RAM and CPU combined.

Users have indirect influence over the CPU’s power states and battery use through their choice of power settings. The TDP number associated with a chip is the maximum power for which the cooling system needs to be designed, not the actual power drawn in day-to-day scenarios. Any report on power consumption that states something like “the processor’s power consumption is 35 watts” is both false and imprecise, because the actual number is far more dynamic and depends on multiple factors.

One might assume that a CPU-heavy application should increase only CPU power usage, but the motherboard also has to supply data at higher rates, which translates to increased disk I/O. This implies that the motherboard, buses, RAM and disks all consume more power to deliver the higher data throughput. Since these subsystems are intertwined and hardware manufacturers rarely provide actual power consumption numbers, our best bet for estimating the individual power draws is to develop regression models that predict them. This is precisely what PowerTop does.


Tim Janik: Beast 0.12.0-beta.1

Sat, 04/08/2018 - 4:05 pm
Long time no see. It’s been a while since the last Beast release, mainly because we have a lot of code migrations going on, some of which caused severe regressions. Here are the tarball and Debian package:…

Fabian: UI polishing and auto completion

Sat, 04/08/2018 - 2:52 pm

During my Google Summer of Code project, I am implementing message search for Dino, an XMPP client focusing on ease of use and security.

Jumping to results

For each hit in the search results, three messages are (partially) displayed: the actual hit and the messages before and after it. The hit is clickable, and clicking it jumps to that message in the conversation history. The clicked message is highlighted with a CSS animation that fades a background color in and out.

Empty placeholders

I added placeholders to clarify the state where no results are shown because nothing has been searched yet, and the state where no results are shown because there were no matching results. Following the GNOME HIG, the placeholders contain an icon, a state description and suggestions on how to proceed.

Minor change of plans

In my UI mockups, I planned to collapse the search sidebar into only displaying the search entry after a hit was clicked. This was supposed to let users know that a search is still active and that they can resume the search, without requiring much screen space.

However, this introduced more states to the search feature and thus left more room for confusion. Also, reopening the search from the collapsed state would need a click/shortcut, while completely reopening the search would require the same amount of interaction. Thus, the collapsed state does not save the user any interaction steps and might be harder to understand.

Instead, the search text is simply not removed from the search entry when clicking on a result. When users want to resume the search to explore other results, they can open the search again and will find the old search results. The text is selected, so simply typing over it is possible. The search text is still reset when changing conversations.

Bug hunting

I had to hunt down and fix some display issues in the history search, and other bugs I had introduced while refactoring parts of the conversation view structure.

During the final days

I have been working on user name auto-completion and nice display of filters. There is some final work and testing to be done, and then I can open a PR!

Matthias Clasen: Flatpak portal experiments

Sat, 04/08/2018 - 5:21 am

One of the signs that a piece of software is reaching a mature state is its ability to serve use cases that nobody had anticipated when it was started. I’ve recently had this experience with Flatpak.

We have been discussing some possible new directions for the GTK+ file chooser. And it occurred to me that it might be convenient to use the file chooser portal as a way to experiment with different file choosers without having to change either GTK+ itself or the applications.

To verify this idea, I wrote a quick portal implementation that uses the venerable GTK+ 2 file chooser.

Here is Corebird (a GTK+ 3 application) using the GTK+ 2 file chooser to select an image.

Daniel García Moreno: mdl

Fri, 03/08/2018 - 1:22 pm

Last month I wrote a blog post about the LMDB cache database and my wish to use it in Fractal. To summarize, LMDB is a memory-mapped key-value database that persists the data to the filesystem. I want to use it in the Fractal desktop application to replace the current state storage system (we're using simple JSON files), and as a side effect we can use this storage system to share data between threads, because currently we're using a big struct AppOp shared with Arc<Mutex<AppOp>>, and this causes some problems because we need to share, lock and update the state there.

The main goal is to define an app data model with smaller structs and store them using LMDB; then we can access the same data by querying LMDB, and we can update the app state by storing to LMDB.

With this change we don't need to share these structs; we only need to query LMDB to get the data and then work with it, and this should simplify our code. The other main benefit is that we'll have this state in the filesystem by default, so when we open the app after closing it, we'll be in the same state.

Take a look at the gtk TODO example app to see how to use mdl with signals in a real GTK app.

What is mdl

mdl is a data model library to share app state between threads and processes and persist the data in the filesystem. It implements a simple way to store struct instances in an LMDB database, with other backends like BTreeMap.

I started playing with the LMDB Rust binding and writing some simple tests. After those, I decided to write a simple abstraction to hide the LMDB internals and provide simple data storage, and to do that I created the mdl crate.

The idea is to be able to define your app model as simple Rust structs. LMDB is a key-value database, so every struct instance will have a unique key under which it is stored in the cache.

The keys are stored in order in the cache, so we can use some techniques to store related objects and to retrieve all objects of a kind; we only need to build the keys correctly, following a scheme. For example, for Fractal we can store rooms, members and messages like this:

  • rooms with key "room:roomid", to store all the room information, title, topic, icon, unread msgs, etc.
  • members with key "member:roomid:userid", to store all member information.
  • messages with key "msg:roomid:msgid" to store room messages.

Following this key assignment, we can iterate over all rooms by querying all objects whose keys start with "room", and we can get all members and all messages of a room.

This has some inconveniences, because we can't query a message directly by id if we don't know the roomid. If we need that kind of query, we need to think about another key assignment, or maybe we should duplicate data. Key-value stores are simple databases, so we don't have the power of relational databases.


LMDB is fast and efficient because it's memory-mapped, so using this cache won't add a lot of overhead; but to make it simple to use I had to add some overhead, so mdl is easy by default and can be tuned to be really fast.

This crate has three main modules with traits to implement:

  • model: This contains the Model trait that should be implemented by every struct we want to make cacheable.
  • store: This contains the Store trait that's implemented by all the cache systems.
  • signal: This contains the Signaler trait and two structs that allow us to emit/subscribe to "key" signals.

And two more modules that implement the current two cache systems:

  • cache: LMDB cache that implements the Store trait.
  • bcache: BTreeMap cache that implements the Store trait. This is a good example of another cache system that can be used; this one doesn't persist to the filesystem.

So we have two main concepts here: the Store and the Model. The model is the plain data and the store is the container of data. We can add models to the store or query the store to get stored models. We store our models as key-value pairs where the key is a String and the value is a Vec<u8>, so every model should be serializable.

This serialization is the biggest overhead added. We need it because we have to be able to store the data in the LMDB database. Every request creates a copy of the object in the database, so we're not using the same data. This could be tuned to use pointers to the real data, but that would require unsafe code, and I think the performance we would gain doesn't justify the complexity it would add.

By default, the Model trait has two methods fromb and tob to serialize and deserialize using bincode, so any struct that implements the Model trait and doesn't reimplement these two methods should implement Serialize and Deserialize from serde.

The signal system is an addition that lets us register callbacks for key modifications in the store, so we can do something when a new object is added, modified or deleted from the store. The signaler is optional and is used in an explicit way.

How to use it

First of all, you should define your data model, the struct that you want to be able to store in the database:

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

In this example we define a struct called A with two attributes: p1, a String, and p2, a u32. We derive Serialize and Deserialize because we're using the default fromb and tob from the Model trait.

Then we need to implement the Model trait:

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

We only reimplement the key method to build a key for every instance of A. In this case our key will be the String followed by the number, so for example if we have something like let a = A { p1: "myk".to_string(), p2: 42 };, the key will be "myk:42".

Then, to use this, we need a Store; in this example we'll use the LMDB store, which is the struct Cache:

// initializing the cache. This str will be the fs persistence path
let db = "/tmp/mydb.lmdb";
let cache = Cache::new(db).unwrap();

We pass the filesystem path where we want to persist the cache as the first argument; in this example we'll persist to "/tmp/mydb.lmdb". When we run the program for the first time, a directory will be created there. The next time, that cache will be used with the information from the previous execution.

Then, with this cache object, we can instantiate an A object and store it in the cache:

// create a new *object* and storing it in the cache
let a = A { p1: "hello".to_string(), p2: 42 };
let r =;
assert!(r.is_ok());

The store method serializes the object and stores a copy of it in the cache.

After storing, we can query for this object from another process, using the same LMDB path, or from the same process using the cache:

// querying the cache by key and getting a new *instance*
let a1: A = A::get(&cache, "hello:42").unwrap();
assert_eq!(a1.p1, a.p1);
assert_eq!(a1.p2, a.p2);

We get back a copy of the original object.

This is the full example:

extern crate mdl;
#[macro_use]
extern crate serde_derive;

use mdl::Cache;
use mdl::Model;
use mdl::Continue;

#[derive(Serialize, Deserialize, Debug)]
struct A {
    pub p1: String,
    pub p2: u32,
}

impl Model for A {
    fn key(&self) -> String {
        format!("{}:{}", self.p1, self.p2)
    }
}

fn main() {
    // initializing the cache. This str will be the fs persistence path
    let db = "/tmp/mydb.lmdb";
    let cache = Cache::new(db).unwrap();

    // create a new *object* and store it in the cache
    let a = A { p1: "hello".to_string(), p2: 42 };
    let r =;
    assert!(r.is_ok());

    // querying the cache by key and getting a new *instance*
    let a1: A = A::get(&cache, "hello:42").unwrap();
    assert_eq!(a1.p1, a.p1);
    assert_eq!(a1.p2, a.p2);
}

Iterations

When we store objects whose keys share a prefix, we can iterate over all of them even though we don't know every object's full key.

Currently there are two ways to iterate over all objects with the same key prefix in a Store:

  • all

This is the simpler of the two: calling the all method returns a Vec<T>, so we get all the objects in a vector.

let hellows: Vec<A> = A::all(&cache, "hello").unwrap();
for h in hellows {
    println!("hellow: {}", h.p2);
}

This has a drawback: if there are a lot of objects, the vector uses a lot of memory and we end up iterating over all the objects twice. To solve this, the iter method was created.

  • iter

The iter method calls a closure for every object whose key has the given prefix. The closure should return a Continue(bool) indicating whether to keep iterating or stop.

A::iter(&cache, "hello", |h| {
    println!("hellow: {}", h.p2);
    Continue(true)
}).unwrap();

Using Continue we can avoid iterating over all the objects, for example when we're searching for one concrete object.
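As a standalone sketch of that early-stop pattern (modeling the store as a plain BTreeMap rather than the real mdl Cache, and Continue as a local type), a prefix scan that bails out on the first match looks like this:

```rust
use std::collections::BTreeMap;

// Stand-in for mdl's Continue(bool): true = keep going, false = stop.
struct Continue(bool);

// Call `f` for every (key, value) whose key starts with `prefix`,
// stopping as soon as the closure asks us to.
fn iter_prefix<F>(store: &BTreeMap<String, String>, prefix: &str, mut f: F)
where
    F: FnMut(&str, &str) -> Continue,
{
    for (k, v) in store.range(prefix.to_string()..) {
        if !k.starts_with(prefix) {
            break; // past the prefix range in the sorted map
        }
        if !f(k, v).0 {
            break; // closure requested early stop
        }
    }
}

fn main() {
    let mut store = BTreeMap::new();
    store.insert("hello:1".to_string(), "a".to_string());
    store.insert("hello:2".to_string(), "b".to_string());
    store.insert("other:9".to_string(), "c".to_string());

    // Search for one concrete object, stopping as soon as it's found.
    let mut found = None;
    iter_prefix(&store, "hello", |k, v| {
        if v == "b" {
            found = Some(k.to_string());
            Continue(false) // stop here, skip the rest
        } else {
            Continue(true)
        }
    });
    println!("{:?}", found); // Some("hello:2")
}
```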

We still copy every object, but iter is better than all: if we don't copy or move the object out of the closure, each copy only lives within the closure's scope, so we use less memory, and we only iterate once. With all, we iterate over every object with that prefix just to build the vector, so iterating over that vector afterwards costs more than the iter version.

Signal system

As I said before, the signal system provides a way to register callbacks for key modifications. It is independent of the Model and Store and can be used on its own:

extern crate mdl;

use mdl::Signaler;
use mdl::SignalerAsync;
use mdl::SigType;

use std::sync::{Arc, Mutex};
use std::{thread, time};

fn main() {
    let sig = SignalerAsync::new();
    sig.signal_loop();
    let counter = Arc::new(Mutex::new(0));

    // one thread for receiving signals
    let sig1 = sig.clone();
    let c1 = counter.clone();
    let t1: thread::JoinHandle<_> = thread::spawn(move || {
        let _ = sig1.subscribe("signal", Box::new(move |_sig| {
            *c1.lock().unwrap() += 1;
        }));
    });

    // waiting for threads to finish
    t1.join().unwrap();

    // one thread for emitting signals
    let sig2 = sig.clone();
    let t2: thread::JoinHandle<_> = thread::spawn(move || {
        sig2.emit(SigType::Update, "signal").unwrap();
        sig2.emit(SigType::Update, "signal:2").unwrap();
        sig2.emit(SigType::Update, "signal:2:3").unwrap();
    });

    // waiting for threads to finish
    t2.join().unwrap();

    let ten_millis = time::Duration::from_millis(10);
    thread::sleep(ten_millis);

    assert_eq!(*counter.lock().unwrap(), 3);
}

In this example we create a SignalerAsync that can emit signals and to which we can subscribe callbacks. The sig.signal_loop(); call starts the signal loop thread, which waits for signals and invokes any subscribed callbacks when a signal arrives.

let _ = sig1.subscribe("signal", Box::new(move |_sig| {
    *c1.lock().unwrap() += 1;
}));

We subscribe a callback to the signaler. The signaler can be cloned, and the list of callbacks is shared: if you emit a signal on one clone and subscribe on another, that signal will still trigger the callback.

Then we emit some signals:

sig2.emit(SigType::Update, "signal").unwrap();
sig2.emit(SigType::Update, "signal:2").unwrap();
sig2.emit(SigType::Update, "signal:2:3").unwrap();

All three of these signals trigger the previous callback, because a subscription matches any signal whose name starts with the subscribed prefix. This lets us subscribe to all new room message insertions, following the keys described earlier, by subscribing to "msg:roomid"; if we only want a callback when a single message is updated, we can subscribe to "msg:roomid:msgid" and that callback won't be triggered for other messages.
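The matching rule itself is just a starts-with test on the signal name. Here's a self-contained sketch of that dispatch, using a hypothetical mini-signaler rather than mdl's actual implementation:

```rust
// Hypothetical mini-signaler showing the "starts with" dispatch rule.
struct MiniSignaler {
    subs: Vec<(String, Box<dyn Fn(&str)>)>,
}

impl MiniSignaler {
    fn new() -> Self {
        MiniSignaler { subs: vec![] }
    }

    fn subscribe(&mut self, prefix: &str, cb: Box<dyn Fn(&str)>) {
        self.subs.push((prefix.to_string(), cb));
    }

    // Every subscription whose prefix is a prefix of `name` fires.
    fn emit(&self, name: &str) {
        for (prefix, cb) in &self.subs {
            if name.starts_with(prefix.as_str()) {
                cb(name);
            }
        }
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let hits = Rc::new(Cell::new(0));
    let h = hits.clone();

    let mut sig = MiniSignaler::new();
    sig.subscribe("msg:room1", Box::new(move |_| h.set(h.get() + 1)));

    sig.emit("msg:room1");    // fires: exact match
    sig.emit("msg:room1:42"); // fires: starts with "msg:room1"
    sig.emit("msg:room2:7");  // does not fire

    println!("{}", hits.get()); // 2
}
```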

The callback should be a Box<Fn(Signal)> where Signal is the following struct:

#[derive(Clone, Debug)]
pub enum SigType {
    Update,
    Delete,
}

#[derive(Clone, Debug)]
pub struct Signal {
    pub type_: SigType,
    pub name: String,
}

Currently only Update and Delete signal types are supported.

Signaler in gtk main loop

All UI operations in a GTK app must run in the GTK main loop, so we can't use SignalerAsync there: that signaler runs callbacks in its own thread, so every callback must implement the Send trait. If we want to modify, for example, a gtk::Label in a callback, that callback won't be Send, because gtk::Label can't be sent between threads safely.

To solve this problem, I've added SignalerSync. It doesn't launch any threads: all operations, including the callbacks, run in the calling thread. This means a callback that blocks will lock up the interface of a GTK app, so every callback used with the sync signaler must be non-blocking.

This signaler is used differently: we call the signal_loop_sync method from time to time, and it checks for new signals and triggers any subscribed callbacks. There is no signal_loop here because we drive the loop from our own thread.

This is an example of how to run the signaler loop inside a gtk app:

let sig = SignalerSync::new();
let sig1 = sig.clone();

gtk::timeout_add(50, move || {
    gtk::Continue(sig1.signal_loop_sync())
});

// We can subscribe callbacks using the sig here

In this example code we register a timeout callback: every 50ms the closure is called from the GTK main thread, and signal_loop_sync checks for signals and invokes the needed callbacks.

This method returns a bool that becomes false when the signaler stops. You can stop the signaler by calling its stop method.

Point of extension

I've tried to make this crate generic so it can be extended in the future to provide other kinds of caches, usable by changing very little code in the apps that use mdl.

This is the main reason the store is implemented with traits. The first point of extension is adding more cache backends: we currently have two, LMDB and BTreeMap, but it would be easy to add other key-value storages, like memcached, UnQLite, MongoDB, Redis, CouchDB, etc.

The signaler is really simple, so maybe we can start thinking about new signalers that use Futures and other kinds of callback registration.

As I said before, mdl copies the data on every write and every read, so it would be good to explore the performance implications of these copies and look for ways to reduce the overhead.

Nick Richards: Pinpoint Flatpak

Pre, 03/08/2018 - 11:53pd

A while back I made a Pinpoint COPR repo in order to get access to this marvelous tool in Fedora. Well, now I work for Endless and the only way you can run apps on our system is in a Flatpak container. So I whipped up a quick Pinpoint Flatpak in order to give a talk at GUADEC this year.

Flatpak is actually very helpful here, since the libraries required are rapidly becoming antique, and carrying them around on your base system is gross as well as somewhat insecure. There isn’t a GUI to create or open files, and it’s somewhat awkward to use if you’re not already an expert, so I didn’t submit the app to Flathub, however you can easily download and install the bundle locally. I hope the two people for whom this is useful find it as useful as I did to make.

Nick Richards: Pinpoint COPR Repo

Pre, 03/08/2018 - 11:53pd

A few years ago I worked with a number of my former colleagues to create Pinpoint, a quick hack that made it easier for us to give presentations that didn’t suck. Now that I’m at Collabora I have a couple of presentations to make and using pinpoint was a natural choice. I’ve been updating our internal templates to use our shiny new brand and wanted to use some newer features that weren’t available in Fedora’s version of pinpoint.

There hasn’t been an official release for a little while and a few useful patches have built up on the master branch. I’ve packaged a git snapshot and created a COPR repo for Fedora so you can use these snapshots yourself. They’re good.

Matthew Garrett: Porting Coreboot to the 51NB X210

Pre, 03/08/2018 - 3:35pd
The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up to date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copy them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage, so we tried. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and if we duplicated that it would probably work. Unfortunately, figuring out what that code was was difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass io ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port io. x86 systems have two separate io buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.
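The protocol those writes follow is an index/data pair: write a register index to the index port (here 0x4e) and its value to the data port (0x4f), with register 0x07 conventionally selecting the logical device and register 0x30 enabling it. Here's a sketch in Rust, modeled against an in-memory register file since real port IO needs hardware access; the register numbers are the conventional SuperIO ones, not necessarily exactly what this EC uses:

```rust
// Simulated SuperIO chip: 256 registers for each of 8 logical devices.
struct SuperIo {
    index: u8,            // last value written to the index port
    ldn: u8,              // currently selected logical device
    regs: [[u8; 256]; 8], // per-LDN register files
}

const REG_LDN_SELECT: u8 = 0x07; // conventional LDN select register
const REG_ENABLE: u8 = 0x30;     // conventional LDN enable register

impl SuperIo {
    fn new() -> Self {
        SuperIo { index: 0, ldn: 0, regs: [[0; 256]; 8] }
    }

    // Models an `outb` to port 0x4e (index) or 0x4f (data).
    fn outb(&mut self, port: u16, val: u8) {
        match port {
            0x4e => self.index = val,
            0x4f => {
                if self.index == REG_LDN_SELECT {
                    self.ldn = val;
                } else {
                    self.regs[self.ldn as usize][self.index as usize] = val;
                }
            }
            _ => {}
        }
    }

    fn ldn_enabled(&self, ldn: u8) -> bool {
        self.regs[ldn as usize][REG_ENABLE as usize] != 0
    }
}

fn main() {
    let mut sio = SuperIo::new();

    // Enable logical device 4 (say, "working EC"):
    sio.outb(0x4e, REG_LDN_SELECT); // select register 0x07...
    sio.outb(0x4f, 4);              // ...and pick LDN 4
    sio.outb(0x4e, REG_ENABLE);     // select the enable register...
    sio.outb(0x4f, 1);              // ...and turn the device on

    println!("{}", sio.ldn_enabled(4)); // true
}
```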

Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control.

Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest.

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time