
Planet GNOME

Planet GNOME - https://planet.gnome.org/
Updated: 1 day 23 hours ago

Richard Hughes: Growing the fwupd ecosystem

Tue, 19/11/2019 - 1:26pm

Yesterday I wrote a blog about what hardware vendors need to provide so I can write them a fwupd plugin. A few people contacted me telling me that I should make it more generic, as I shouldn’t be the central point of failure in this whole ecosystem. The sensible thing, of course, is growing the “community” instead, and building up a set of (paid) consultants that can help the OEMs and ODMs, only getting me involved to review pull requests or for general advice. This would certainly reduce my current feeling of working at 100% and trying to avoid burnout.

As a first step, I’ve created an official page that will list any consulting companies that I feel are suitable to recommend for help with fwupd and the LVFS. The hardware vendors would love to throw money at this stuff, so they don’t have to care about upstream project release schedules and dealing with a grumpy maintainer like me. I’ve pinged the usual awesome people like Igalia, and hopefully more companies will be added to this list during the next few days.

If you do want your open-source consultancy to be added, please email me a two paragraph corporate-friendly blurb I can include on that new page, also with a link I can use for the “more details” button. If you’re someone I’ve not worked with before, you should be in a position to explain the difference between a capsule update and a DFU update, and be able to tell me what a version format is. I don’t want to be listing companies that don’t understand what fwupd actually is :)

Richard Hughes: Google and fwupd sitting in a tree

Mon, 18/11/2019 - 4:41pm

I’ve been told by several sources (but not by Google directly, heh) that from Christmas onwards the “Designed for ChromeBook” sticker requires hardware vendors to use fwupd rather than random non-free binaries. This makes a lot of sense for Google, as all the firmware flash tools I’ve seen the source for are often decades old, contain layers upon layers of abstractions, have dubious input sanitisation and are quite horrible to use. Many are setuid, which doesn’t make me sleep well at night, and I suspect the same goes for the security team at Google. Most vendor binaries are built for a specific ODM hardware device, and all but one of them use no kind of source control or formal review process.

The requirement from Google has caused mild panic among silicon suppliers and ODMs, as they’re having to actually interact with an open source upstream project and a slightly grumpy maintainer that wants to know lots of details about hardware that doesn’t implement one of the dozens of existing protocols that fwupd supports. These are companies that have never had to deal with working with “outside” people to develop software, and it probably comes as quite a shock to the system. To avoid repeating myself these are my basic rules when adding support for a device with a custom protocol in fwupd:

  • I can give you advice on how to write the plugin if you give me the specifications without signing an NDA, and/or the existing code under an LGPLv2+ license. From experience, we’ll probably not end up using any of your old code in fwupd, but the error defines and function names might be similar, and I don’t want anyone to get “tainted” from looking at non-free code, so it’s safest all round if we have some reference code marked with the right license that actually compiles on Fedora 31. Yes, I know asking the legal team about releasing previously-nonfree code with a GPLish licence is difficult.
  • If you are running Linux, and want our help to debug or test your new plugin, you need to be running Fedora 30 or 31. If you run Ubuntu you’ll need to use the snap version of fwupd, and I can’t help you with random Ubuntu questions or interactions between the snap version and the distro version. I know your customer might be running Debian Stable or Ubuntu LTS, but that’s not what I’m paid to support. If you do use Fedora 29+ or RHEL 7+ you can also use the nice COPR I provide with git snapshots of master.
  • Please reflect the topology of your device. If writes have to go through another interface, passthru or IC, please give us access to documentation about that device too. I’m fed up having to reverse engineer protocols from looking at the “wrong side” of the client source code. If the passthru is implemented by a different vendor, they’ll need to work on the same terms as this.
  • If you want to design and write all of the plugin yourself, that’s awesome, but please follow the existing style and don’t try to wrap your existing code base with the fwupd plugin API. If your device has three logical children with different version numbers or firmware formats, we want to see three devices in fwupdmgr. If you want to restrict the child devices to a parent vendor, that’s fine, we now support that in fwupd and on the LVFS. If you’re adding custom InstanceIDs, these have to be documented in the README.md file.
  • If you’re using a nonstandard firmware format (as in, not DFU, Intel HEX or Motorola SREC) then you’ll need to write a firmware parser that’s going to be valgrind’ed and fuzzed. We will need all the header/footer documentation so we can verify the parser and add some small redistributable fuzz targets. If the blob is being passed to the hardware without parsing, you still might need to know the format of the header so that the plugin can do a sanity check that the firmware is suitable for the hardware, and that any internal CRC is actually correct. All the firmware parsers have to be paranoid and written defensively, because it’s me that looks bad on LWN if CVEs get issued.
  • If you want me to help with the plugin, I’m probably going to ask for test hardware, and two different versions of the firmware that can actually be flashed to the hardware you sent. A bare PCB is fine, but if you send me something please let me know so I can give you my personal address rather than have to collect it from a Red Hat office. If you send me hardware, ensure you also include a power supply that’s going to work in the UK, e.g. 240V. If you want it back, you’ll also need to provide me with a UPS/DHL collection sticker.
  • You do need to think about how to present your device version number, e.g. is 0x12345678 meant to be presented as “12.34.5678” or “18.52.86.120” – the LVFS really cares if this is correct, and users want to see the “same” version numbers as on the OEM web-page.
  • You also need to know if the device is fully functional during the update, or if it operates in a degraded or bootloader mode. We also need to know what happens if flashing fails, e.g. is the device a brick, or is there some kind of A/B partition that makes a flash failure harmless? If the device is a brick, how can it be recovered without an RMA?
  • After the update is complete fwupd needs to “restart” the device so that the new firmware version can be verified, so there needs to be some kind of command the device understands – we can ask the user to reboot or re-plug the device if this is the only way to do this, although in 2019 we can really do better than that.
  • If you’re sharing a huge LGPLv2+ lump of code, we need access to someone who actually understands it, preferably the person that wrote it in the first place. Typically the code is uncommented and a recipe for a headache so being able to ask a human questions is invaluable. For this, either IRC, email or even just communicating via a shared Google doc (more common than you would believe…) is fine. I can’t discuss this stuff on Telegram, Hangouts or WhatsApp, sorry.
  • Once a plugin exists in fwupd and is upstream, we will expect pull requests to add either more VID/PIDs, #defines or to add variations to the protocol for new versions of the hardware. I’m going to be grumpy if I just get sent a random email with demands about backporting all the VID/PIDs to Debian stable. I have zero control on when Debian backports anything, and very little influence on when Ubuntu does a SRU. I have a lot of influence on when various Fedora releases get a new fwupd, and when RHEL gets backports for new hardware support.
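The version-number bullet above deserves a worked example. The following Python sketch is illustrative only, not fwupd code, and the function names are made up; it just shows how the same raw 32-bit value 0x12345678 legitimately becomes two different user-visible strings depending on the declared version format:

```python
def version_quad(v: int) -> str:
    # four 8-bit fields read as decimal numbers: 0x12345678 -> "18.52.86.120"
    return ".".join(str((v >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def version_bcd_like(v: int) -> str:
    # hex nibbles read as literal decimal digits: 0x12345678 -> "12.34.5678"
    return f"{(v >> 24) & 0xFF:02x}.{(v >> 16) & 0xFF:02x}.{v & 0xFFFF:04x}"

print(version_quad(0x12345678))      # 18.52.86.120
print(version_bcd_like(0x12345678))  # 12.34.5678
```

Both outputs are defensible interpretations of the same bytes, which is exactly why the plugin has to declare which one the OEM actually uses.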

Now, if all this makes me sound like a grumpy upstream maintainer then I apologize. I’m currently working with about half a dozen silicon suppliers who all failed some or all of the above bullets. I’m multiplexing myself with about a dozen companies right now, and supporting fwupd isn’t actually my entire job at Red Hat. I’m certainly not going to agree to “signing off a timetable” for each vendor as none of the vendors actually pay me to do anything…

Given interest in fwupd has exploded in the last year or so, I wanted to post something like this rather than have a 10-email back and forth about my expectations with each vendor. Some OEMs and even ODMs are now hiring developers with Linux experience, and I’m happy to work with them as fwupd becomes more important. I’ve already helped quite a few developers at random vendors get up to speed with fwupd and would be happy to help more. As the importance of fwupd and the LVFS grows more and more, vendors will need to hire developers who can build, extend and support their hardware. As fwupd grows, I’ll be asking vendors to do more of the work, as “get upstream to do it” doesn’t scale.

Matthew Garrett: Extending proprietary PC embedded controller firmware

Mon, 18/11/2019 - 9:19am
I'm still playing with my X210, a device that just keeps coming up with new ways to teach me things. I'm now running Coreboot full time, so the majority of the runtime platform firmware is free software. Unfortunately, the firmware that's running on the embedded controller (a separate chip that's awake even when the rest of the system is asleep and which handles stuff like fan control, battery charging, transitioning into different power states and so on) is proprietary and the manufacturer of the chip won't release data sheets for it. This was disappointing, because the stock EC firmware is kind of annoying (there's no hysteresis on the fan control, so it hits a threshold, speeds up, drops below the threshold, turns off, and repeats every few seconds - also, a bunch of the Thinkpad hotkeys don't do anything) and it would be nice to be able to improve it.

A few months ago someone posted a bunch of fixes, a Ghidra project and a kernel patch that lets you overwrite the EC's code at runtime for purposes of experimentation. This seemed promising. Some amount of playing later and I'd produced a patch that generated keyboard scancodes for all the missing hotkeys, and I could then use udev to map those scancodes to the keycodes that the thinkpad_acpi driver would generate. I finally had a hotkey to tell me how much battery I had left.

But something else included in that post was a list of the GPIO mappings on the EC. A whole bunch of hardware on the board is connected to the EC in ways that allow it to control them, including things like disabling the backlight or switching the wifi card to airplane mode. Unfortunately the ACPI spec doesn't cover how to control GPIO lines attached to the embedded controller - the only real way we have to communicate is via a set of registers that the EC firmware interprets and does stuff with.

One of those registers in the vendor firmware for the X210 looked promising, with individual bits that looked like radio control. Unfortunately writing to them does nothing - the EC firmware simply stashes that write in an address and returns it on read without parsing the bits in any way. Doing anything more with them was going to involve modifying the embedded controller code.

Thankfully the EC has 64K of firmware and is only using about 40K of that, so there's plenty of room to add new code. The problem was generating the code in the first place and then getting it called. The EC is based on the CR16C architecture, which binutils supported until 10 days ago. To be fair it didn't appear to actually work, and binutils still has support for the more generic version of the CR16 family, so I built a cross assembler, wrote some assembly and came up with something that Ghidra was willing to parse except for one thing.

As mentioned previously, the existing firmware code responded to writes to this register by saving it to its RAM. My plan was to stick my new code in unused space at the end of the firmware, including code that duplicated the firmware's existing functionality. I could then replace the existing code that stored the register value with code that branched to my code, did whatever I wanted and then branched back to the original code. I hacked together some assembly that did the right thing in the most brute force way possible, but while Ghidra was happy with most of the code it wasn't happy with the instruction that branched from the original code to the new code, or the instruction at the end that returned to the original code. The branch instruction differs from a jump instruction in that it gives a relative offset rather than an absolute address, which means that branching to nearby code can be encoded in fewer bytes than going further. I was specifying the longest jump encoding possible in my assembly (that's what the :l means), but the linker was rewriting that to a shorter one. Ghidra was interpreting the shorter branch as a negative offset, and it wasn't clear to me whether this was a binutils bug or a Ghidra bug. I ended up just hacking that code out of binutils so it generated code that Ghidra was happy with and got on with life.
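The sign confusion is easy to reproduce in miniature. This is a hypothetical Python illustration, not actual CR16C instruction semantics, of how the same displacement bits read differently depending on the assumed width of the offset field:

```python
def to_signed(value: int, bits: int) -> int:
    """Interpret `value` as a two's-complement signed integer of width `bits`."""
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

disp = 0x90  # a forward displacement that only fits in a wide encoding
print(to_signed(disp, 8))   # -112: a short field reads it as a backwards branch
print(to_signed(disp, 16))  # 144: a wider field gives the intended forward branch
```

If the linker silently rewrites a long branch encoding into a short one, a disassembler that trusts the short field width will report a negative offset, which matches the behaviour described above.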

Writing values directly to that EC register showed that it worked, which meant I could add an ACPI device that exposed the functionality to the OS. My goal here is to produce a standard Coreboot radio control device that other Coreboot platforms can implement, and then just write a single driver that exposes it. I wrote one for Linux that seems to work.

In summary: closed-source code is more annoying to improve, but that doesn't mean it's impossible. Also, strange Russians on forums make everything easier.


Julita Inca: LAS 2019: A GNOME + KDE conference

Mon, 18/11/2019 - 4:21am

Thanks to the sponsorship of GNOME, I was able to attend the Linux App Summit 2019 held in Barcelona. This conference was hosted by two free desktop communities, GNOME and KDE. The technologies usually used to create applications are GTK and Qt, and the objective of this conference was to present ongoing application projects that run on many Linux platforms and beyond, on both desktops and mobiles. The ecosystem involved, the commercial side and the project manager perspective were also presented over three core days. I had the chance to hear some talks as pictured: Adrien Plazas, Jordan and Tobias, and Florian are pictured in the first place. The keynote, titled “The Economics of FOSS”, was given by Mirko Boehm; Valentin and Adam Jones spoke for Freedesktop SDK, and Nick Richards pointed out the “write” strategy. You might see more details on Twitter.

Women’s presence was very noticeable at this conference. As shown in the following picture, UX designer Robin presented a communication approach to understand what users want, developer Heather Ellsworth explained her work at Ubuntu making GNOME Snap apps, and the enthusiastic Aniss from the OpenStreetMap community gave a lightning talk about her experiences making a FOSS community stronger. At the bottom of the picture we see the point of view of the database admin, Shola; the KDE developer, Hannah; and the closing ceremony presented by Muriel (local team organizer).

On Friday, some BoFs were held. The engagement BoF led by Nuritzi is pictured first, followed by the KDE team. The Snap Packaging Workshop happened in the meeting room.

Lightning talks were also part of this event at the end of every day. Nuritzi was given a prize for her effort in running the event. Thanks Julian & Tobias for joining me to see Park Güell. Social events were also arranged: we started a tour from Casa Batlló and walked towards the Gothic quarter. The tours happened at night after the talks, and lasted 1.5 hours. Food expenses were covered by GNOME in my case, as well as for other members. Thanks!

My participation basically consisted of giving a talk in the unconference part; I organized the GNOME games with Jorge (a local organizer) and I wrote some GTK code in C with Matthias. The games started with the “Glotones”, where we used flans to eat quickly; the “wise man”, where lots of Linux questions were asked; and the “Sing or die” game, where the participants changed the lyrics of catchy songs using the words GNOME, KDE and LinuxAppSummit. Some of the moments were pictured as follows. The imagination of the teams was fantastic; they sang and created “geek” choreographies as we requested. One of the games that lasted until the very end was “Guessing the word”; the words depicted in the photo are LAS, root, and GPL, played by Nuritzi, Neil, and Jordan, respectively. It was lovely to see again long-time GNOME members such as Florian, who is always supporting my ideas for the GNOME games; the generous, intelligent and funny Javier Jardon; and the famous GNOME developer Zeeshan, who also loves Rust and airplanes.

It was also delightful to meet new people. I met GNOME people such as Ismael, and Daniel, who helped me to debug my mini GTK code. I also met KDE people such as Albert and Muriel. In the last photo, we are pictured with the “wise man” and the “flan man”.

Special thanks to the local organizers Jorge and Aleix, to Ismael for supporting me through almost the whole conference with my flu, and to Nuritzi for the sweet chocolates she gave me. The group photo was a success, and overall, I liked the LAS 2019 event in Barcelona.

Barcelona is a place with novel architectures and I enjoyed the walking time there…

Thanks again GNOME. I will finish the image-reconstruction GTK code I started at this event, and make it run in parallel on HPC machines in the near future.

Jussi Pakkanen: What is -pipe and should you use it?

Mon, 18/11/2019 - 12:28am
Every now and then you see people using the -pipe compiler argument. This is particularly common in vintage handwritten makefiles. Meson even uses the argument by default, but what does it actually do? The GCC manpages say the following:

-pipe
    Use pipes rather than temporary files for communication
    between the various stages of compilation.  This fails
    to work on some systems where the assembler is unable to
    read from a pipe; but the GNU assembler has no trouble.

So presumably this is a compile speed optimization. What sort of an improvement does it actually provide? I tested this by compiling LLVM on my desktop machine both with and without the -pipe command line argument. Without it I got the following time:

ninja  14770,75s user 799,50s system 575% cpu 45:04,98 total

Adding the argument produced the following timing:

ninja  14874,41s user 791,95s system 584% cpu 44:41,22 total

This is an improvement of less than one percent. Given that I was using my machine for other things at the same time, the difference is most likely statistically insignificant.
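The sub-one-percent figure follows directly from the two wall-clock totals:

```python
# wall-clock totals from the two ninja runs, converted to seconds
no_pipe = 45 * 60 + 4.98     # 45:04,98
with_pipe = 44 * 60 + 41.22  # 44:41,22

speedup = (no_pipe - with_pipe) / no_pipe
print(f"{speedup:.2%}")  # 0.88%
```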

This argument may have been needed in ye olden times of supporting tens of broken commercial unixes. Nowadays the only platform where this might make a difference is Windows, given that its file system is a lot slower than Linux's. But is its pipe implementation any faster? I don't know, and I'll let other people measure that.
The "hindsight is perfect" design lesson to be learned

Looking at this now, it is fairly easy to see that this command line option should not exist. Punting to the user the responsibility of knowing whether files or pipes are faster (or even work) on any given platform is poor usability. Most people don't know that, and the performance characteristics of operating systems change over time. Instead this should be handled inside the compiler with logic roughly like the following:
    if(assembler_supports_pipes(...) &&
       pipes_are_faster_on_this_platform(...)) {
        communicate_with_pipes();
    } else {
        communicate_with_files();
    }

Molly de Blanc: GNOME Patent Troll Defense Fund reaches nearly 4,000 donors!

Sat, 16/11/2019 - 4:56pm

A lot has happened since our announcement that Rothschild Imaging Ltd was alleging that GNOME is violating one of their patents. We wanted to provide you with a brief update of what has been happening over the past few weeks.

Legal cases can be expensive, and the cost of a patent case can easily reach over a million dollars. As a small non-profit, we decided to reach out to our community and ask for financial support towards our efforts to keep patent trolls out of open source. More than 3,800 of you have stepped up and contributed to the GNOME Patent Troll Legal Defense Fund. We’d like to sincerely thank everyone who has donated. If you need any additional documentation for an employer match, please contact us.

Individuals aren’t the only supporters of this initial fundraiser. The Debian Project generously reached out with a donation and Igalia also donated to support our legal efforts.

There’s been a wonderful outpouring of support from the free and open source software communities. The Software Freedom Conservancy issued a statement. Meanwhile the Open Invention Network is lending a hand in the search for examples of prior art.

We set ourselves an ambitious fundraising goal of $1.1 million to support our defense. We expect the majority of this to be raised from corporate sponsorship, but we’re going to keep working for more individual and community donations. Please share our GiveLively donation page with your social media networks. If you’re a non-profit that has issued (or is interested in issuing) a statement of support, we’d love to hear from you.

If you want to receive updates on the case, please sign up for the GNOME Legal Updates List.

Hans de Goede: Creating a USB3 OTG cable for the Thinkpad 8

Fri, 15/11/2019 - 9:10pm
The Lenovo Thinkpad 8 and the Asus T100CHI both have a USB3 micro-B connector, but using a standard USB3 OTG (USB3 micro-B to USB3-A receptacle) cable results in only USB2 devices working. USB3 devices are not recognized.

Searching the internet shows that many people have this problem and that the solution is to find a USB3 micro-A to USB3-A receptacle cable. This sounds like nonsense to me, as micro-B really is micro-AB and is supposed to use the ID pin to automatically switch between modes depending on the cable used; and this does work for the USB2 parts of the micro-B connector on the Thinkpad. Yet people do claim success with such cables (with a more square micro-A connector, instead of the trapezoid micro-B connector). The only problem is that such cables are not for sale anywhere.

So I guessed that this means they have swapped the RX and TX superspeed pairs on the USB3-only part of the micro-B connector, and I decided to cut open one of my USB3 micro-B to USB3-A receptacle cables and swap the superspeed pairs. Here is what the cable looks like when it is cut open:



If you are going to make such a cable yourself: to get this picture I first removed the outer plastic insulation (part of it is still there on the right side of this picture). Then I folded away the shield wires until they were all on one side (wires at the top of the picture). After this I removed the metal foil underneath the shield wires.

Having removed the first 2 layers of shielding reveals 4 wires in the standard USB2 colors (red, black, green and white) and 2 separately shielded cable pairs. In the picture above the separately shielded pairs have been cut, giving us 4 pairs, 2 on each end of the cable; and the shielding has been removed from 3 of the 4 pairs, so you can still see the shielding on the 4th pair.

A standard USB3 cable uses the following color codes:

  • Red: Vbus / 5 volt

  • White:  USB 2.0 Data -

  • Green: USB 2.0 Data +

  • Black: Ground

  • Purple: Superspeed RX -

  • Orange: Superspeed RX +

  • Blue: Superspeed TX -

  • Yellow: Superspeed TX +

So to swap RX and TX we need to connect purple to blue / blue to purple and orange to yellow / yellow to orange, resulting in:
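The rewiring can be written down as a simple crossover table. Here is a small Python sketch of the mapping just described, using the standard color code listed above:

```python
# swap the superspeed RX and TX pairs: each wire on one side of the cut
# is joined to its counterpart pair on the other side
SWAP = {
    "purple": "blue",    # RX- joins TX-
    "orange": "yellow",  # RX+ joins TX+
    "blue":   "purple",  # TX- joins RX-
    "yellow": "orange",  # TX+ joins RX+
}

# the swap is its own inverse: two such cables in series act as a straight cable
assert all(SWAP[SWAP[wire]] == wire for wire in SWAP)
```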



Note the wires are just braided together here, not soldered yet. This is a good moment to carefully test the cable. Also note that the superspeed wire pairs must be length matched, so you need to cut and strip all 8 wires to the same length! If everything works you can put some solder on those braided-together wires, re-test after soldering, and then cover them with some heat-shrink tube:



And then cover the entire junction with a bigger heat-shrink-tube:



And you have a superspeed capable cable even though no one will sell you one.

Note that the Thinkpad 8 supports ACA mode, so if you get an ACA capable "Y" cable or an ACA charging HUB then you can charge and use the Thinkpad 8 USB port at the same time. Typically ACA "Y" cables and hubs are USB2 only, so the superspeed mod from this blogpost will not help with those. The Asus T100CHI has a separate USB2 micro-B just for charging, so you do not need anything special there to charge + connect a USB device.

Travis Reitter: Catching Java exceptions in Swift via j2objc

Fri, 15/11/2019 - 6:04am

Java can be converted to Objective-C and called from Swift code in an iOS app with the tool j2objc. I wrote this piece on how to handle Java exceptions from this Swift code.

Travis Reitter: Long-term betting on dependencies

Fri, 15/11/2019 - 6:03am

Years ago, when I started planning my cocktail app, I looked at options for code re-use between Android and iOS. Critically, I knew it would take me a while to get the first platform release out so I was worried any tool I expected to use might be unmaintained by then.

I found the tool j2objc and it looked really promising:

  • open source
  • development funded by Google
  • it was being relied upon by Google for most of their apps on both mobile platforms (plus the Web)

So, I could always adapt the tool for my own needs if it did get abandoned. But the maintainers had significant motivation and resources to maintain the project so that seemed like a low risk anyhow.

I didn't imagine Android and iOS would change so much in the time it took to get my Android app completed. Both platforms even changed their primary development language in that span along with a lot of best practices and recommended components. Even though both platforms do a great job of keeping their documentation up-to-date and the most relevant pieces easy to find, learning to develop on one of these platforms is challenging. There still is a lot of outdated information out there (be it Stack Overflow posts or an incredible number of tutorials) and there are stacks and stacks of components to learn. Expectations for modern mobile apps are really high so the number of details to get right can be daunting.

Then to build your app for the other platform you get to start all over! :)

Thankfully, my bet on j2objc proved to be a good one. It's actively maintained by very helpful developers and works as expected. I've completed most of the risky work in porting the core of my app to iOS and any work I do on that core benefits the apps on both platforms.

There are very few compromises I have to make because language features in Java map surprisingly well to both Objective-C and Swift.

But one important exception remained. I'll cover that in a subsequent post.


Christian Kellner: fwupd and bolt power struggles

Thu, 14/11/2019 - 1:52pm

As readers of this blog might remember, there is a mode where the firmware (BIOS) is responsible for powering the Thunderbolt controller. This means that if no device is connected to the USB Type-C port, the controller will be physically powered down. The obvious upside is battery savings. The downside is that, for a system in that state, we cannot tell if it has a Thunderbolt controller, nor determine any of its properties, like the firmware version. Luckily, there is an interface to tell the firmware (BIOS) to "force-power" the controller. The interface is a write-only sysfs attribute. The writes are not reference counted, i.e. two separate commands to enable the force-power state followed by a single disable will indeed disable the controller. For some time boltd and the firmware update daemon both directly poked that interface. This led to some interference, which in turn caused strange timing bugs. The canonical example goes like this: fwupd force-powers the controller, uevents are triggered and Thunderbolt entries appear in sysfs. The boltd daemon is started via udev+systemd activation. The daemon initializes itself and starts enumerating and probing the Thunderbolt controller. Meanwhile fwupd is done with its thing and cuts the power to the controller. That makes boltd and the controller sad, because they were still in the middle of getting to know each other.
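The missing reference counting is the crux here. A minimal Python sketch (names and semantics assumed for illustration, not boltd's actual implementation) of what a mediating daemon has to do that the raw sysfs attribute does not:

```python
class ForcePowerGuard:
    """Reference-counted force-power mediation, of the kind a daemon like
    boltd can provide on top of the non-counted sysfs attribute."""

    def __init__(self, write_sysfs):
        self._write = write_sysfs  # callable taking "1" or "0"
        self._count = 0

    def acquire(self):
        self._count += 1
        if self._count == 1:
            self._write("1")  # first client powers the controller up

    def release(self):
        if self._count == 0:
            raise RuntimeError("unbalanced release")
        self._count -= 1
        if self._count == 0:
            self._write("0")  # last client powers it down

# with the raw sysfs interface, fwupd's single disable would have cut power
# while boltd was still probing; with counting, it does not:
writes = []
guard = ForcePowerGuard(writes.append)
guard.acquire()  # fwupd starts a firmware update
guard.acquire()  # boltd starts probing
guard.release()  # fwupd finishes: controller stays powered
guard.release()  # boltd finishes: now it powers down
print(writes)    # ['1', '0']
```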

boltctl power -q can be used to inspect the current force power settings

To fix this issue, boltd gained a force-power D-Bus API and fwupd in turn gained support for using that new API. No more fighting over the force-power sysfs interface. So far so good. But an unintended side effect of that change was that bolt was now always being started, indirectly by fwupd via D-Bus activation, even if there was no Thunderbolt controller in the system to begin with. Since the daemon currently does not exit even if there is no Thunderbolt hardware1, you have a system daemon running but not doing anything useful. This understandably made some people unhappy (rhbz#1650881, lp#1801796). I recently made a small change to fwupd which should do away with this issue: before making a call to boltd, fwupd now itself checks if the force-power facility is present. If not, it doesn't bother asking boltd and starting it in the process. The change is included in fwupd 1.3.3. Now both machines and people should be happy, I hope.

Footnotes:
  1. That is a complicated story that needs new systemd features. See #92 for the interesting technical details.

Daniel García Moreno: LAS 2019, Barcelona

Thu, 14/11/2019 - 12:00am
The event

The Linux App Summit (LAS) is a great event that brings together a lot of Linux application developers from the bigger communities. It's organized by GNOME and KDE in collaboration, and it's a good place to talk about the Linux desktop, application distribution and development.

This year the event was held in Barcelona, which is not too far from my home town, Málaga, so I wanted to be there.

I sent a talk proposal and it was accepted, so I talked about distributing services with flatpak and the problems related to service deployment in a flatpaked world.

Clicking on this image, you can find my talk in the event streaming. The sound is not too good and my accent doesn't help, but there it is :D

The event was really great, with really good talks about different topics: we had some technical talks, some talks about design, talks about language, about distribution, about the market and economics, and at least two about "removing" the system tray 😋

The talk about the "future" inclusion of payments in flathub was really interesting, because I think this will give people a new incentive to write and publish apps on flathub, and it could be a great step towards getting donations to developers.

Another talk that I liked was the one about the maintenance of flatpak repositories. It's always interesting to know how things work, and this talk gave an easy introduction to ostree, flatpak, repositories and application distribution.

Besides the talks, this event is really interesting for the people it brings together. I've been talking with a lot of people (not too many, because I'm a shy person), but I had the opportunity to talk a bit with some Fractal developers, and during a coffee chat with Jordan Petridis we had time to share some ideas about a cool new functionality that maybe we can implement in the near future, thanks to the Outreachy program and maybe some help from the GStreamer people.

I'm also very happy to have been able to spend some time talking with Martín Abente about Sugar Labs, the Hack computer and the different ways to teach kids with free software. Martín is a really nice person, and I really enjoyed meeting him and sharing some thoughts.

The city

This is not my first time in Barcelona; I was here at the beginning of this year. But this is a great city and I didn't have time to visit all the places the first time.

So I spent Thursday afternoon doing some tourism, visiting the "Sagrada Familia" and the "Montjuïc" fountain.

If you have never been to Barcelona and you have the opportunity to come here, don't hesitate: it's a really good city, with great architecture to admire, really nice culture and people, and good food to enjoy.

Thank you all

I was sponsored by the GNOME Foundation, and I'm really thankful for this opportunity to come here, give a talk and share some time with the great people that make the awesome Linux and open source community possible.

I want to thank my employer, Endless, because it's a real privilege to have a job that allows this kind of interaction with the community, and my team, Hack, because I missed some meetings this week and was not very responsive.

And I want to thank the LAS organization, because this was a really good event. Good job; you can be very proud.

Sebastian Dröge: The GTK Rust bindings are not ready yet? Yes they are!

Wed, 13/11/2019 - 4:02pm

When talking to various people at conferences over the last year, a recurring topic was that they believed the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that used to be true, it is not anymore. But even more important: unless you want to create your own special widgets, you don’t need it. Many examples and tutorials in other languages make use of inheritance/subclassing for the application’s architecture, but that’s because it is the idiomatic pattern in those languages. In Rust, however, other patterns are more idiomatic, and even for those examples and tutorials in other languages subclassing wouldn’t be the one and only way to design applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!

P.S.

Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want something different from the usual signal/callback spaghetti code, also take a look at those.