
Feed aggregator

Andy Wingo: pictie, my c++-to-webassembly workbench

Planet GNOME - Mon, 03/06/2019 - 12:10pm

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

(The pictie demo loads here; it requires JavaScript. See the pictie web page for more information.)

wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts JavaScript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting: it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, people are actively working on removing the incidental interactions with JS that are currently necessary -- bouncing through JS in order to call WebGL -- so that WebAssembly can call platform facilities (WebGL, etc.) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.
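
To give a flavour of what "an algebra of painters" with a functional-style C++ interface can look like, here is a minimal sketch; the types and the beside combinator below are illustrative assumptions in the spirit of the library, not Pictie's actual API (see the repository for that).

  // A hypothetical painter algebra in the functional style described above.
  // These names are illustrative assumptions, not Pictie's real interface.
  #include <functional>
  #include <memory>
  #include <utility>

  struct Vec { double x, y; };
  struct Frame { Vec origin, edge1, edge2; };  // the parallelogram a painter draws into

  struct Canvas {
    void drawLine(Vec, Vec) { /* rasterization omitted in this sketch */ }
  };

  // A painter is simply something that knows how to draw itself into a frame.
  using Painter = std::function<void(Canvas&, const Frame&)>;
  using PainterPtr = std::shared_ptr<const Painter>;

  static PainterPtr makePainter(Painter p) {
    return std::make_shared<const Painter>(std::move(p));
  }

  // Combinator: place two painters side by side, each in one half of the frame.
  static PainterPtr beside(PainterPtr left, PainterPtr right) {
    return makePainter([left, right](Canvas& canvas, const Frame& frame) {
      Vec half = { frame.edge1.x * 0.5, frame.edge1.y * 0.5 };
      Frame leftHalf  = { frame.origin, half, frame.edge2 };
      Frame rightHalf = { { frame.origin.x + half.x, frame.origin.y + half.y },
                          half, frame.edge2 };
      (*left)(canvas, leftHalf);
      (*right)(canvas, rightHalf);
    });
  }

Composing closures behind shared_ptrs like this is exactly the kind of modern C++ idiom whose behaviour and overhead in a WebAssembly build a test-bed like Pictie can help measure.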

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe also take a look at Pictie when you're doing your back-of-the-envelope calculations. You can use it, or a prototype like it, to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should take to have minimal overhead in a WebAssembly context.
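
For context, an embind binding (the strategy Pictie currently uses) looks roughly like the sketch below; the Color class and gray() function are placeholders standing in for whatever a library exposes, not Pictie's actual bindings, and the exact compiler invocation depends on your Emscripten version.

  // Hypothetical embind example: exposing a C++ class and a free function to JS.
  // Typically built with something like: em++ --bind example.cpp -o example.js
  #include <emscripten/bind.h>
  #include <string>

  class Color {
  public:
    Color(double r, double g, double b) : r_(r), g_(g), b_(b) {}
    double red() const { return r_; }
    std::string toCss() const {
      // e.g. "rgb(128,128,128)"; formatting kept simple for the sketch
      return "rgb(" + std::to_string(int(r_ * 255)) + "," +
             std::to_string(int(g_ * 255)) + "," +
             std::to_string(int(b_ * 255)) + ")";
    }
  private:
    double r_, g_, b_;
  };

  Color gray(double level) { return Color(level, level, level); }

  EMSCRIPTEN_BINDINGS(example_module) {
    emscripten::class_<Color>("Color")
        .constructor<double, double, double>()
        .function("red", &Color::red)
        .function("toCss", &Color::toCss);
    emscripten::function("gray", &gray);
  }

On the JavaScript side you would then be able to write something like Module.gray(0.5).toCss() once the generated glue code has loaded.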

I use Pictie as a test-bed when working on the web platform: the weakref proposal, which adds finalization; leak detection; and work on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we have lost a bit of the delightful variety of computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

Richard Hughes: Breaking apart Dell UEFI Firmware CapsuleUpdate packages

Planet GNOME - Sun, 02/06/2019 - 2:21pm

When firmware is uploaded to the LVFS we perform online checks on it. For example, one of the tests looks for known badness like embedded UTF-8/UTF-16 BEGIN RSA PRIVATE KEY strings. As part of this we use CHIPSEC (in the form of chipsec_util -n uefi decode), which searches the binary for a UEFI volume header (a simple string of _FVH) and then decompresses the volumes, which we then read back as component shards. This works well on plain EDK2 firmware, and on the packages uploaded by Lenovo and HP, which use the AMI and Phoenix IBVs. The nice side effect is that we can show the user what binaries have changed, as the vendor might have accidentally forgotten to mention something in the release notes.

The elephants in the room were all the hundreds of archives from Dell, which chipsec could not load because no volume header was detected. I spent a few hours last night adding support for these archives, and the secret is here:

  1. Decompress the firmware.cab archive into firmware.bin, disregarding the signing and metadata.
  2. If CHIPSEC fails to analyse firmware.bin, look for a > 512kB decompress-able Zlib section somewhere after the capsule header, actually in the PE binary.
  3. The decompressed blob is in PFS format, which seems to be some Dell-specific format that’s already been reverse engineered.
  4. The PFS blob is not further compressed and is in one continuous block, and so the entire PFS volume can be passed to chipsec for analysis.

The Zlib start offset seems to jump around for each release, and I’ve not found any information in the original PE file that indicates the offset. If anyone wants to give me a hint to avoid searching the multi-megabyte blob for two bytes (and then testing whether it’s just chance, or indeed a Zlib stream…) I would be very happy, even if you have to remain anonymous.
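
For what it’s worth, the brute-force search is cheap enough to sketch. Something along these lines (not the actual LVFS code; it links against zlib, and the names and threshold are illustrative) scans for plausible Zlib headers and keeps the first candidate that actually inflates to something big enough to be the PFS payload:

  // Sketch of a brute-force Zlib scan over a firmware blob, as described above.
  // Build with something like: g++ -O2 scan.cpp -lz
  #include <cstdint>
  #include <cstdio>
  #include <vector>
  #include <zlib.h>

  // Try to inflate data starting at `offset`; returns the decompressed bytes,
  // or an empty vector if this offset is not a valid Zlib stream.
  static std::vector<uint8_t> try_inflate(const std::vector<uint8_t>& blob,
                                          size_t offset) {
    z_stream zs{};
    if (inflateInit(&zs) != Z_OK)
      return {};
    std::vector<uint8_t> out;
    uint8_t buf[64 * 1024];
    zs.next_in = const_cast<uint8_t*>(blob.data()) + offset;
    zs.avail_in = static_cast<uInt>(blob.size() - offset);
    int ret;
    do {
      zs.next_out = buf;
      zs.avail_out = sizeof(buf);
      ret = inflate(&zs, Z_NO_FLUSH);
      if (ret != Z_OK && ret != Z_STREAM_END) {
        inflateEnd(&zs);
        return {};
      }
      out.insert(out.end(), buf, buf + (sizeof(buf) - zs.avail_out));
    } while (ret != Z_STREAM_END);
    inflateEnd(&zs);
    return out;
  }

  // Scan for a Zlib header (0x78 followed by a byte making the 16-bit value
  // divisible by 31) and return the first stream inflating to >= min_size.
  std::vector<uint8_t> find_zlib_payload(const std::vector<uint8_t>& blob,
                                         size_t min_size = 512 * 1024) {
    for (size_t i = 0; i + 2 < blob.size(); i++) {
      if (blob[i] != 0x78 || ((blob[i] << 8) | blob[i + 1]) % 31 != 0)
        continue;
      std::vector<uint8_t> out = try_inflate(blob, i);
      if (out.size() >= min_size) {
        std::fprintf(stderr, "zlib stream found at offset 0x%zx\n", i);
        return out;  // likely the PFS payload
      }
    }
    return {};
  }

The 512 kB threshold mirrors step 2 above, so tiny runs of bytes that happen to inflate by chance are ignored.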

So, to sum up:

CapsuleHeader → PE Binary → Zlib stream → PFS → FVH → PE DXEs, PE PEIMs, … (each layer wrapping the next)

I’ll see if chipsec upstream wants a patch to do this as it’s probably useful outside of the LVFS too.

Jussi Pakkanen: Looking at why the Meson crowdfunding campaign failed

Planet GNOME - Sat, 01/06/2019 - 11:07pm
The crowdfunding campaign to create a full manual for the Meson build system ended yesterday. It did not reach its 10 000€ goal so the book will not be produced and instead all contributed money will be returned. I'd like to thank everyone who participated. A special thanks goes out to Centricular for their bronze corporate sponsorship (which, interestingly, was almost 50% of the total money raised).
Nevertheless the fact remains that this project was a failure, and a fairly major one at that, since it did not reach even one third of its target. This can not be helped, but maybe we can salvage some pieces of useful information from the ruins.

Some statistics

There were a total of 42 contributors to the campaign. Indiegogo says that a total of 596 people visited the project when it was live. Thus roughly 7% of all people who came to the site participated. It is harder to know how many people saw information about the campaign without coming to the site. Estimating based on the blog's readership, Twitter reach and other sources puts the number at around 5000 globally (with a fairly large margin of error). This would indicate a conversion rate of 1% of all the people who saw any information about the campaign. In reality the percentage is lower, since many of the contributors were people who did not really need convincing. Thus the conversion rate is probably closer to 0.5% or even lower.
The project was set up so that 300 contributors would have been enough to make the project a success. Given the number of people using Meson (estimated to be in the tens of thousands) this seemed like a reasonable goal. Turns out that it wasn't. Given these conversion numbers you'd need to reach 30 000 – 60 000 people in order to succeed. For a small project with zero advertising budget this seems like a hard thing to achieve.

On the Internet everything drowns

Twitter, LinkedIn, Facebook and the like are not very good channels for spreading information. They are firehoses where any one post has an active time of maybe one second if you are lucky. And if you are not, the platforms' algorithms will hide your post because they deem it "uninteresting". Sadly filtering seems to be mandatory, because not having it makes the firehose even more unreadable. The only hope you have is that someone popular writes about your project. In practice this can only be achieved via personal connections.
Reddit-like aggregation sites are not much better, because you have basically two choices: either post on a popular subreddit or on an unpopular one. In the first case your post probably won't even make it onto the front page; all it takes is a few downvotes because the post is "not interesting" or "does not belong here". A post that is not on the front page might as well not exist; no-one will read it. Posting in an unpopular area is no better. Your post is there, but it will reach 10 people, and out of those maybe 1 will click on the link.
News sites are great for getting the information out, but they suffer from the same popularity problem as everything else. A distilled (and only slightly snarky) explanation is that news sites write mainly about two things:
  1. Things they have already written about (i.e. have deemed popular)
  2. Things other news sites write about (i.e. that other people have deemed popular)
This is not really the fault of news sites. They are doing their best at a very difficult job. This is just how the world and popularity work. Things that are popular get more popular because of their current popularity alone. Things that are not popular are unlikely to ever become popular because of their current unpopularity alone.

Unexpected requirements

One of the goals of this campaign (or experiment, really) was to see if selling manuals would be a sustainable way to compensate FOSS project developers and maintainers for their work. If it worked, this would be a good way to provide compensation, because there are already established legal practices for selling books across the world. Transferring money in other ways (donations etc.) is difficult and there may be legal obstacles.
Based on this one experiment, this does not seem to be a feasible approach. Interestingly, multiple people let me know that they would not be participating because the end result would not be released under a free license. Presumably the same people do not tell book store clerks that "I will only buy this Harry Potter book if, immediately after my purchase, the full book is released for free on the Internet". But for some reason there is a hidden assumption that because a person has released something under a free license, they must publish everything else they do under free licenses as well.
These additional limitations make this model of charging for docs really hard to pull off. There is no possibility of steady, long-term money flow, because once a book is out under a free license it becomes unsellable. People will just download the free PDF instead. A completely different model (or implementation of the attempted model) seems to be needed.

So what happens next?

I don't really know. Maybe the book can get published through an actual publisher. Maybe not. Maybe I'll just take a break from the whole thing and get back to it later. But to end on some kind of a positive note, I have extracted one chapter from the book and have posted it here in PDF form for everyone to read. Enjoy.

Christopher Davis: What is a Platform?

Planet GNOME - Sat, 01/06/2019 - 8:23pm

Often when looking for apps on Linux, one might search for something “cross-platform”. What does that mean? Typically it refers to running on more than one operating system, e.g. Windows, macOS, and GNU/Linux. But what are developers really targeting when they target GNU/Linux, since there’s such a diverse ecosystem of environments with their own behaviors? Is there really a “Linux Desktop” platform at all?

The Prerequisites

When developing an app for one platform, there are certain elements you can assume are present and can be relied on. These can be low-level things like the standard library, or user-facing things like the system tray. On Windows you can expect the Windows API or Win32, and on macOS you can expect Cocoa. With GNU/Linux, the only constants are the GNU userspace and the Linux kernel. You can’t assume systemd, GLib, Qt, or any of the other common elements will be there for every system.

What about freedesktop? Even then, not every desktop follows all of the specifications within freedesktop, such as the Secrets API or the system tray. So making assumptions based on targeting freedesktop as a platform will not work out.

To be a true platform, the ability to rely on elements being stable for all users is a must. By this definition, the “Linux Desktop” itself is not a platform, as it does not meet the criteria.

Making Platforms Out of “The Linux Desktop”

It is possible to build fully realized platforms on top of GNU/Linux. The best example of this is elementary OS. Developers targeting elementary OS know that different elements like Granite will be present for all users of elementary OS. They also know elements that won’t be there, such as custom themes or a system tray. Thus, they can make decisions and integrate things with platform specifics in mind. This ability leads to polished, well-integrated apps on the AppCenter and developers need not fear a distro breaking their app.

To get a healthy app development community for GNOME, we need to be able to have the same guarantees. Unfortunately, we don’t have that. Because GNOME is not shipped by upstream, downstreams take the base of GNOME we target and remove or change core elements. This can be the system stylesheet or something even more functional, like Tracker (our file indexer). By doing this, the versions of GNOME that reach users break the functionality or UX in our apps. Nobody can target GNOME if every instance of it can be massively different from another. Just as no one can truly target the “Linux Desktop” due to the differences in each environment.

How do we solve this, then? To start, the community idea of the “Linux Desktop” as a platform needs to be moved past. Once it’s understood that each desktop is a target that developers aim for, it will be easier for users to find what apps work best for their environment. That said, we need to have apps for them to find. Improving the development experience for various platforms will help developers in making well-integrated apps. Making sure they can safely make assumptions is fundamental, and I hope that we get there.

Emmanuele Bassi: More little testing

Planet GNOME - Sat, 01/06/2019 - 1:15pm

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release soon; definitely before Graphene 1.10 gets released.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

Georges Basile Stavracas Neto: Profiling GNOME Shell

Planet GNOME - Sat, 01/06/2019 - 1:08am

As of today, Mutter and GNOME Shell support Sysprof-based profiling.

Christian already wrote a fantastic piece about what happened to Sysprof during this cycle, and what it looks like now, so I’ll skip that.

Instead, let me focus on what I contributed the most: integrating Mutter and GNOME Shell with Sysprof.

Let’s start with a video:

Sysprof 3 profiling GNOME Shell in action

When it comes to drawing, GTK- and Clutter-based applications follow a settled drawing cycle:

  • Size negotiation: position screen elements and allocate sizes;
  • Paint: draw the actual pixels;
  • Pick: find the element below the pointer;

What you see in the video above is a visual representation of this cycle happening inside GNOME Shell, although a bit more detailed.

With it, we can see when and why a frame was missed; what was probably happening when it occurred; and, perhaps most importantly, we have actual metrics about how we’re performing.

Of course, there is room for improvement, but I’m sure this is already a solid first step. Even before landing, the profiling results provided us with various insights into potential improvements. Good development tools like this result in better choices.

Enjoy!

Srestha Srivastava

Planet GNOME - Mon, 27/05/2019 - 3:38pm
Contributing to GNOME Boxes
GNOME Boxes is used to view, access, and manage remote and virtual machines. Currently it can either do an express-installation on a downloaded ISO, or download an ISO and offer the option to express-install it.

I stumbled upon the project ‘Improve GNOME Boxes express-installations by adding support for tree-based installations’. It required knowledge of object-oriented programming, which I had studied in a course (with a project) just last semester. Even though I had primarily used Java for that project, it didn’t take me long to understand Vala. I quickly jumped into the GNOME Boxes IRC channel, introduced myself to the mentors (one of them being Felipe Borges), got the source code and built it. Initially it took me some time to understand how things worked, but the mentors were patient enough to answer even my dumbest queries. I then started working on a few bugs, which helped me understand the front-end part of the project. I’ve fixed around 6 issues so far, and am now working on understanding how the back-end works, mainly how we use libosinfo to get information about operating systems, hypervisors, and the (virtual) hardware devices they can support.
Happy Coding :)
