
Jussi Pakkanen: Enabling HTTPS is easy

Planet GNOME - Wed, 08/02/2017 - 9:27pm
To see how easy it is, I looked at how well it is supported in a bunch of free software blog aggregator sites.

The cobbler's children have no shoes.

Richard Hughes: New fwupd release, and why you should buy a Dell

Planet GNOME - Wed, 08/02/2017 - 2:20pm

This morning I released the first new release of fwupd on the 0.8.x branch. This has a number of interesting fixes, but more importantly adds the following new features:

  • Adds support for Intel Thunderbolt devices
  • Adds support for some Logitech Unifying devices
  • Adds support for Synaptics MST cascaded hubs
  • Adds support for the Altus-Metrum ChaosKey device
  • Adds Dell-specific functionality to allow other plugins to turn on TBT/GPIO

Mario Limonciello from Dell has worked really hard on this release, and I can say with conviction: If you want to support a hardware company that cares about Linux — buy a Dell. They seem to be driving the importance of Linux support into their partners and suppliers. I wish other vendors would do the same.

Alberto Garcia: QEMU and the qcow2 metadata checks

Planet GNOME - Wed, 08/02/2017 - 10:06am

When choosing a disk image format for your virtual machine, one of the factors to take into consideration is its I/O performance. In this post I’ll talk a bit about the internals of qcow2 and about one of the aspects that can affect its performance under QEMU: its consistency checks.

As you probably know, qcow2 is QEMU’s native file format. The first thing that I’d like to highlight is that this format is perfectly fine in most cases and its I/O performance is comparable to that of a raw file. When it isn’t, chances are that this is due to an insufficiently large L2 cache. In one of my previous blog posts I wrote about the qcow2 L2 cache and how to tune it, so if your virtual disk is too slow, you should go there first.
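
For reference, the cache can be enlarged per drive on the command line; a minimal sketch, where the 4 MB figure (4194304 bytes) is purely illustrative and not a recommendation from this post:

-drive file=hd.qcow2,l2-cache-size=4194304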

I also recommend Max Reitz and Kevin Wolf’s qcow2: why (not)? talk from KVM Forum 2015, where they talk about a lot of internal details and show some performance tests.

qcow2 clusters: data and metadata

A qcow2 file is organized into units of constant size called clusters. The cluster size defaults to 64KB, but a different value can be set when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G
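
You can verify the resulting cluster size with qemu-img info (output abridged):

$ qemu-img info hd.qcow2
image: hd.qcow2
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
cluster_size: 131072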

Clusters can contain either data or metadata. A qcow2 file grows dynamically and only allocates space when it is actually needed, so apart from the header there’s no fixed location for any of the data and metadata clusters: they can appear mixed anywhere in the file.

Here’s an example of what it looks like internally:

In this example we can see the most important types of clusters that a qcow2 file can have:

  • Header: this one contains basic information such as the virtual size of the image, the version number, and pointers to where the rest of the metadata is located, among other things.
  • Data clusters: the data that the virtual machine sees.
  • L1 and L2 tables: a two-level structure that maps the virtual disk that the guest can see to the actual location of the data clusters in the qcow2 file (a worked example of how much data one L2 table covers follows this list).
  • Refcount table and blocks: a two-level structure with a reference count for each data cluster. Internal snapshots use this: a cluster with a reference count >= 2 means that it’s used by other snapshots, and therefore any modifications require a copy-on-write operation.
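
As a rough worked example of how far one L2 table reaches (standard qcow2 arithmetic, not spelled out in this post): with the default 64KB clusters, each L2 table occupies one cluster and each entry takes 8 bytes, so:

entries per L2 table = 65536 bytes / 8 bytes per entry = 8192 entries
guest data mapped per L2 table = 8192 × 64KB = 512MB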

Metadata overlap checks

In order to detect corruption when writing to qcow2 images QEMU (since v1.7) performs several sanity checks. They verify that QEMU does not try to overwrite sections of the file that are already being used for metadata. If this happens, the image is marked as corrupted and further access is prevented.

Although in most cases these checks are innocuous, under certain scenarios they can have a negative impact on disk write performance. This depends a lot on the case, and I want to insist that in most scenarios it doesn’t have any effect. When it does, the general rule is that you’ll have more chances of noticing it if the storage backend is very fast or if the qcow2 image is very large.

In these cases, and if I/O performance is critical for you, you might want to consider tweaking the images a bit or disabling some of these checks, so let’s take a look at them. There are currently eight different checks. They’re named after the metadata sections that they check, and can be divided into the following categories:

  1. Checks that run in constant time. These are equally fast for all kinds of images and I don’t think they’re worth disabling.
    • main-header
    • active-l1
    • refcount-table
    • snapshot-table
  2. Checks that run in variable time but don’t need to read anything from disk.
    • refcount-block
    • active-l2
    • inactive-l1
  3. Checks that need to read data from disk. There is just one check here and it’s only needed if there are internal snapshots.
    • inactive-l2

By default all tests are enabled except for the last one (inactive-l2), because it needs to read data from disk.

Disabling the overlap checks

Tests can be disabled or enabled from the command line using the following syntax:

-drive file=hd.qcow2,overlap-check.inactive-l2=on
-drive file=hd.qcow2,overlap-check.snapshot-table=off

It’s also possible to select the group of checks that you want to enable using the following syntax:

-drive file=hd.qcow2,overlap-check.template=none
-drive file=hd.qcow2,overlap-check.template=constant
-drive file=hd.qcow2,overlap-check.template=cached
-drive file=hd.qcow2,overlap-check.template=all

Here, none means that no tests are enabled, constant enables all tests from group 1, cached enables all tests from groups 1 and 2, and all enables all of them.

As I explained in the previous section, if you’re worried about I/O performance then the checks that are probably worth evaluating are refcount-block, active-l2 and inactive-l1. I’m not counting inactive-l2 because it’s off by default. Let’s look at the other three:

  • inactive-l1: This is a variable length check because it depends on the number of internal snapshots in the qcow2 image. However its performance impact is likely to be negligible in all cases so I don’t think it’s worth bothering with.
  • active-l2: This check depends on the virtual size of the image, and on the percentage that has already been allocated. This check might have some impact if the image is very large (several hundred GBs or more). In that case one way to deal with it is to create an image with a larger cluster size (see the sketch after this list). This also has the nice side effect of reducing the amount of memory needed for the L2 cache.
  • refcount-block: This check depends on the actual size of the qcow2 file and it’s independent from its virtual size. This check is relatively expensive even for small images, so if you notice performance problems chances are that they are due to this one. The good news is that we have been working on optimizing it, so if it’s slowing down your VMs the problem might go away completely in QEMU 2.9.
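
To make both mitigations concrete, here is a sketch of my own (the image name and sizes are illustrative, not recommendations from this post): create the image with a larger cluster size, then disable only the refcount-block check at run time:

qemu-img create -f qcow2 -o cluster_size=2M huge.qcow2 800G
-drive file=huge.qcow2,overlap-check.refcount-block=off
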
Conclusion

The qcow2 consistency checks are useful to detect data corruption, but they can affect write performance.

If you’re unsure and you want to check it quickly, open an image with overlap-check.template=none and see for yourself, but remember again that this will only affect write operations. To obtain more reliable results you should also open the image with cache=none in order to perform direct I/O and bypass the page cache. I’ve seen performance increases of 50% and more, but whether you’ll see them depends a lot on your setup. In many cases you won’t notice any difference.
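
Putting those two suggestions together, a quick benchmarking run could use a line like this (remember it only affects write operations):

-drive file=hd.qcow2,cache=none,overlap-check.template=none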

I hope this post was useful to learn a bit more about the qcow2 format. There are other things that can help QEMU perform better, and I’ll probably come back to them in future posts, so stay tuned!

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the rest of the QEMU development team.

Michael Catanzaro: An Update on WebKit Security Updates

Planet GNOME - Wed, 08/02/2017 - 7:39am

One year ago, I wrote a blog post about WebKit security updates that attracted a fair amount of attention at the time. For a full understanding of the situation, you really have to read the whole thing, but the most important point was that, while WebKitGTK+ — one of the two WebKit ports present in Linux distributions — was regularly releasing upstream security updates, most Linux distributions were ignoring the updates, leaving users vulnerable to various security bugs, mainly of the remote code execution variety. At the time of that blog post, only Arch Linux and Fedora were regularly releasing WebKitGTK+ updates, and Fedora had only very recently begun doing so comprehensively.

Progress report!

So how have things changed in the past year? The best way to see this is to look at the versions of WebKitGTK+ in currently-supported distributions. The latest version of WebKitGTK+ is 2.14.3, which fixes 13 known security issues present in 2.14.2. Do users of the most popular Linux operating systems have the fixes?

  • Fedora users are good. Both Fedora 24 and Fedora 25 have the latest version, 2.14.3.
  • If you use Arch, you know you always have the latest stuff.
  • Ubuntu users rejoice: 2.14.3 updates have been released to users of both Ubuntu 16.04 and 16.10. I’m very pleased that Ubuntu has decided to take my advice and make an exception to its usual stable release update policy to ensure its users have a secure version of WebKit. I can’t give Ubuntu an A grade here because the updates tend to lag behind upstream by several months, but slow updates are much better than no updates, so this is undoubtedly a huge improvement. (Anyway, it’s hardly a bad idea to be cautious when releasing a big update with high regression potential, as is unfortunately the case with even stable WebKit updates.) But if you use the still-supported Ubuntu 14.04 or 12.04, be aware that these versions of Ubuntu cannot ever update WebKit, as it would require a switch to WebKit2, a major API change.
  • Debian does not update WebKit as a matter of policy. The latest release, Debian 8.7, is still shipping WebKitGTK+ 2.6.2. I count 184 known vulnerabilities affecting it, though that’s an overcount as we did not exclude some Mac-specific security issues from the 2015 security advisories. (Shipping ancient WebKit is not just a security problem, but a user experience problem too. Actually attempting to browse the web with WebKitGTK+ 2.6.2 is quite painful due to bugs that were fixed years ago, so please don’t try to pretend it’s “stable.”) Note that a secure version of WebKitGTK+ is available for those in the know via the backports repository, but this does no good for users who trust Debian to provide them with security updates by default without requiring difficult configuration. Debian testing users also currently have the latest 2.14.3, but you will need to switch to Debian unstable to get security updates for the foreseeable future, as testing is about to freeze.
  • For openSUSE users, only Tumbleweed has the latest version of WebKit. The current stable release, Leap 42.2, ships with WebKitGTK+ 2.12.5, which is coincidentally affected by exactly 42 known vulnerabilities. (I swear I am not making this up.) The previous stable release, Leap 42.1, originally released with WebKitGTK+ 2.8.5 and later updated to 2.10.7, but never past that. It is affected by 65 known vulnerabilities. (Note: I have to disclose that I told openSUSE I’d try to help out with that update, but never actually did. Sorry!) openSUSE has it a bit harder than other distros because it has decided to use SUSE Linux Enterprise as the source for its GCC package, meaning it’s stuck on GCC 4.8 for the foreseeable future, while WebKit requires GCC 4.9. Still, this is only a build-time requirement; it’s not as if it would be impossible to build with Clang instead, or a custom version of GCC. I would expect WebKit updates to be provided to both currently-supported Leap releases.
  • Gentoo has the latest version of WebKitGTK+, but only in testing. The latest version marked stable is 2.12.5, so this is a serious problem if you’re following Gentoo’s stable channel.
  • Mageia has been updating WebKit and released a couple security advisories for Mageia 5, but it seems to be stuck on 2.12.4, which is disappointing, especially since 2.12.5 is a fairly small update. The problem here does not seem to be lack of upstream release monitoring, but rather lack of manpower to prepare the updates, which is a typical problem for small distros.
  • The enterprise distros from Red Hat, Oracle, and SUSE do not provide any WebKit security updates. They suffer from the same problem as Ubuntu’s old LTS releases: the WebKit2 API change makes updating impossible. See my previous blog post if you want to learn more about that. (SUSE actually does have WebKitGTK+ 2.12.5 as well, but… yeah, 42.)

So results are clearly mixed. Some distros are doing well, others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates. Thanks Ubuntu!

To arrive at the above vulnerability totals, I just counted up the CVEs listed in WebKitGTK+ Security Advisories, so please do double-check my counting if you want. The upstream security advisories themselves are worth mentioning, as we have only been releasing these for two years now, and the first year was pretty rough when we lost our original security contact at Apple shortly after releasing the first advisory: you can see there were only two advisories in all of 2015, and the second one was huge as a result of that. But 2016 seems to have gone decently well. WebKitGTK+ has normally been releasing most security fixes even before Apple does, though the actual advisories and a few remaining fixes normally lag behind Apple by roughly a month or so. Big thanks to my colleagues at Igalia who handle this work.

Challenges ahead

There are still some pretty big problems remaining!

First of all, the distributions that still aren’t releasing regular WebKit updates should start doing so.

Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project. The good news is that Konstantin Tokarev has been working on a QtWebKit fork based on WebKitGTK+ 2.12, which is almost (but not quite yet) ready for use in distributions. I hope we are able to switch to use his project as the new upstream for QtWebKit in Fedora 26, and I’d encourage other distros to follow along. WebKitGTK+ 2.12 does still suffer from those 42 vulnerabilities, but this will be a big improvement nevertheless and an important stepping stone for a subsequent release based on the latest version of WebKitGTK+. (Yes, QtWebKit will be a downstream of WebKitGTK+. No, it will not use GTK+. It will work out fine!)

It’s also time to get rid of the old WebKitGTK+ 2.4 (“WebKit1”), which all distributions currently parallel-install alongside modern WebKitGTK+ (“WebKit2”). It’s very unfortunate that a large number of applications still depend on WebKitGTK+ 2.4 — I count 41 such packages in Fedora — but this old version of WebKit is affected by over 200 known vulnerabilities and really has to go sooner rather than later. We’ve agreed to remove WebKitGTK+ 2.4 and its dependencies from Fedora rawhide right after Fedora 26 is branched next month, so they will no longer be present in Fedora 27 (targeted for release in November). That’s bad for you if you use any of the affected applications, but fortunately most of the remaining unported applications are not very important or well-known; the most notable ones that are unlikely to be ported in time are GnuCash (which won’t make our deadline) and Empathy (which is ported in git master, but is not currently in a releasable state; help wanted!). I encourage other distributions to follow our lead here in setting a deadline for removal. The alternative is to leave WebKitGTK+ 2.4 around until no more applications are using it. Distros that opt for this approach should be prepared to be stuck with it for the next 10 years or so, as the remaining applications are realistically not likely to be ported so long as zombie WebKitGTK+ 2.4 remains available.

These are surmountable problems, but they require action by downstream distributions. No doubt some distributions will be more successful than others, but hopefully many distributions will be able to fix these problems in 2017. We shall see!

Michael Catanzaro: On Epiphany Security Updates and Stable Branches

Planet GNOME - Wed, 08/02/2017 - 7:09am

One of the advantages of maintaining a web browser based on WebKit, like Epiphany, is that the vast majority of complexity is contained within WebKit. Epiphany itself doesn’t have any code for HTML parsing or rendering, multimedia playback, or JavaScript execution, or anything else that’s actually related to displaying web pages: all of the hard stuff is handled by WebKit. That means almost all of the security problems exist in WebKit’s code and not Epiphany’s code. While WebKit has been affected by over 200 CVEs in the past two years, and those issues do affect Epiphany, I believe nobody has reported a security issue in Epiphany’s code during that time. I’m sure a large part of that is simply because only the bad guys are looking, but the attack surface really is much, much smaller than that of WebKit. To my knowledge, the last time we fixed a security issue that affected a stable version of Epiphany was 2014.

Well that streak has unfortunately ended; you need to make sure to update to Epiphany 3.22.6, 3.20.7, or 3.18.11 as soon as possible (or Epiphany 3.23.5 if you’re testing our unstable series). If your distribution is not already preparing an update, insist that it do so. I’m not planning to discuss the embarrassing issue here — you can check the bug report if you’re interested — but rather on why I made new releases on three different branches. That’s quite unlike how we handle WebKitGTK+ updates! Distributions must always update to the very latest version of WebKitGTK+, as it is not practical to backport dozens of WebKit security fixes to older versions of WebKit. This is rarely a problem, because WebKitGTK+ has a strict policy to dictate when it’s acceptable to require new versions of runtime dependencies, designed to ensure roughly three years of WebKit updates without the need to upgrade any of its dependencies. But new major versions of Epiphany are usually incompatible with older releases of system libraries like GTK+, so it’s not practical or expected for distributions to update to new major versions.

My current working policy is to support three stable branches at once: the latest stable release (currently Epiphany 3.22), the previous stable release (currently Epiphany 3.20), and an LTS branch defined by whatever’s currently in Ubuntu LTS and elementary OS (currently Epiphany 3.18). It was nice of elementary OS to make Epiphany its default web browser, and I would hardly want to make it difficult for its users to receive updates.

Three branches can be annoying at times, and it’s a lot more than is typical for a GNOME application, but a web browser is not a typical application. For better or for worse, the majority of our users are going to be stuck on Epiphany 3.18 for a long time, and it would be a shame to leave them completely without updates. That said, the 3.18 and 3.20 branches are very stable and only getting bugfixes and occasional releases for the most serious issues. In contrast, I try to backport all significant bugfixes to the 3.22 branch and do a new release every month or thereabouts.

So that’s why I just released another update for Epiphany 3.18, which was originally released in September 2015. Compare this to the long-term support policies of Chrome (which supports only the latest version of the browser, and only for six weeks) or Firefox (which provides nine months of support for an ESR release), and I think we compare quite favorably. (A stable WebKit series like 2.14 is only supported for six months, but that’s comparable to Firefox.) Not bad?

Vincent Bernat: Write your own terminal emulator

Planet Debian - Wed, 08/02/2017 - 12:54am

I was a happy user of rxvt-unicode until I got a laptop with a HiDPI display. Switching from a LoDPI to a HiDPI screen and back was a pain: I had to manually adjust the font size on all terminals or restart them.

VTE is a library to build a terminal emulator using the GTK+ toolkit, which handles DPI changes. It is used by many terminal emulators, like GNOME Terminal, evilvte, sakura, termit and ROXTerm. The library is quite straightforward and writing a terminal doesn’t take much time if you don’t need many features.

Let’s see how to write a simple one.

A simple terminal

Let’s start small with a terminal with the default settings. We’ll write that in C. Another supported option is Vala.

#include <vte/vte.h>

int main(int argc, char *argv[])
{
    GtkWidget *window, *terminal;

    /* Initialise GTK, the window and the terminal */
    gtk_init(&argc, &argv);
    terminal = vte_terminal_new();
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "myterm");

    /* Start a new shell */
    gchar **envp = g_get_environ();
    gchar **command = (gchar *[]){g_strdup(g_environ_getenv(envp, "SHELL")), NULL };
    g_strfreev(envp);
    vte_terminal_spawn_sync(VTE_TERMINAL(terminal),
                            VTE_PTY_DEFAULT,
                            NULL,       /* working directory */
                            command,    /* command */
                            NULL,       /* environment */
                            0,          /* spawn flags */
                            NULL, NULL, /* child setup */
                            NULL,       /* child pid */
                            NULL, NULL);

    /* Connect some signals */
    g_signal_connect(window, "delete-event", gtk_main_quit, NULL);
    g_signal_connect(terminal, "child-exited", gtk_main_quit, NULL);

    /* Put widgets together and run the main loop */
    gtk_container_add(GTK_CONTAINER(window), terminal);
    gtk_widget_show_all(window);
    gtk_main();
}

You can compile it with the following command:

gcc -O2 -Wall $(pkg-config --cflags --libs vte-2.91) term.c -o term

And run it with ./term:

More features

From here, you can have a look at the documentation to alter behavior or add more features. Here are three examples.

Colors

You can define the 16 basic colors with the following code:

#define CLR_R(x)   (((x) & 0xff0000) >> 16)
#define CLR_G(x)   (((x) & 0x00ff00) >>  8)
#define CLR_B(x)   (((x) & 0x0000ff) >>  0)
#define CLR_16(x)  ((double)(x) / 0xff)
#define CLR_GDK(x) (const GdkRGBA){ .red   = CLR_16(CLR_R(x)), \
                                    .green = CLR_16(CLR_G(x)), \
                                    .blue  = CLR_16(CLR_B(x)), \
                                    .alpha = 0 }

vte_terminal_set_colors(VTE_TERMINAL(terminal),
    &CLR_GDK(0xffffff),
    &(GdkRGBA){ .alpha = 0.85 },
    (const GdkRGBA[]){
        CLR_GDK(0x111111), CLR_GDK(0xd36265), CLR_GDK(0xaece91), CLR_GDK(0xe7e18c),
        CLR_GDK(0x5297cf), CLR_GDK(0x963c59), CLR_GDK(0x5E7175), CLR_GDK(0xbebebe),
        CLR_GDK(0x666666), CLR_GDK(0xef8171), CLR_GDK(0xcfefb3), CLR_GDK(0xfff796),
        CLR_GDK(0x74b8ef), CLR_GDK(0xb85e7b), CLR_GDK(0xA3BABF), CLR_GDK(0xffffff)
    }, 16);

While you can’t see it on the screenshot1, this also enables background transparency.

Miscellaneous settings

VTE comes with many settings to change the behavior of the terminal. Consider the following code:

vte_terminal_set_scrollback_lines(VTE_TERMINAL(terminal), 0);
vte_terminal_set_scroll_on_output(VTE_TERMINAL(terminal), FALSE);
vte_terminal_set_scroll_on_keystroke(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_rewrap_on_resize(VTE_TERMINAL(terminal), TRUE);
vte_terminal_set_mouse_autohide(VTE_TERMINAL(terminal), TRUE);

This will:

  • disable the scrollback buffer,
  • not scroll to the bottom on new output,
  • scroll to the bottom on keystroke,
  • rewrap content when the terminal size changes, and
  • hide the mouse cursor when typing.

Update the window title

An application can change the window title using XTerm control sequences (for example, with printf "\e]2;${title}\a"). If you want the actual window title to reflect this, you need to define this function:

static gboolean on_title_changed(GtkWidget *terminal, gpointer user_data)
{
    GtkWindow *window = user_data;
    gtk_window_set_title(window,
        vte_terminal_get_window_title(VTE_TERMINAL(terminal))?:"Terminal");
    return TRUE;
}

Then, connect it to the appropriate signal, in main():

g_signal_connect(terminal, "window-title-changed",
    G_CALLBACK(on_title_changed), GTK_WINDOW(window));

Final words

I don’t need much more as I am using tmux inside each terminal. In my own copy, I have also added the ability to complete a word using ones from the current window or other windows (also known as dynamic abbrev expansion). This requires implementing a terminal daemon to handle all terminal windows with one process, similar to urxvtcd.

While writing a terminal “from scratch”2 suits my needs, it may not be worth it. evilvte is quite customizable and can be lightweight. Consider it as a first alternative. Honestly, I don’t remember why I didn’t pick it.

UPDATED: evilvte has not seen an update since 2014. Its GTK+3 support is buggy. It doesn’t support the latest versions of the VTE library. Therefore, it’s not a good idea to use it.

You should also note that the primary goal of VTE is to be a library to support GNOME Terminal. Notably, if a feature is not needed for GNOME Terminal, it won’t be added to VTE. If it already exists, it will likely be deprecated and removed.

  1. Transparency is handled by the composite manager (Compton, in my case). 

  2. For some definition of “scratch”, since the hard work is handled by VTE.

Carl Chenet: The Gitlab database incident and the Backup Checker project

Planet Debian - Wed, 08/02/2017 - 12:00am

The Gitlab.com database incident of 2017/01/31 and the resulting data loss reminded everyone (at least for the next few days) how easy it is to lose data, even when you think all your systems are safe.

Being really interested by the process of backing up data, I read with interest the report (kudos to the Gitlab company for being so transparent about it) and I was soooo excited to find the following sentence:

Regular backups seem to also only be taken once per 24 hours, though team-member-1 has not yet been able to figure out where they are stored. According to team-member-2 these don’t appear to be working, producing files only a few bytes in size.

Whoa, guys! I’m so sorry for you about the data loss, but from my point of view I was so excited to find a big FOSS company publicly admitting and communicating about a perfect use case for the Backup Checker project, a piece of Free Software I’ve been writing for the last few years.

Data loss: nobody cares before, everybody cries after

Usually people don’t care about backups. It’s serious business for web hosters and the backup teams of big companies, but otherwise, nobody cares.

Usually everybody agrees that backups are important, but few people make them or install an automated system to create them, and nobody verifies they are usable before the day they are needed. The reason is obvious: it’s totally boring, and in some cases, e.g. for large archives, difficult.

Because verifying backups is boring for humans, I launched the Backup Checker project in order to automatize this task.

Backup Checker offers a wide range of features, checking lots of different archive types (tar.{gz,bz2,xz}, zip, trees of files) and offering lots of different tests (hash sums, size {equal, smaller/greater than}, Unix rights, …). Have a look at the official documentation for an exhaustive list of features and possible tests.

Automatize the controls of your backups with Backup Checker

Checking your backups means describing in a configuration file how a backup should look, e.g. a gzipped database dump. You usually know roughly what size the archive is going to be, and what the owner and the group owner should be.

Even easier, with Backup Checker you can generate this list of criteria from an actual archive, then remove unneeded criteria to create a template you can re-use for different kinds of archives.

Ok, 2 minutes of your time for a real-world example: I use an existing database sql dump in a tar.gz archive to automatically create the list describing this backup:

$ backupchecker -G database-dump.tar.gz
$ cat database-dump.list
[archive]
mtime| 1486480274.2923253

[files]
database.sql| =7854803 uid|1000 gid|1000 owner|chaica group|chaica mode|644 type|f mtime|1486480253.0

Now, just remove the overly precise parameters from this list to get a backup template. Here is a possible result:

[files]
database.sql| >6m uid|1000 gid|1000 mode|644 type|f

We define here a template for the archive, meaning that the database.sql file in the archive should have a size greater than 6 megabytes, be owned by the user with the uid of 1000 and the group with a gid of 1000, and should have mode 644 and be a regular file. In order to use a template instead of the complete list, you also need to remove the sha512 from the .conf file.
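
For reference, here is a sketch of what the accompanying .conf file could look like — the field names below are assumed from Backup Checker’s documentation rather than taken from this article, so compare against the file that -G actually generated for you:

[main]
name=database-dump
type=archive
path=database-dump.tar.gz
files_list=database-dump.list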

Pretty easy, hmm? Ok, just for fun, let’s replicate the part of the Gitlab.com database incident mentioned above and write an empty sql dump inside an archive:

$ touch /tmp/database.sql && \
  tar zcvf /tmp/database-dump.tar.gz /tmp/database.sql && \
  cp /tmp/database-dump.tar.gz .

Now we launch Backup Checker with the previously created template. If you didn’t change the name of the database-dump.list file, the command should simply be:

$ backupchecker -C database-dump.conf
$ cat a.out
WARNING:root:1 file smaller than expected while checking /tmp/article-backup-checker/database-dump.tar.gz:
WARNING:root:database.sql size is 0. Should have been bigger than 6291456.

The automatized controls of Backup Checker trigger a warning in the log file. The empty sql dump has been identified inside the archive.

A step further

As you could read in this article, verifying some of your backups is not a time-consuming task, given that you have a FOSS project dedicated to this task, with an easy way to create a template of your backups and to use it.

This article provided a really simple example of such a use case; Backup Checker has lots more features to offer when verifying your backups. Read the official documentation for more complete descriptions of the available possibilities.

Data loss, especially for projects storing user data, is always a terrible event in the life of an organization. Let’s try to learn from mistakes that could happen to anyone and build better backup systems.

More information about the Backup Checker project

Craig Small: WordPress 4.7.2

Planet Debian - Tue, 07/02/2017 - 9:53pm

When WordPress originally announced their latest security update, there were three security fixes. While all security updates can be serious, they didn’t seem too bad. Shortly after, they updated their announcement with a fourth and more serious security problem.

I have looked after the Debian WordPress package for a while. This is the first time I have heard of people actually having their sites hacked almost as soon as this vulnerability was announced.

If you are running WordPress 4.7 or 4.7.1, your website is vulnerable and there are bots out there looking for it. You should immediately upgrade to 4.7.2 (or to a later 4.7.x version, if one exists). There are now updated Debian wordpress 4.7.2 packages for unstable, testing and stable backports.

For stable, you are on a patched version of 4.1, which doesn’t have this specific vulnerability (it was introduced in 4.7), but you should be using 4.1+dfsg-1+deb8u12, which has the fixes found in 4.7.1 ported back to the 4.1 code.
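
For jessie users tracking backports, pulling in the update boils down to something like this sketch (it assumes a jessie-backports entry is already in your sources.list):

$ sudo apt-get update
$ sudo apt-get install -t jessie-backports wordpress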

Bits from Debian: DebConf17: Call for Proposals

Planet Debian - Tue, 07/02/2017 - 9:00pm

The DebConf Content team would like to Call for Proposals for the DebConf17 conference, to be held in Montreal, Canada, from August 6 through August 12, 2017.

You can find this Call for Proposals in its latest form at: https://debconf17.debconf.org/cfp

Please refer to this URL for updates on the present information.

Submitting an Event

Submit an event proposal and describe your plan. Please note, events are not limited to traditional presentations or informal sessions (BoFs). We welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be beneficial to the Debian community.

Please include a short title, suitable for a compact schedule, and an engaging description of the event. You should use the field "Notes" to provide us information such as additional speakers, scheduling restrictions, or any special requirements we should consider for your event.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (like workshops) could have different durations. Please choose the most suitable duration for your event and explain any special requests.

You will need to create an account on the site to submit a talk. We'd encourage Debian account holders (e.g. DDs) to use Debian SSO when creating an account, but this isn't required for everybody; you can sign up with an e-mail address and password.

Timeline

The first batch of accepted proposals will be announced in April. If you depend on having your proposal accepted in order to attend the conference, please submit it as soon as possible so that it can be considered during this first evaluation period.

All proposals must be submitted before Sunday 4 June 2017 to be evaluated for the official schedule.

Topics and Tracks

Though we invite proposals on any Debian or FLOSS related subject, we have some broad topics on which we encourage people to submit proposals, including:

  • Blends
  • Debian in Science
  • Cloud and containers
  • Social context
  • Packaging, policy and infrastructure
  • Embedded
  • Systems administration, automation and orchestration
  • Security

You are welcome to either suggest more tracks, or become a coordinator for any of them; please refer to the Content Tracks wiki page for more information on that.

Code of Conduct

Our event is covered by a Code of Conduct designed to ensure everyone's safety and comfort. The code applies to all attendees, including speakers and the content of their presentations. For more information, please see the Code on the Web, and do not hesitate to contact us at content@debconf.org if you have any questions or are unsure about certain content you'd like to present.

Video Coverage

Providing video of sessions amplifies DebConf achievements and is one of the conference goals. Unless speakers opt-out, official events will be streamed live over the Internet to promote remote participation. Recordings will be published later under the DebConf license, as well as presentation slides and papers whenever available.

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsor Savoir-Faire Linux. DebConf17 is still accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

In case of any questions, or if you wanted to bounce some ideas off us first, please do not hesitate to reach out to us at content@debconf.org.

We hope to see you in Montreal!

The DebConf team

Jonathan McDowell: GnuK on the Maple Mini

Planet Debian - Tue, 07/02/2017 - 7:34pm

Last weekend, as a result of my addiction to buying random microcontrollers to play with, I received some Maple Minis. I bought the Baite clone direct from AliExpress - so just under £3 each including delivery. Not bad for something that’s USB capable, is based on an ARM and has plenty of IO pins.

I’m not entirely sure what my plan is for the devices, but as a first step I thought I’d look at getting GnuK up and running on it. Only to discover that chopstx already has support for the Maple Mini and it was just a matter of doing a ./configure --vidpid=234b:0000 --target=MAPLE_MINI --enable-factory-reset ; make. I’d hoped to install via the DFU bootloader already on the Mini but ended up making it unhappy so used SWD by following the same steps with OpenOCD as for the FST-01/BusPirate. (SWCLK is D21 and SWDIO is D22 on the Mini). Reset after flashing and the device is detected just fine:

usb 1-1.1: new full-speed USB device number 73 using xhci_hcd
usb 1-1.1: New USB device found, idVendor=234b, idProduct=0000
usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-1.1: Product: Gnuk Token
usb 1-1.1: Manufacturer: Free Software Initiative of Japan
usb 1-1.1: SerialNumber: FSIJ-1.2.3-87155426

And GPG is happy:

$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.3-87155426:0
Application ID ...: D276000124010200FFFE871554260000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 87155426
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

While GnuK isn’t the fastest OpenPGP smart card implementation this certainly seems to be one of the cheapest ways to get it up and running. (Plus the fact that chopstx already runs on the Mini provides me with a useful basis for other experimentation.)

Bastian Ilsø Hougaard: FOSDEM 2017 Day 3: Talks & Chats

Planet GNOME - Tue, 07/02/2017 - 6:12pm


Silent morning at the booths in building K (CC-BY-SA 3.0).

Today I got up early, going with Andreas to the venue and arriving at 8.30 AM. He was going there to open the Open Source Design room; I was going there to open the GNOME booth. After the shift I decided to wander around to collect stickers and speak to various projects at their booths.


Emiliano at the LibreOffice booth (CC-BY-SA 3.0).

The LibreOffice booth was right next to ours, so I decided to stop by. LibreOffice had just released version 5.3, which among other new features includes a renewed user interface. LibreOffice is also making progress on integrating with GTK+3, although I unfortunately missed the talk they had about that the day before. In recent years a new flavor of LibreOffice has also arrived, namely LibreOffice Online. This project makes it possible to deploy your own collaborative document editing infrastructure.


Team Coala at FOSDEM (CC-BY-SA 3.0).

At the Coala booth, I met Lasse, whom I also know via the GNOME community. Coala is a type of meta code-analysis software. Currently they are reworking the internals, ultimately aiming to simplify how the code analysis is performed.


Jobs corner located in Building H. (CC-BY-SA 3.0).

My experience at all three FOSDEM conferences I have attended is that they are a good place to network and meet new faces. One thing I don't recall seeing at previous FOSDEMs was job postings. There was a very long wall and table dedicated to letting individuals and organizations advertise jobs, everything from part-time system administrators and DevOps to full-time software engineers or project managers. Practical!


Stickers! (CC-BY-SA 3.0).

..and that was the end of my sticker collecting journey. Now I’ve got some, ready to be put on the dorm door at home. :-)

Talks

The rest of the day went to watching talks. There were very long lines of people trying to get into many of the rooms.


People standing in line to the “Decentralized Internet” room (CC-BY-SA 3.0).

In the end I went to the open source design room, in which I stayed for the rest of the day. This being an open source conference, many of the talks at FOSDEM are focused on software engineering; the open source design room is the exception. It’s a small room, but there was good space available and I could sit down and do a little work in the meantime.


The Open Source Design room (CC-BY-SA 3.0).

What I really like about this room is that it is arranged by the open source design community. It feels very unified that the room directly represents a community, not just a topic. Open source design has its own repository with assets, its own forum etc., and it represents designers who do their work in many different open source projects. Many of the talks reflected on the methodology we use to do design in open source, and in particular on how we can approach user research to inform ourselves when designing. A speaker named Miroslav Mazel spoke about the challenges of conducting user research using local volunteers; one particular difficulty he explained is how to keep up the volunteers’ interest in conducting it. Andreas was also there to speak about his experience conducting user interviews to inform his work on GNOME Maps. Including the user in the design process helped to recognize new use cases when designing transit routing in GNOME Maps.


Andreas answering questions during his speech “Interviews as user research” (CC-BY-SA 3.0).

Matthias Clasen and Emel spoke about the design of GNOME Recipes, a new application they are working on for GNOME’s upcoming 20th Anniversary. I think the application looks very promising and am definitely interested in submitting some more recipes!


Emel explaining the design of GNOME Recipes (CC-BY-SA 3.0).

Finally Jan, a designer on Nextcloud, spoke about getting more designers involved in open source. IT is after all not only about software engineering; the technology has to be used by people. So design matters, and there are many projects in dire need of more designers. The open source design room concluded with project pitches: developers of various open source projects each had three minutes to advertise their project and make a call for design participation. I really liked this initiative! It’s hard to get started in many open source projects, especially if your role is not that of a software engineer. I hope all the developers who stood up and advertised their project succeeded in reaching out to interested designers. :-)

Home

Monday, we left Belgium. Although I left with an upset stomach and a cold, all in all I did have a really good time. Maybe we will meet again at Open Source Days 2017, foss-north 2017 or GUADEC 2017?

Olivier Berger: Making Debian stable/jessie images for OpenStack with bootstrap-vz and cloud-init

Planet Debian - Tue, 07/02/2017 - 6:09pm

I’m investigating the creation of VM images for different virtualisation solutions.

Among the target platforms is a desktop-as-a-service platform based on an OpenStack public cloud.

We’ve been working with bootstrap-vz for creating VMs for Vagrant+VirtualBox so I wanted to test its use for OpenStack.

There are already pre-made images available, including official Debian ones, but I like to be able to re-create things instead of depending on some external magic (which also means being able to optimize, customize and avoid potential MitM attacks, of course).

It appears that bootstrap-vz can be used with cloud-init provided that some bits of config are specified.

In particular the cloud_init plugin of bootstrap-vz requires a metadata_source set to “NoCloud, ConfigDrive, OpenStack, Ec2“. Note we explicitely spell it ‘OpenStack‘ and not ‘Openstack‘ as was mistakenly done in the default Debian cloud images (see https://bugs.debian.org/854482).

The following snippet of manifest provides the necessary bits :

---
name: debian-{system.release}-{system.architecture}-{%Y}{%m}{%d}
provider:
  name: kvm
  virtio_modules:
  - virtio_pci
  - virtio_blk
bootstrapper:
  workspace: /target
  # create or reuse a tarball of packages
  tarball: true
system:
  release: jessie
  architecture: amd64
  bootloader: grub
  charmap: UTF-8
  locale: en_US
  timezone: UTC
volume:
  backing: raw
  partitions:
    #type: gpt
    type: msdos
    root:
      filesystem: ext4
      size: 4GiB
    swap:
      size: 512MiB
packages:
  # change if another mirror is closer
  mirror: http://ftp.fr.debian.org/debian/
plugins:
  root_password:
    password: whatever
  cloud_init:
    username: debian
    # Note we explicitely spell it 'OpenStack' and not 'Openstack' as done in the
    # default Debian cloud images (see https://bugs.debian.org/854482)
    metadata_sources: NoCloud, ConfigDrive, OpenStack, Ec2
  # admin_user:
  #   username: Administrator
  #   password: Whatever
  minimize_size:
    # reduce the size by around 250 Mb
    zerofree: true
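
Building the image is then a single invocation; a sketch, assuming the manifest above was saved as openstack-jessie.yml:

$ sudo bootstrap-vz --debug openstack-jessie.yml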

I’ve tested this with the bootstrap-vz version in stretch/testing (0.9.10+20170110git-1) for creating jessie/stable images, which were booted on the OVH OpenStack public cloud. YMMV.

Hope this helps

Emmanuele Bassi: This week in GTK+ – 33

Planet GNOME - Tue, 07/02/2017 - 3:20pm

The past two weeks we’ve had DevConf and FOSDEM back to back, so the development slowed down a bit. Expect it to pick up again, now that we’re close to the GNOME 3.24 release.

In these last two weeks, the master branch of GTK+ has seen 34 commits, with 20973 lines added and 21593 lines removed.

Planning and status

Notable changes

On the master branch:

  • Timm Bäder removed gtk_widget_class_list_style_properties() in the continuing effort to deprecate the style properties inside GtkWidget and replace them with CSS properties
  • Timm also moved some of the state used only by GtkToggleButton subclasses into those types
  • William Hua improved the Mir GDK backend for proper positioning of menus

Bugs fixed
  • 777547 – Notebook arrow icon wrong color after closing final tab
  • 773686 – Software when launched shows in dash with wrong icon, name and menu
  • 775864 – getting-started: typo tie->the
  • 778009 – menu drawn on top of menubar in Fedora

Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

Sven Hoexter: Dell Latitude E7470 hold and mark with upper left touchpad button

Planet Debian - Tue, 07/02/2017 - 12:55pm

Recently some of my coworkers and I experienced an issue with using the upper left touchpad button on our Dell Latitude E7470 and similar laptops (E5xxx from the current generation). Some time in January we could no longer hold down this button and select text with the touchpad. Using the left button below the touchpad still worked. This hit my coworker running Fedora and myself running Debian/stretch, so I first thought that it was likely a libinput issue (same version in Debian/stretch and Fedora, and I had recently pulled that in as an update), somehow blacklisting the upper left key because it's connected to the trackpoint. So I filed #99594 upstream. While this was not very helpful at first, and according to Peter very unlikely to be related to libinput, another coworker using Debian/jessie found this issue hit him when he upgraded the backports kernel in use from 4.8 to 4.9. That finally led to the conclusion that it's a bug in the Linux alps driver, which is already fixed in 4.10 and probably 4.9.6.

Until the Debian kernel team pulls in a fresh 4.9 point release I'm using 4.10-rc6 from experimental. For Debian/jessie + backports kernel users it might be more convenient to just stay on 4.8 in case this issue annoys you.
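
For the record, installing it looks roughly like the following sketch — the exact image package name depends on what is currently in experimental, so check first:

$ sudo apt-get update
$ sudo apt-get install -t experimental linux-image-4.10.0-rc6-amd64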

Kudos to Peter, Benjamin, TW and WW for the help in locating the origin of this issue!

Lessons learned:

  • I should've started with the painful downgrade of xorg and libinput via snapshot.d.o before opening the bug report.
  • A lot more of the touchpad related hardware support is nowadays in the kernel and not in the xorg layer. Either that was just my personal historic misunderstanding, or it was different 10 years ago.
  • There is an interesting set of slides from Benjamin related to debugging input device issues.

Junichi Uekawa: According to annual health check my weight has not increased the last year.

Planet Debian - Tue, 07/02/2017 - 12:36pm
According to my annual health check, my weight has not increased over the last year. Hopefully that's because of going to the gym.

Jonas Danielsson: Maps at FOSDEM

Planet GNOME - Tue, 07/02/2017 - 7:22am
I went to FOSDEM again this year, my fourth year running. I go with a great group of friends and it is starting to become quite the tradition.

Maps meeting

FOSDEM lines up pretty well with the GNOME release cycle, in that after the conference we have about a month to get the last stuff in before the next 6-month development cycle comes to an end. With that in mind we had a quick and informal Maps meeting on what the immediate priorities were for the release and what we wanted to do after that.

Transit routing

We want to merge Marcus’ transit-routing branch this cycle. This will not add anything if there is no OpenTripPlanner server available, but our plan is to be able to add one to our service file so this can be turned on if we get some sponsorship or in any other way manage to solve the infrastructure needs. This will also be a way of disabling the functionality if we lose infrastructure, such as with our MapQuest tiles previously.
Geocoding / search as you type

We now have a Mapbox account. We could use the Mapbox geocoding API instead of the Nominatim one we currently use, and with that we could achieve search-as-you-type functionality. The timing is right for a switch like this, since Collabora recently landed a patch bomb on geocode-glib to make it handle custom backends through an interface. So we could write a Mapbox interface in Maps.
I did some prototyping with this during some FOSDEM talks and the (buggy) result can be seen in the video below.

One issue with using the Mapbox geocoding service, which I do not yet know if we can solve, is that there does not appear to be a link between the id you get for a resulting place and the OpenStreetMap id. This makes it really hard for us to support editing the nodes you find.
Tile styles

Also, since we have a Mapbox account it would be possible for us to make our own styles: for instance a high-contrast style, a custom print style or a general GNOME style. This is a daunting task, but if anyone feels up for it, please let us know.
Mapbox GL Native

Thiago Santos from Mapbox held a talk about Mapbox GL Native, a hardware-accelerated map rendering engine. It is written in C++14 and has recently been ported to Qt. Thiago talked about what is needed to port Mapbox GL Native to new platforms, and specifically called out GTK+ and GNOME Maps, saying that it should be possible to make Mapbox GL Native work with our infrastructure.
Mapbox have written a blogpost outlining what needs to be true about a platform for Mapbox GL Native to be ported to it. Porting Mapbox GL Native to GLib land might be a nice GSoC or Outreachy project for GNOME/GTK+.

Arun Raghavan: Stricter JSON parsing with Haskell and Aeson

Planet GNOME - Tue, 07/02/2017 - 6:23am

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job
  { jobInputUrl :: String
  , jobPriority :: Int
  , ...
  } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. This is followed by a few more lines to hook up GET and POST handlers; we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.
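
As a minimal sketch of that wiring (the endpoint paths, handler stubs and port below are my own illustration, not the actual service):

{-# LANGUAGE DataKinds     #-}
{-# LANGUAGE TypeOperators #-}

import Data.Proxy (Proxy(..))
import Network.Wai.Handler.Warp (run)
import Servant

-- A GET endpoint listing jobs and a POST endpoint creating one.
type JobAPI = "jobs" :> Get '[JSON] [Job]
         :<|> "jobs" :> ReqBody '[JSON] Job :> Post '[JSON] Job

server :: Server JobAPI
server = getJobs :<|> postJob
  where
    getJobs = return []       -- stub: fetch jobs from storage
    postJob job = return job  -- stub: persist the job

main :: IO ()
main = run 8080 (serve (Proxy :: Proxy JobAPI) server)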

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

{
  "jobInputUrl": "http://arunraghavan.net",
  "jobPriority": 100,
  "junkField": "junkValue"
}

This behaviour is undesirable in use-cases such as mine — if the client is sending fields we don’t understand, I’d like the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter so that it catches extraneous fields is just a little more involved. I didn’t find how this could be done in a single place on the Internet, so here’s the best I could do:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import Data.Data
import Data.HashMap.Strict (keys)
import Data.List (sort)
import Data.Text (unpack)
import GHC.Generics

data Job = Job
  { jobInputUrl :: String
  , jobPriority :: Int
  , ...
  } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
      then return job
      else fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
        let objKeys   = sort . fmap unpack . keys
            recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
        in objKeys o == recFields d
      keysMatchRecords _ _ = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.
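
As a quick usage sketch of my own (assuming the stricter instance above and OverloadedStrings for the ByteString literal), the junk-field input from earlier is now rejected:

>>> eitherDecode "{\"jobInputUrl\": \"http://arunraghavan.net\", \"jobPriority\": 100, \"junkField\": \"junkValue\"}" :: Either String Job
Left "Error in $: extraneous keys in input"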

Of course, I’m quite new to the Haskell world so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the mean time, maybe this will be useful to others facing a similar problem.

Update: I’ve fixed parseJSON to properly use fieldLabelModifier from the default options, so that comparison actually works when you’re not using Aeson‘s default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

Sean Whitton: reclaiming conversation

Planet Debian - Tue, 07/02/2017 - 3:52am

On Friday night I attended a talk by Sherry Turkle called “Reclaiming Conversation: The Power of Talk in a Digital Age”. Here are my notes.

Turkle is an anthropologist who interviews people from different generations about their communication habits. She has observed cross-generational changes thanks to (a) the proliferation of instant messaging apps such as WhatsApp and Facebook Messenger; and (b) fast web searching from smartphones.

Her main concern is that conversation is being trivialised. Consider six or seven college students eating a meal together. Turkle’s research has shown that the etiquette among such a group has shifted such that so long as at least three people are engaged in conversation, others at the table feel comfortable turning their attention to their smartphones. But then the topics of verbal conversation will tend away from serious issues – you wouldn’t talk about your mother’s recent death if anyone at the table was texting.

There are also studies that purport to show that the visibility of someone’s smartphone causes them to take a conversation less seriously. The hypothesis is that the smartphone is a reminder of all the other places they could be, instead of with the person they are with.

A related cause of the trivialisation of conversation is that people are far less willing to make themselves emotionally vulnerable by talking about serious matters. People have a high degree of control over the interactions that take place electronically (they can think about their reply for much longer, for example). Texting is not open-ended in the way a face-to-face conversation is. People are unwilling to give up this control, so they choose texting over talking.

What is the upshot of these two respects in which conversation is being trivialised? Firstly, there are psycho-social effects on individuals, because people are missing out on opportunities to build relationships. But secondly, there are political effects. Disagreeing about politics immediately makes a conversation quite serious, and people just aren’t having those conversations. This contributes to polarisation.

Note that this is quite distinct from the problems of fake news and the bubbling effects of search engine algorithms, including Facebook’s news feed. It would be much easier to tackle fake news if people talked about it with people around them who would be likely to disagree with them.

Turkle understands connection as a capacity for solitude and also for conversation. The drip feed of information from the Internet prevents us from using our capacity for solitude. But then we fail to develop a sense of self. Then when we finally do meet other people in real life, we can’t hear them because we just use them to try to establish a sense of self.

Turkle wants us to be more aware of the effects that our smartphones can have on conversations. People very rarely take their phone out during a conversation because they want to escape from that conversation. Instead, they think that the phone will contribute to that conversation, by sharing some photos, or looking up some information online. But once the phone has come out, the conversation almost always takes a turn for the worse. If we were more aware of this, we would have access to deeper interactions.

A further respect in which the importance of conversation is being downplayed is in the relationships between teachers and students. Students would prefer to get answers by e-mail than build a relationship with their professors, but of course they are expecting far too much of e-mail, which can’t teach them in the way interpersonal contact can.

All the above is, as I said, cross-generational. Something that is unique to millennials and below is that we seek validation for the way that we feel using social media. A millennial is not sure how they feel until they send a text or make a broadcast (this makes them awfully dependent on others). Older generations feel something, and then seek out social interaction (presumably to share, but not in the social media sense of ‘share’).

What does Turkle think we can do about all this? She had one positive suggestion and one negative suggestion. In response to student or colleague e-mails asking for something that ought to be discussed face-to-face, reply “I’m thinking.” And you’ll find they come to you. She doesn’t want anyone to write “empathy apps” in response to her findings. For once, more tech is definitely not the answer.

Turkle also made reference to the study reported here, here and here.

Joachim Breitner: Why prove programs equivalent when your compiler can do that for you?

Planet Debian - Mar, 07/02/2017 - 1:38pd

Last week, while working on CodeWorld, via a sequence of yak shavings, I ended up creating a nicely small library that provides Control.Applicative.Succs, a new applicative functor. And because I am trying to keep my Haskell karma good [1], I wanted to actually prove that my code fulfills the Applicative and Monad laws.
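
For context, here is a minimal sketch of the functor in question, reconstructed from the proof steps quoted below (the real Control.Applicative.Succs may differ in details):

data Succs a = Succs a [a]   -- a value together with its possible successors

getCurrent :: Succs a -> a
getCurrent (Succs x _) = x

getSuccs :: Succs a -> [a]
getSuccs (Succs _ xs) = xs

instance Functor Succs where
    fmap f (Succs x xs) = Succs (f x) (map f xs)

instance Applicative Succs where
    pure x = Succs x []
    Succs f fs <*> Succs x xs = Succs (f x) (map ($ x) fs ++ map f xs)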

This led me to write long comments in my code, filled with lines like this:

The second Applicative law:

  pure (.) <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (.) [] <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (u .) (map (.) us) <*> Succs v vs <*> Succs w ws
= Succs (u . v) (map ($v) (map (.) us) ++ map (u .) vs) <*> Succs w ws
= Succs (u . v) (map (($v).(.)) us ++ map (u .) vs) <*> Succs w ws
= Succs ((u . v) w) (map ($w) (map (($v).(.)) us ++ map (u .) vs) ++ map (u.v) ws)
= Succs ((u . v) w) (map (($w).($v).(.)) us ++ map (($w).(u.)) vs ++ map (u.v) ws)
= Succs (u (v w)) (map (\u -> u (v w)) us ++ map (\v -> u (v w)) vs ++ map (\w -> u (v w)) ws)
= Succs (u (v w)) (map ($(v w)) us ++ map u (map ($w) vs ++ map v ws))
= Succs u us <*> Succs (v w) (map ($w) vs ++ map v ws)
= Succs u us <*> (Succs v vs <*> Succs w ws)

Honk if you have done something like this before!

I proved all the laws, but I was very unhappy. I have a PhD on something about Haskell and theorem proving. I have worked with Isabelle, Agda and Coq. Both Haskell and theorem proving are decades old. And yet, here I sit, tediously writing manual proofs by hand. Is this really the best we can do?

Of course I could have taken my code, rewritten it in, say, Agda, and proved it correct there. But (right now) I don’t care about Agda code. I care about my Haskell code! I don’t want to write it twice, worry about copying mistakes and mismatches in semantics, and have external proofs to maintain. Instead, I want to prove where I code, and have the proofs checked together with my code!

Then it dawned on me that this is, to some extent, possible. The Haskell compiler comes with a sophisticated program transformation machinery, which is meant to simplify and optimize code. But it can also be used to prove Haskell expressions to be equivalent! The idea is simple: take two expressions, run both through the compiler’s simplifier, and check whether the results are the same. If they are, then the expressions are, as far as the compiler is concerned, equivalent.

A handful of hours later, I was able to write proof tasks like

app_law_2 = (\ a b (c::Succs a) -> pure (.) <*> a <*> b <*> c) === (\ a b c -> a <*> (b <*> c))

and others into my source file, and the compiler would tell me happily:

[1 of 1] Compiling Successors ( Successors.hs, Successors.o )
GHC.Proof: Proving getCurrent_proof1 …
GHC.Proof: Proving getCurrent_proof2 …
GHC.Proof: Proving getCurrent_proof3 …
GHC.Proof: Proving ap_star …
GHC.Proof: Proving getSuccs_proof1 …
GHC.Proof: Proving getSuccs_proof2 …
GHC.Proof: Proving getSuccs_proof3 …
GHC.Proof: Proving app_law_1 …
GHC.Proof: Proving app_law_2 …
GHC.Proof: Proving app_law_3 …
GHC.Proof: Proving app_law_4 …
GHC.Proof: Proving monad_law_1 …
GHC.Proof: Proving monad_law_2 …
GHC.Proof: Proving monad_law_3 …
GHC.Proof: Proving return_pure …
GHC.Proof proved 15 equalities

This is how I want to prove stuff about my code!

Do you also want to prove stuff about your code? I packaged this up as a GHC plugin in the Haskell library ghc-proofs (not yet on Hackage). The README of the repository has a bit more detail on how to use this plugin, how it works, what its limitations are and where this is heading.
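
For a sense of how little user-facing machinery such a plugin needs: the === combinator can be a mere marker that the plugin recognizes in the compiled Core. A plausible sketch (assumed names and definitions, not necessarily the exact code of ghc-proofs):

module GHC.Proof where

-- A proof obligation carries no information at runtime; it is only a
-- marker that the plugin searches for in the simplified Core.
data Proof = Proof

-- Claim that two expressions simplify to the same Core.
(===) :: a -> a -> Proof
_ === _ = Proof
infix 0 ===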

This is still only a small step, but it is a step towards low-threshold program equivalence proofs in Haskell.

[1] Or rather to recover my karma after abominations such as ghc-dup, seal-module or ghc-heap-view.

Costales: 2 Years with Ubuntu Phone: Past, Present, Future

Planet UBUNTU - Hën, 06/02/2017 - 9:44md
Exactly 2 years ago, on 6 February 2015, Canonical handed me, as an insider, the bq E4.5, a couple of months before it went on sale to the public.

Ubuntu Phone launch in London

And yes, I used Ubuntu Phone exclusively for 2 years (except for a few days when I played with Firefox OS and Android).
 
E4.5

Past

I was very happy with my bq E4.5 when, surprise! Canonical gave us a Meizu MX4.


Those were the good times, with two companies fully committed to Ubuntu Touch, later releasing the bq E5, the Meizu PRO 5 and the bq M10 tablet. And a Canonical publishing OTA updates every month or so.

M10 tablet

Over these 2 years I read many articles about the first handsets. Almost all unfavorable. They forgot that these were phones for early adopters and reviewed them against the best of Android. Fail! To be fair, those first versions of Ubuntu Phone were better than the first versions of Android and iOS.

On a personal level, uNav and uWriter were born :')) With an overwhelming success that surprised me.

Ubucon Paris 15.10

Present

Great stalwarts of Ubuntu, such as David Planella, Daniel Holbach and Martin Pitt, are leaving Ubuntu. And alongside that, I read that Canonical is stopping phone development, with wording that does not invite optimism. But that 'stops' does not mean 'abandons'.

UBPorts has gained relevance over these last months, working on ports for the Fair Phone 2 and the OnePlus One.


FairPhone 2

Future

The present cannot make me feel especially optimistic. And not just about Ubuntu Touch in particular, but about the mobile market in general. An excellent Firefox OS that died, a SailfishOS that barely hangs on, a Tizen that only daddy Samsung keeps alive, and a Windows Phone that holds on to third place thanks to cash from the desktop's number one.
And the fact is that, despite the lack of privacy, security and, above all, free software, nobody can touch Android.

Image from neurogadget


And how does Ubuntu face such a dark future? We could say that Canonical is going to bet everything on a single card: snap.

snap

I should clarify the current state here: on the PC we have Ubuntu with Unity 7, and on the phone Ubuntu with Unity 8. But it is all the same Ubuntu, the same base.

And that is the play: in the short term we should have an Ubuntu with Unity 8 on both PC and phone, based on snap packages (which have no dependency problems and are very secure, since they isolate applications).

And that is where convergence comes into play: the same Ubuntu, the same applications, different devices.

Image from OMG Ubuntu!
But the cost of this play could be very high: leaving behind the entire current base of phones (the tablet is spared), since they use a 32-bit Android base, and the jump would require 64 bits, which does not seem feasible.
