Planet Debian


Back Online

Fri, 08/12/2017 - 11:58am

I now have Internet back, which means I can try to get the Debian WordPress packages bashed into shape. Unfortunately they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

Craig http://dropbear.xyz Small Dropbear

Testing OpenStack using tempest: all is packaged, try it yourself

Fri, 08/12/2017 - 12:00am

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware, provided by my employer. But since I’ve lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed at how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I planned for a few weeks to give a big thanks to the above people (they really deserve it!), that isn’t the only goal of this post. It is also to introduce how to run your own tempest CI on your own machine. Because I have been in the situation where my CI had to move twice, I decided to industrialize it and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all, and uploaded it to the archive. Here’s how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O. And on a live system, everything runs on a RAM disk, so installing in this environment is the fastest way one could do it. This is what I did when working with Mirantis: I had a real baremetal server, which I was PXE booting into a Debian Live system. However nice, this requires access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 NICs, one on the internet, and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate this using KVM, so it would become portable. And since the servers at packet.net are very fast, it isn’t much of an issue anymore to not run on baremetal.

Anyway, let’s dive into setting-up all of this.

Network topology

We’ll assume that one of your interface has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to read the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. It depends at runtime on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and everything else needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both; more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to reply with 192.168.100.2 over DHCP on the tempestbr bridge, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick option, and manually configure your 2nd interface).
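
For reference, here is a minimal sketch of what the dummy-interface setup described above boils down to, assuming iproute2 and the interface names used by the package; the packaged script remains the authoritative version:

modprobe dummy
ip link set name tempestnic0 dev dummy0
ip link add name tempestbr type bridge
ip link set tempestnic0 master tempestbr
ip addr add 192.168.100.1/24 dev tempestbr
ip link set tempestnic0 up
ip link set tempestbr up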

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up, by itself, the interface that is connected to the default gateway.
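
The NAT rule it sets up amounts to something like the following sketch (assuming iptables; the uplink detection shown here is only illustrative, the init script does its own):

WAN_IF=$(ip route show default | awk '{print $5; exit}')
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o "$WAN_IF" -j MASQUERADE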

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.
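
To illustrate the idea, a dual PXE/iPXE dhcpd configuration typically looks something like this hypothetical snippet (the file names are placeholders; the package ships its own configuration):

if exists user-class and option user-class = "iPXE" {
    # iPXE client: point it at the iPXE script, which chains to lpxelinux.0
    filename "http://192.168.100.1/boot.ipxe";
} else {
    # legacy PXE client
    filename "lpxelinux.0";
}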

Build the live image

Now that the PXE server is configured, it is time to build the Debian Live image. Simply do this to build the image, and copy the resulting files into the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log into that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).
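
For example, a minimal sketch of pre-seeding and reusing your own key pair (file names as above) could be:

ssh-keygen -t rsa -N '' -f ./id_rsa
cp id_rsa id_rsa.pub ~/.ssh/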

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and finally run the whole tempest suite. There are 2 ways to run it: either install the openstack-tempest-ci package, optionally configure it (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command; or skip the installation of the package and run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source into the Debian Live machine before using them. It does that because we want to avoid the situation where a modification needs to be uploaded to Debian before it can be tested, and also because the openstack-tempest-ci script needs to be runnable without installing a package (which would require root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory, and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

Goirand Thomas http://thomas.goirand.fr/blog Zigo's blog

Simple media cachebusting with GitHub pages

Thu, 07/12/2017 - 11:10pm

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting, so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel)"
$ touch _config.yml
$ mkdir _layouts
$ echo '{{ content }}' > _layouts/default.html
$ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://127.0.0.1:4000/.

Finally, we need to append the cachebusting string itself. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">

Don't forget to apply it to all your other static media, including images and JavaScript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}"></script>

To ensure that transitively-linked images are cachebusted, you can specify them directly in the HTML rather than referencing them in the CSS:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Thoughts on AlphaZero

Thu, 07/12/2017 - 8:35pm

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (in Google, briefly in the same division as the DeepMind team) and I run what's the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure; you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then a search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and attempts have been made to apply the same basic structure to chess in the past with master-level results, but it hasn't produced anything approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think that it's mainly through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to do with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook provably never loses a game from the opening position, although it won't win every won position), people still play it even at the top level. Most people simply are not at the level where the existence of perfect play matters, nor is their primary motivation to explore its frontiers.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and can hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that start to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: Maybe. AlphaGo was two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again assume a for-hire project, but I think not even most of the top players have the money to hire someone for a year or two for a speculative project to make an entirely new kind of engine. A 100-Elo-stronger engine is of limited help during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although it wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Three Minimalism reads

Thu, 07/12/2017 - 5:26pm

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible for me that I've read to date.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Adding subtitles with FFmpeg

Thu, 07/12/2017 - 1:52pm

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
    -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0 \
    language=eng -metadata:s:s:1 language=nld -c copy -y \
    foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy (see the sketch after this list).
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
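
As a sketch of that variant, assuming a hypothetical foo.en.srt input file (ffmpeg then converts the subtitles to WebVTT for the WebM container while still copying the audio and video streams):

ffmpeg -i foo.webm -i foo.en.srt -map 0:v -map 0:a -map 1:s \
    -metadata:s:s:0 language=eng -c:v copy -c:a copy -y foo.subbed.webm
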
Wouter Verhelst https://grep.be/blog//pd/ pd

RcppArmadillo 0.8.300.1.0

Thu, 07/12/2017 - 1:59am

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We tend to now package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo from where you can install them "as usual" (see the repo page for details). But this actual release resumes within our normal bi-monthly CRAN release cycle.

These releases improve a few little nags on the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)
  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

My Free Software Activities in November 2017

Wed, 06/12/2017 - 7:33pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java
  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do, and the only way forward is to package a newer version of Eclipse and to split the package in such a way that these issues can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn’t seem like a big undertaking, but the 8 MB debdiff and ~170000 lines of code changes proved me wrong. I discovered two regressions with this version, in mockito and bnd. The former could be resolved, but bnd probably requires an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and  jedit (#875583).
Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13 +1.75 from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVE.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVE.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVE. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.
Misc
  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and libspring-ldap-java. (DSA-4046-1)

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Creating a blog with pelican and Github pages

Tue, 05/12/2017 - 11:30pm

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment to develop. If you aren't, I recommend learning with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as TZ Time zone (full list here: List of tz database time zones).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes
![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check that the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.

If you had an error in any step of the way, please reread this tutorial and try to see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Renata https://rsip22.github.io/blog/ Renata's blog

nuttx-scheme

Tue, 05/12/2017 - 11:25pm
Scheme For NuttX

Last fall, I built a tiny lisp interpreter for AltOS. That was fun, but had some constraints:

  • Yet another lisp-like language
  • Ran only on AltOS, not exactly a widely used operating system.

To fix the first problem, I decided to try and just implement scheme. The language I had implemented wasn't far off; it had lexical scoping and call-with-current-continuation after all. The rest is pretty simple stuff.

To fix the second problem, I ported the interpreter to NuttX. NuttX is a modest operating system for 8 to 32-bit microcontrollers with a growing community of developers.

I downloaded the most recent Scheme spec, the Revised⁷ Report, which is the 'small language' follow on to the contentious Revised⁶ Report.

Converting ao-lisp to ao-scheme

Reading through the spec, it was clear there were a few things I needed to deal with to provide something that looked like scheme:

  • quasiquote
  • syntax-rules
  • function names
  • boolean type

Quasiquote turned out to be fun -- the spec described it in terms of a set of list forms, so I hacked up the reader to convert the convenient syntax using ` , and ,@ into lists and wrote a macro to emit construction code from the generated lists.

Syntax-rules is a 'nicer' way to write macros, and I haven't implemented it yet. There's nothing it can do which the underlying full macros cannot, so I'm planning on just writing it in scheme rather than having a pile more C code.

Scheme has a separate boolean type: rather than using the empty list, (), for false, it uses #f and treats everything else as true. Adding that wasn't hard, just tedious, as I had to work through every place that used boolean values and switch it to using #f or #t.

There were also a pile of random function name swaps and another bunch of new functions to write.

All in all, not a huge amount of work, and now the language looks a lot more like scheme.

Adding more number types

The original language had only integers, and those were only 14 bits wide. To make the language a bit more usable, I added 24 bit integers as well, along with 32-bit floats. Then I added automatic promotion between representations and the usual scheme tests for exactness. This added a bit to the footprint, and maybe I should make it optional.

Porting to NuttX

This was the easiest piece of the process -- NuttX offers a posix-like API, just like AltOS. Getting it to build was actually a piece of cake. The only part requiring new code was the lack of any command line editing or echo -- I ended up using readline to get that to work.

I was pleased that all of the language changes I made didn't significantly impact the footprint of the resulting system. I built NuttX for the stm32f4-discovery board, compiling in basic and then comparing with and without scheme:

Before:

$ size nuttx
   text    data     bss     dec     hex filename
 183037     172    4176  187385   2dbf9 nuttx

After:

$ size nuttx
   text    data     bss     dec     hex filename
 213197     188   22672  236057   39a19 nuttx

The increase in text includes 11kB of built-in lisp code, so that when the interpreter starts, you already have all of the necessary lisp code loaded that turns the bare interpreter into a full scheme system. That makes the core interpreter around 20kB of code, which is nice and compact (at least for scheme, I think).

The BSS space includes the heap; this can be set to any size you like. It would probably be good to have that allocated on the fly instead of used even when the interpreter isn't running.

Where's the Code

I've pushed the code here:

$ git clone git://keithp.com/git/apps

Future Work

There's more work to complete the language support; here's some tasks needing attention at some point:

  • No vectors or bytevectors
  • Characters are just numbers
  • No dynamic-wind or exceptions
  • No environments
  • No ports
  • No syntax-rules
  • No record types
  • No libraries
  • Heap allocated from BSS
A Sample Application!

Here's towers of hanoi in scheme for nuttx:

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <keithp@keithp.com>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
; General Public License for more details.
;

; ANSI control sequences

(define (move-to col row)
  (for-each display (list "\033[" row ";" col "H"))
  )

(define (clear)
  (display "\033[2J")
  )

(define (display-string x y str)
  (move-to x y)
  (display str)
  )

(define (make-piece num max)
  ; A piece for position 'num'
  ; is num + 1 + num stars
  ; centered in a field of max *
  ; 2 + 1 characters with spaces
  ; on either side. This way,
  ; every piece is the same
  ; number of characters
  (define (chars n c)
    (if (zero? n)
        ""
        (+ c (chars (- n 1) c))
        )
    )
  (+ (chars (- max num 1) " ")
     (chars (+ (* num 2) 1) "*")
     (chars (- max num 1) " "))
  )

(define (make-pieces max)
  ; Make a list of numbers from 0 to max-1
  (define (nums cur max)
    (if (= cur max)
        ()
        (cons cur (nums (+ cur 1) max))
        )
    )
  ; Create a list of pieces
  (map (lambda (x) (make-piece x max)) (nums 0 max))
  )

; Here's all of the towers of pieces
; This is generated when the program is run
(define towers ())

; position of the bottom of
; the stacks set at runtime
(define bottom-y 0)
(define left-x 0)
(define move-delay 25)

; Display one tower, clearing any
; space above it
(define (display-tower x y clear tower)
  (cond ((= 0 clear)
         (cond ((not (null? tower))
                (display-string x y (car tower))
                (display-tower x (+ y 1) 0 (cdr tower))
                )
               )
         )
        (else
         (display-string x y " ")
         (display-tower x (+ y 1) (- clear 1) tower)
         )
        )
  )

; Position of the top of the tower on the screen
; Shorter towers start further down the screen
(define (tower-pos tower)
  (- bottom-y (length tower))
  )

; Display all of the towers, spaced 20 columns apart
(define (display-towers x towers)
  (cond ((not (null? towers))
         (display-tower x 0 (tower-pos (car towers)) (car towers))
         (display-towers (+ x 20) (cdr towers)))
        )
  )

; Display all of the towers, then move the cursor
; out of the way and flush the output
(define (display-hanoi)
  (display-towers left-x towers)
  (move-to 1 23)
  (flush-output)
  (delay move-delay)
  )

; Reset towers to the starting state, with
; all of the pieces in the first tower and the
; other two empty
(define (reset-towers len)
  (set! towers (list (make-pieces len) () ()))
  (set! bottom-y (+ len 3))
  )

; Move a piece from the top of one tower
; to the top of another
(define (move-piece from to)
  ; references to the cons holding the two towers
  (define from-tower (list-tail towers from))
  (define to-tower (list-tail towers to))
  ; stick the car of from-tower onto to-tower
  (set-car! to-tower (cons (caar from-tower) (car to-tower)))
  ; remove the car of from-tower
  (set-car! from-tower (cdar from-tower))
  )

; The implementation of the game
(define (_hanoi n from to use)
  (cond ((= 1 n)
         (move-piece from to)
         (display-hanoi)
         )
        (else
         (_hanoi (- n 1) from use to)
         (_hanoi 1 from to use)
         (_hanoi (- n 1) use to from)
         )
        )
  )

; A pretty interface which
; resets the state of the game,
; clears the screen and runs
; the program
(define (hanoi len)
  (reset-towers len)
  (clear)
  (display-hanoi)
  (_hanoi len 0 1 2)
  #t
  )

Keith Packard http://keithp.com/blog/ blog

back on the Linux desktop

Tue, 05/12/2017 - 4:35pm

As forecast, I've switched from Mac back to Linux on the Desktop. I'm using a work-supplied Thinkpad T470s, which is a nice form-factor machine (the T450s was the first Thinkpad to widen my perspective away from just looking at the X series).

I've installed Debian to get started and ended up with GNOME 3 as the desktop (I was surprised to not be prompted for a choice in the installer, but on reflection that makes sense: I did a non-networked install from the GNOME flavour of the live DVD). So for the time being I'm going to stick to GNOME 3 and see what's new/better/worse than last time, but once my replacement SSD arrives I can revisit.

I haven't made much progress on the sticking points I identified in my last post. I'm hoping to get 1pass up and running in the interim to read my 1Password DB so I can get by until I've found a replacement password manager that I like.

Most of my desktop configuration steps I have captured in some Ansible playbooks. I'm looking at Ansible after a long break from using puppet, and there's things I like and things I don't. I've also been exploring ownCloud for personal file sharing and despite a couple of warning signs (urgh PHP, official Debian package was dropped) I'm finding it really useful, in particular for sharing stuff with family. I might write more about both of those later.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Finding bugs in Haskell code by proving it

Tue, 05/12/2017 - 3:17pm

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file” represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2    = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2   = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise        = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: Add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out it is my dissenting opinion to consider this a valid verification approach. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
  | [] => True
  | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
  end.

Definition good is :=
  match is with ival is => exists n, goodLIs is n end.

and I give them meaning using Coq’s type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
  | [] => Empty_set Z
  | (i :: is) => Union Z (semI i) (semLIs is)
  end.

Definition sem is :=
  match is with ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around that while verifying this, so I formalize the termination argument above

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
  | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
  | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how I can actually do induction over a locally defined recursive function without too ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:

    f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality). This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq are actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Both problems would not exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s), requiring me to prove so many lemmas about basic functions on Ensembles?

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Reproducible Builds: Weekly report #136

Tue, 05/12/2017 - 3:10pm

Here's what happened in the Reproducible Builds effort between Sunday, November 26 and Saturday, December 2, 2017:

Media coverage

Arch Linux imap key leakage

A security issue was found in the imap package in Arch Linux thanks to the reproducible builds effort in that distribution.

Due to a hardcoded key-generation routine in the build() step of imap's PKGBUILD (the standard packaging file for Arch Linux packages), a default secret key was generated and leaked on all imap installations. This was promptly reviewed, confirmed and fixed by the package maintainers.

This mirrors similar security issues found in Debian, such as #833885.

Debian packages reviewed and fixed, and bugs filed

In addition, 73 FTBFS bugs were detected and reported by Adrian Bunk.

Reviews of unreproducible Debian packages

83 package reviews have been added, 41 have been updated and 33 have been removed in this week, adding to our knowledge about identified issues.

1 issue type was updated.

LEDE / OpenWrt packages updates:

diffoscope development

reprotest development

Version 0.7.4 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks as well as new ones from other contributors.

reproducible-website development

tests.reproducible-builds.org

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres-Arias, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Is the short movie «Empty Socks» from 1927 in the public domain or not?

Mar, 05/12/2017 - 12:30md

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: the film was released in November or December 1927 (sources disagree), and its copyright was presumably registered that year. At that time, right holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and that it thus entered the public domain in 1956. Yet others claim the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures, available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

FAI.me build server improvements

Hën, 04/12/2017 - 9:59md

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback: people would like to have root login without a password, using an ssh key instead. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug if you add packages which have conflicts with other packages, or if you misspelt a package name.

FAI.me

Thomas Lange http://blog.fai-project.org/ FAI (Fully Automatic Installation) / Plan your Installation and FAI installs your Plan

new old thing

Hën, 04/12/2017 - 9:50md

This branch came from a cedar tree overhanging my driveway.

It was fun to bust this open and shape it with hammer and chisels. My dad once recommended learning to chisel before learning any power tools for woodworking, so I suppose this is a start.

Some tung oil and drilling later, and I'm very pleased to have a nice place to hang my cast iron.

Joey Hess http://joeyh.name/blog/ see shy jo

20171204-qubes-mirage-firewall

Hën, 04/12/2017 - 3:37md
On using QubesOS MirageOS firewall

So I'm lucky to attend the 4th MirageOS hack retreat in Marrakesh this week, where I learned to build and use qubes-mirage-firewall, a MirageOS-based (system) firewall for Qubes OS. The main visible effect is that this unikernel only needs 32 megabytes of memory, while a Debian (or Fedora) based firewall system needs half a gigabyte. It's also said to be more secure, but I have not verified that myself.

In the spirit of avoiding overhead, I decided not to build with docker as qubes-mirage-firewall's README.md suggests, but rather to use a base Debian stretch system. Here's how to build natively:

sudo apt install git ocaml-native-compilers camlp4-extra opam aspcud curl debianutils m4 ncurses-dev perl pkg-config time
git clone https://github.com/talex5/qubes-mirage-firewall
cd qubes-mirage-firewall/
opam init
# the next line is super useful if there is bad internet connectivity but you happen to have access to a local mirror
# opam repo add local http://10.0.0.2:8080
opam switch 4.04.2
eval `opam config env`
## in there:
opam install -y vchan xen-gnt mirage-xen-ocaml mirage-xen-minios io-page mirage-xen mirage mirage-nat mirage-qubes netchannel
mirage configure -t xen
make depend
make tar

Then follow the instructions in the README.md and switch some AppVMs to it; if you are happy with the results, make it the default and shut down the old firewall. Currently I'm not sure I am happy, because it doesn't allow updating template VMs...

Update: qubes-mirage-firewall does allow this. It was just the crashed qubes-updates-proxy service in sys-net that prevented it, but that's another bug elsewhere.

I also learned that it builds reproducibly, given the same build path and ignoring the issue of timestamps in the generated tarball; in other words, the unikernel (and the 3 other files) inside the tarball is reproducible. I still need to compare a docker build with a build done the above way, and I really don't like having to edit the firewall's rules.ml file and then rebuild it. More on this in another post later, hopefully.

Oh, I didn't mention it and won't say more here, but this hack retreat and its organisation are marvellous! Many thanks to everyone here!

Holger Levsen http://layer-acht.org/thinking/ Any sufficiently advanced thinking is indistinguishable from madness

My Free Software Activities in November 2017

Dje, 03/12/2017 - 6:52md

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10h. During this time, I managed the LTS frontdesk for one week, reviewing new security issues and classifying the associated CVEs (16 commits to the security tracker).

I prepared and released DLA-1171-1 on libxml-libxml-perl.

I prepared a new update for simplesamlphp (1.9.2-1+deb7u1) fixing 6 CVEs. I have not released a DLA yet since I was not able to test the updated package. I'm hoping that the current maintainer can do it, since he wanted to work on the update a few months ago.

Distro Tracker

Distro Tracker has seen a high level of activity in the last month. Ville Skyttä continued to contribute a few patches; notably, he helped to get rid of the last blocker for a switch to Python 3.

I then worked with DSA to get the production instance (tracker.debian.org) upgraded to stretch with Python 3.5 and Django 1.11. This resulted in a few regressions related to the Python 3 switch (despite the large number of unit tests) that I had to fix.

In parallel, Pierre-Elliott Bécue showed up on the debian-qa mailing list and started to contribute. I have been exchanging with him almost daily on IRC to help him improve his patches. He has been very responsive and I'm looking forward to continuing to cooperate with him. His first patch enabled the use of the “src:” and “bin:” prefixes in the search feature, to specify whether to look up source packages or binary packages.

I did some cleanup/refactoring work after the switch of the codebase to Python 3 only.

Misc Debian work

Sponsorship. I sponsored many new packages: python-envparse 0.2.0-1, python-exotel 0.1.5-1, python-aws-requests-auth 0.4.1-1, pystaticconfiguration 0.10.3-1, python-jira 1.0.10-1, python-twilio 6.8.2-1, python-stomp 4.1.19-1. All those are dependencies for elastalert 0.1.21-1 that I also sponsored.

I sponsored updates for vboot-utils 0~R63-10032.B-2 (new upstream release for openssl 1.1 compat), aircrack-ng 1:1.2-0~rc4-4 (introducing airgraph-ng package) and asciidoc 8.6.10-2 (last upstream release, tool is deprecated).

Debian Installer. I submitted a few patches a while ago to support finding ISO images in LVM logical volumes in the hd-media installation method. Colin Watson reviewed them and made a few suggestions and expressed a few concerns. I improved my patches to take into account his suggestions and I resolved all the problems he pointed out. I then committed everything to the respective git repositories (for details review #868848, #868859, #868900, #868852).

Live Build. I merged 3 patches for live-build (#879169, #881941, #878430).

Misc. I uploaded Django 1.11.7 to stretch-backports. I filed an upstream bug on zim for #881464.

Thanks

See you next month for a new summary of my activities.


Raphaël Hertzog https://raphaelhertzog.com apt-get install debian-wizard

On the demise of Linux Journal

Dje, 03/12/2017 - 3:54pd

Lwn, Slashdot, and many others have marked the recent announcement of Linux Journal's demise. I'll take this opportunity to share some of my thoughts, and to thank the publication and its many contributors for their work over the years.

I think it's probably hard for younger people to imagine what the Linux world was like 20 years ago. Today, it's really not an exaggeration to say that the Internet as we know it wouldn't exist at all without Linux. Almost every major Internet company you can think of runs almost completely on Linux. Amazon, Google, Facebook, Twitter, etc, etc. All Linux. In 1997, though, the idea of running a production workload on Linux was pretty far out there.

I was in college in the late 90's, and worked for a time at a small Cambridge, Massachusetts software company. The company wrote a pretty fancy (and expensive!) GUI builder targeting big expensive commercial UNIX platforms like Solaris, HP/UX, SGI IRIX, and others. At one point a customer inquired about the availability of our software on Linux, and I, as an enthusiastic young student, got really excited about the idea. The company really had no plans to support Linux, though. I'll never forget the look of disbelief on a company exec's face as he asked "$3000 on a Linux system?"

Throughout this period, on my lunch breaks from work, I'd swing by the now defunct Quantum Books. One of the monthly treats was a new issue of Linux Journal on the periodicals shelf. In these issues, I learned that more forward thinking companies actually were using Linux to do real work. An article entitled "Linux Sinks the Titanic" described how Hollywood deployed hundreds(!) of Linux systems running custom software to generate the special effects for the 1997 movie Titanic. Other articles documented how Linux was making inroads at NASA and in the broader scientific community. Even the ads were interesting, as they showed increasing commercial interest in Linux, both on the hardware (HyperMicro, VA Research, Linux Hardware Solutions, etc) and software (CDE, Xi Graphics) fronts.

The software world is very different now than it was in 1997. The media world is different, too. Not only is Linux well established, it's pretty much the dominant OS on the planet. When Linux Journal reported in the late 90's that Linux was being used for some new project, that was news. When they documented how to set up a Linux system to control some new piece of hardware or run some network service, you could bet that they filled a gap that nobody else was working on. Today, it's no longer news that a successful company is using Linux in production. Nor is it surprising that you can run Linux on a small embedded system; in fact it's quite likely that the system shipped with Linux pre-installed. On the media side, it used to be valuable to have everything bundled in a glossy, professionally produced archive published on a regular basis. Today, at least in the Linux/free software sphere, that's less important. Individual publication is easy on the Internet today, and search engines are very good at ensuring that the best content is the most discoverable content. The whole Internet is basically one giant continuously published magazine.

It's been a long time since I paid attention to Linux Journal, so from a practical point of view I can't honestly say that I'll miss it. I appreciate the role it played in my growth, but there are so many options for young people today entering the Linux/free software communities that it appears that the role is no longer needed. Still, the termination of this magazine is a permanent thing, and I can't help but worry that there's somebody out there who might thrive in the free software community if only they had the right door open before them.

Noah Meyerhans http://noah.meyerhans.us/ Category: debian | Noah Meyerhans

There’s cloud, and it can even be YOURS on YOUR computer

Sht, 02/12/2017 - 11:09md

Each time I see the FSFE picture, just like on Daniel’s last post to planet.d.o, where it says:

“There is NO CLOUD, just other people’s computers”

it makes me so frustrated. There is such a thing as a private cloud, set up on your own servers. I've been working on delivering OpenStack to Debian for the last six and a half years, motivated exactly by this issue: I refuse to accept that the only cloud people can use would be a closed source solution like GCE, AWS or Azure. The FSFE (and the FSF) completely dismissing this work is more than annoying: it is counterproductive. Not only should the FSFE not pull anyone away from the cloud, it should push for the public to choose cloud providers using free software like OpenStack.

The openstack.org marketplace lists 23 public cloud providers using OpenStack, so there is now no excuse to use any other type of cloud: for sure, there is one where you need it. If you use a free software solution like OpenStack, then the question of whether you run on your own hardware, on rented hardware (on which you deployed OpenStack yourself), or on someone else's OpenStack deployment is just a practical one, and one you can always revisit quickly. That is one of the very reasons to deploy on the cloud: so that it is possible to redeploy quickly on another cloud provider, or even on your own private cloud. This gives you more freedom than you ever had, because you are no longer dependent on the hosting company you selected: switching providers is just a matter of launching a script. The reality is that neither the FSFE nor RMS understands all of this. Please don't buy into the FSFE's very wrong message.

Goirand Thomas http://thomas.goirand.fr/blog Zigo's blog
