
Compute rescaling progress

Planet Debian - Fri, 29/12/2017 - 2:18pm

My Lanczos rescaling compute shader for Movit is finally nearing usable performance improvements:

BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/640/360    3149 us  69.7767M pixels/s
BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/320/180    2720 us  20.1983M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/640/360 3777 us  58.1711M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/320/180 3269 us  16.8054M pixels/s
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/640/360     2007 us 109.479M pixels/s  [+ 56.9%]
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/320/180     1609 us  34.1384M pixels/s [+ 69.0%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/640/360  2057 us 106.843M pixels/s  [+ 56.7%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/320/180  1633 us  33.6394M pixels/s [+100.2%]

Some tuning and bugfixing still needed; this is on my Haswell (the NVIDIA results are somewhat different). Upscaling also on its way. :-)

Steinar H. Gunderson


Planet Debian - Fri, 29/12/2017 - 12:11pm
I have no idea whatsoever how I achieved this, but there you go. This citizen's legal draft is moving forward to the Finnish parliament. Martin-Éric Funkyware: ITCetera

Debian Policy call for participation -- December 2017

Planet Debian - Thu, 28/12/2017 - 11:47pm

Yesterday we released Debian Policy, containing patches from numerous different contributors, some of them first-time contributors. Thank you to everyone who was involved!

Please consider getting involved in preparing the next release of Debian Policy, which is likely to be uploaded sometime around the end of January.

Consensus has been reached and help is needed to write a patch

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

Sean Whitton Notes from the Library

Get rid of the backpack

Planet Debian - Thu, 28/12/2017 - 11:43pm

In 2008 I read a blog post by Mark Pilgrim which made a profound impact on me, although I didn't realise it at the time. It was:

  1. Stop buying stuff you don’t need
  2. Pay off all your credit cards
  3. Get rid of all the stuff that doesn’t fit in your house/apartment (storage lockers, etc.)
  4. Get rid of all the stuff that doesn’t fit on the first floor of your house (attic, garage, etc.)
  5. Get rid of all the stuff that doesn’t fit in one room of your house
  6. Get rid of all the stuff that doesn’t fit in a suitcase
  7. Get rid of all the stuff that doesn’t fit in a backpack
  8. Get rid of the backpack

When I first read it, I think I could see (and concur with) the logic behind the first few points, but not further. Revisiting it now, I can agree much further along the list, and I'm wondering if I'm brave enough to get to the last step, or anywhere near it.

Mark was obviously going on a journey, and another stopping-off point for him on that journey was to delete his entire online persona, which is why I've linked to the Wayback Machine copy of the blog post.

jmtd Jonathan Dowland's Weblog

Successive Heresies

Planet Debian - Thu, 28/12/2017 - 2:37pm

I prefer the book The Hobbit to The Lord Of The Rings.

I much prefer the Hobbit movies to the LOTR movies.

I like the fact that the Hobbit movies were extended with material not in the original book: I'm glad there are female characters. I love the additional material with Radagast the Brown. I love the singing and poems and sense of fun preserved from what was a novel for children.

I find the foreshadowing of Sauron in The Hobbit movies to more effectively convey a sense of dread and power than actual Sauron in the LOTR movies.

Whilst I am generally bored by large CGI battles, I find the skirmishes in The Hobbit movies to be less boring than the epic-scale ones in LOTR.

jmtd Jonathan Dowland's Weblog

Reproducible Builds: Weekly report #139

Planet Debian - Thu, 28/12/2017 - 1:55pm

Here's what happened in the Reproducible Builds effort between Sunday December 17 and Saturday December 23 2017:

Packages reviewed and fixed, and bugs filed

Bugs filed in Debian:

Bugs filed in openSUSE:

  • Bernhard M. Wiedemann:
    • WindowMaker (merged) - use modification date of ChangeLog, upstreamable
    • ntp (merged) - drop date
    • bzflag - version upgrade to include already-upstreamed SOURCE_DATE_EPOCH patch
Reviews of unreproducible packages

20 package reviews have been added, 36 have been updated and 32 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (6)
  • Matthias Klose (8)
diffoscope development

strip-nondeterminism development

disorderfs development

reprotest development

reproducible-website development
  • Chris Lamb:
    • rws3:
      • Huge number of formatting improvements, typo fixes, capitalisation
      • Add section headings to make splitting up easier.
  • Holger Levsen:
    • rws3:
      • Add a disclaimer that this part of the website is a Work-In-Progress.
      • Split notes from each session into separate pages (6 sessions).
      • Other formatting and style fixes.
      • Link to Ludovic Courtès' notes on GNU Guix.
  • Ximin Luo:
    • rws3:
      • Format to look like previous years', and other fixes
      • Split notes from each session into separate pages (1 session).

Misc.

This week's edition was written by Ximin Luo and Bernhard M. Wiedemann & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks Reproducible builds blog

(Micro)benchmarking Linux kernel functions

Planet Debian - Thu, 28/12/2017 - 10:27am

Usually, the performance of a Linux subsystem is measured through an external (local or remote) process stressing it. Depending on the input point used, a large portion of code may be involved. To benchmark a single function, one solution is to write a kernel module.

Minimal kernel module

Let’s suppose we want to benchmark the IPv4 route lookup function, fib_lookup(). The following kernel function executes 1,000 lookups and returns the average value.1 It uses the get_cycles() function to compute the execution “time.”

/* Execute a benchmark on fib_lookup() and put result into the
   provided buffer `buf`. */
static int do_bench(char *buf)
{
    unsigned long long t1, t2;
    unsigned long long total = 0;
    unsigned long i;
    unsigned count = 1000;
    int err = 0;
    struct fib_result res;
    struct flowi4 fl4;
    memset(&fl4, 0, sizeof(fl4));
    fl4.daddr = in_aton("");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        total += t2 - t1;
    }
    if (err != 0)
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    return scnprintf(buf, PAGE_SIZE, "avg=%llu\n", total / count);
}

Now, we need to embed this function in a kernel module. The following code registers a sysfs directory containing a pseudo-file run. When a user queries this file, the module runs the benchmark function and returns the result as content.

#define pr_fmt(fmt) "kbench: " fmt

#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/inet.h>
#include <linux/timex.h>
#include <net/ip_fib.h>

/* When a user fetches the content of the "run" file, execute the
   benchmark function. */
static ssize_t run_show(struct kobject *kobj,
                        struct kobj_attribute *attr,
                        char *buf)
{
    return do_bench(buf);
}

static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &run_attr.attr,
    NULL
};
static struct attribute_group bench_attr_group = {
    .attrs = bench_attributes,
};
static struct kobject *bench_kobj;

int init_module(void)
{
    int rc;
    /* ❶ Create a simple kobject named "kbench" in /sys/kernel. */
    bench_kobj = kobject_create_and_add("kbench", kernel_kobj);
    if (!bench_kobj)
        return -ENOMEM;
    /* ❷ Create the files associated with this kobject. */
    rc = sysfs_create_group(bench_kobj, &bench_attr_group);
    if (rc) {
        kobject_put(bench_kobj);
        return rc;
    }
    return 0;
}

void cleanup_module(void)
{
    kobject_put(bench_kobj);
}

/* Metadata about this module */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microbenchmark for fib_lookup()");

In ❶, kobject_create_and_add() creates a new kobject named kbench. A kobject is the abstraction behind the sysfs filesystem. This new kobject is visible as the /sys/kernel/kbench/ directory.

In ❷, sysfs_create_group() attaches a set of attributes to our kobject. These attributes are materialized as files inside /sys/kernel/kbench/. Currently, we declare only one of them, run, with the __ATTR_RO macro. The attribute is therefore read-only (0444) and when a user tries to fetch the content of the file, the run_show() function is invoked with a buffer of PAGE_SIZE bytes as last argument and is expected to return the number of bytes written.

For more details, you can look at the documentation in the kernel and the associated example. Beware, random posts found on the web (including this one) may be outdated.2

The following Makefile will compile this example:

# Kernel module compilation
KDIR = /lib/modules/$(shell uname -r)/build

obj-m += kbench_mod.o

kbench_mod.ko: kbench_mod.c
	make -C $(KDIR) M=$(PWD) modules

After executing make, you should get a kbench_mod.ko file:

$ modinfo kbench_mod.ko
filename:       /home/bernat/code/…/kbench_mod.ko
description:    Microbenchmark for fib_lookup()
license:        GPL
depends:
name:           kbench_mod
vermagic:       4.14.0-1-amd64 SMP mod_unload modversions

You can load it and execute the benchmark:

$ insmod ./kbench_mod.ko
$ ls -l /sys/kernel/kbench/run
-r--r--r-- 1 root root 4096 déc. 10 16:05 /sys/kernel/kbench/run
$ cat /sys/kernel/kbench/run
avg=75

The result is a number of cycles. You can get an approximate time in nanoseconds if you divide it by the frequency of your processor in gigahertz (25 ns if you have a 3 GHz processor).3

Configurable parameters

The module hard-codes two constants: the number of loops and the destination address to test. We can make these parameters user-configurable by exposing them as attributes of our kobject and defining a pair of functions to read/write them:

static unsigned long loop_count = 5000;
static u32 flow_dst_ipaddr = 0x08080808;

/* A mutex is used to ensure we are thread-safe when altering attributes. */
static DEFINE_MUTEX(kb_lock);

/* Show the current value for loop count. */
static ssize_t loop_count_show(struct kobject *kobj,
                               struct kobj_attribute *attr,
                               char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%lu\n", loop_count);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for loop count. */
static ssize_t loop_count_store(struct kobject *kobj,
                                struct kobj_attribute *attr,
                                const char *buf,
                                size_t count)
{
    unsigned long val;
    int err = kstrtoul(buf, 0, &val);
    if (err < 0)
        return err;
    if (val < 1)
        return -EINVAL;
    mutex_lock(&kb_lock);
    loop_count = val;
    mutex_unlock(&kb_lock);
    return count;
}

/* Show the current value for destination address. */
static ssize_t flow_dst_ipaddr_show(struct kobject *kobj,
                                    struct kobj_attribute *attr,
                                    char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%pI4\n", &flow_dst_ipaddr);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for destination address. */
static ssize_t flow_dst_ipaddr_store(struct kobject *kobj,
                                     struct kobj_attribute *attr,
                                     const char *buf,
                                     size_t count)
{
    mutex_lock(&kb_lock);
    flow_dst_ipaddr = in_aton(buf);
    mutex_unlock(&kb_lock);
    return count;
}

/* Define the new set of attributes. They are read/write attributes. */
static struct kobj_attribute loop_count_attr = __ATTR_RW(loop_count);
static struct kobj_attribute flow_dst_ipaddr_attr = __ATTR_RW(flow_dst_ipaddr);
static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &loop_count_attr.attr,
    &flow_dst_ipaddr_attr.attr,
    &run_attr.attr,
    NULL
};

The IPv4 address is stored as a 32-bit integer but displayed and parsed using the dotted quad notation. The kernel provides the appropriate helpers for this task.

After this change, we have two new files in /sys/kernel/kbench. We can read the current values and modify them:

# cd /sys/kernel/kbench
# ls -l
-rw-r--r-- 1 root root 4096 déc. 10 19:10 flow_dst_ipaddr
-rw-r--r-- 1 root root 4096 déc. 10 19:10 loop_count
-r--r--r-- 1 root root 4096 déc. 10 19:10 run
# cat loop_count
5000
# cat flow_dst_ipaddr
# echo > flow_dst_ipaddr
# cat flow_dst_ipaddr

We still need to alter the do_bench() function to make use of these parameters:

static int do_bench(char *buf)
{
    /* … */
    mutex_lock(&kb_lock);
    count = loop_count;
    fl4.daddr = flow_dst_ipaddr;
    mutex_unlock(&kb_lock);
    for (i = 0; i < count; i++) {
        /* … */

Meaningful statistics

Currently, we only compute the average lookup time. This value is usually inadequate:

  • A small number of outliers can raise this value quite significantly. An outlier can happen because we were preempted off the CPU while executing the benchmarked function. This doesn’t happen often if the function execution time is short (less than a millisecond), but when it does, the outliers can be off by several milliseconds, which is enough to make the average misleading when most values are several orders of magnitude smaller. For this reason, the median usually gives a better view.

  • The distribution may be asymmetrical or have several local maxima. It’s better to keep several percentiles or even a distribution graph.

To be able to extract meaningful statistics, we store the results in an array.

static int do_bench(char *buf)
{
    unsigned long long *results;
    /* … */
    results = kmalloc(sizeof(*results) * count, GFP_KERNEL);
    if (!results)
        return scnprintf(buf, PAGE_SIZE, "msg=\"no memory\"\n");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        results[i] = t2 - t1;
    }
    if (err != 0) {
        kfree(results);
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    }
    /* Compute and display statistics */
    display_statistics(buf, results, count);
    kfree(results);
    return strnlen(buf, PAGE_SIZE);
}

Then, we need a helper function to compute percentiles:

static unsigned long long percentile(int p,
                                     unsigned long long *sorted,
                                     unsigned count)
{
    int index = p * count / 100;
    int index2 = index + 1;
    if (p * count % 100 == 0)
        return sorted[index];
    if (index2 >= count)
        index2 = index - 1;
    if (index2 < 0)
        index2 = index;
    return (sorted[index] + sorted[index2]) / 2;
}

This function needs a sorted array as input. The kernel provides a heapsort function, sort(), for this purpose. Another useful value to have is the deviation from the median. Here is a function to compute the median absolute deviation:4

static unsigned long long mad(unsigned long long *sorted,
                              unsigned long long median,
                              unsigned count)
{
    unsigned long long *dmedian = kmalloc(sizeof(unsigned long long) * count,
                                          GFP_KERNEL);
    unsigned long long res;
    unsigned i;
    if (!dmedian)
        return 0;
    for (i = 0; i < count; i++) {
        if (sorted[i] > median)
            dmedian[i] = sorted[i] - median;
        else
            dmedian[i] = median - sorted[i];
    }
    sort(dmedian, count, sizeof(unsigned long long), compare_ull, NULL);
    res = percentile(50, dmedian, count);
    kfree(dmedian);
    return res;
}

With these two functions, we can provide additional statistics:

static void display_statistics(char *buf,
                               unsigned long long *results,
                               unsigned long count)
{
    unsigned long long p95, p90, p50;
    sort(results, count, sizeof(*results), compare_ull, NULL);
    if (count == 0) {
        scnprintf(buf, PAGE_SIZE, "msg=\"no match\"\n");
        return;
    }
    p95 = percentile(95, results, count);
    p90 = percentile(90, results, count);
    p50 = percentile(50, results, count);
    scnprintf(buf, PAGE_SIZE,
              "min=%llu max=%llu count=%lu 95th=%llu 90th=%llu 50th=%llu mad=%llu\n",
              results[0], results[count - 1], count,
              p95, p90, p50, mad(results, p50, count));
}

We can also append a graph of the distribution function (and of the cumulative distribution function):

min=72 max=33364 count=100000 95th=154 90th=142 50th=112 mad=6
value │                    ┊                            count
   72 │                                                    51
   77 │▒                                                 3548
   82 │▒▒░░                                              4773
   87 │▒▒░░░░░                                           5918
   92 │░░░░░░░                                           1207
   97 │░░░░░░░                                            437
  102 │▒▒▒▒▒▒░░░░░░░░                                   12164
  107 │▒▒▒▒▒▒▒░░░░░░░░░░░░░░                            15508
  112 │▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░░░░░░░░                23014
  117 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░              6297
  122 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░               905
  127 │▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░            3845
  132 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░        6687
  137 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      4884
  142 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    4133
  147 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░   1015
  152 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░   1123

Benchmark validity

While the benchmark produces some figures, we may question their validity. There are several traps when writing a microbenchmark:

dead code
The compiler may optimize away our benchmark because its result is not used. In our example, we make sure to combine the result into a variable to avoid this.
warmup phase
One-time initializations may negatively affect the benchmark. This is less likely to happen with C code since there is no JIT. Nonetheless, you may want to add a small warmup phase.
too small dataset
If the benchmark runs with the same input parameters over and over, the input data may fit entirely in the L1 cache, which skews the results favorably. Therefore, it is important to iterate over a large dataset.
too regular dataset
A regular dataset may still skew the benchmark favorably despite its size. While the whole dataset will not fit into the L1/L2 cache, the previous run may have loaded most of the data needed for the current run. In the route lookup example, as route entries are organized in a tree, it is important not to scan the address space linearly. The address space can be explored randomly (a simple linear congruential generator brings reproducible randomness).
large overhead
If the benchmarked function runs in a few nanoseconds, the overhead of the benchmark infrastructure may be too high. Typically, the overhead of the method presented here is around 5 nanoseconds. get_cycles() is a thin wrapper around the RDTSC instruction: it returns the number of cycles for the current processor since the last reset. It is also virtualized with low overhead in case you run the benchmark in a virtual machine. If you want to measure a function with greater precision, you need to wrap it in a loop. However, the loop itself adds to the overhead, notably if you need to compute a large input set (in this case, the input can be prepared in advance). Compilers also like to mess with loops. Lastly, a loop hides the distribution of the results.
preemption
While the benchmark is running, the thread executing it can be preempted (or, when running in a virtual machine, the whole virtual machine can be preempted by the host). When the function takes less than a millisecond to execute, one can assume preemption is rare enough to be filtered out by using a percentile function.
noise
When running the benchmark, noise from unrelated processes (or sibling hosts when benchmarking in a virtual machine) needs to be avoided, as it may change from one run to another. Therefore, it is not a good idea to benchmark in a public cloud. On the other hand, adding controlled noise to the benchmark may lead to less artificial results: in our example, route lookup is only a small part of routing a packet, and measuring it alone in a tight loop skews the benchmark favorably.
syncing parallel benchmarks
While it is possible (and safe) to run several benchmarks in parallel, it may be difficult to ensure they really run in parallel: some invocations may work in better conditions because other threads are not running yet, skewing the result. Ideally, each run should execute bogus iterations and start measuring only when all runs are present. This does not seem to be a trivial addition.

As a conclusion, the benchmark module presented here is quite primitive (notably compared to a framework like JMH for Java) but, with care, can deliver some conclusive results like in these posts: “IPv4 route lookup on Linux” and “IPv6 route lookup on Linux.”


Use of a tracing tool is an alternative approach. For example, if we want to benchmark IPv4 route lookup times, we can use the following process:

while true; do
  ip route get $((RANDOM%100)).$((RANDOM%100)).$((RANDOM%100)).5
  sleep 0.1
done

Then, we instrument the __fib_lookup() function with eBPF (through BCC):

$ sudo funclatency-bpfcc __fib_lookup
Tracing 1 functions for "__fib_lookup"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                    |
         2 -> 3          : 0        |                    |
         4 -> 7          : 0        |                    |
         8 -> 15         : 0        |                    |
        16 -> 31         : 0        |                    |
        32 -> 63         : 0        |                    |
        64 -> 127        : 0        |                    |
       128 -> 255        : 0        |                    |
       256 -> 511        : 3        |*                   |
       512 -> 1023       : 1        |                    |
      1024 -> 2047       : 2        |*                   |
      2048 -> 4095       : 13       |******              |
      4096 -> 8191       : 42       |********************|

Currently, the overhead is quite high, as a route lookup on an empty routing table is less than 100 ns. Once Linux supports inter-event tracing, the overhead of this solution may be reduced to be usable for such microbenchmarks.

  1. In this simple case, it may be more accurate to use:

    t1 = get_cycles();
    for (i = 0; i < count; i++) {
        err |= fib_lookup(…);
    }
    t2 = get_cycles();
    total = t2 - t1;

    However, this prevents us from computing more statistics. Moreover, when you need to provide a non-constant input to the fib_lookup() function, the first way is likely to be more accurate. 

  2. In-kernel API backward compatibility is a non-goal of the Linux kernel. 

  3. You can get the current frequency with cpupower frequency-info. As the frequency may vary (even when using the performance governor), this may not be accurate, but it still provides an easier representation (comparable results should use the same frequency). 

  4. Only integer arithmetic is available in the kernel. While it is possible to approximate a standard deviation using only integers, the median absolute deviation just reuses the percentile() function defined above. 

Vincent Bernat Vincent Bernat

Freezing of tasks failed

Planet Debian - Thu, 28/12/2017 - 7:33am

It is interesting how a user-space task can hinder a Linux kernel software-suspend operation.

[11735.155443] PM: suspend entry (deep)
[11735.155445] PM: Syncing filesystems ... done.
[11735.215091] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11735.215172] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11735.558676] rfkill: input handler enabled
[11735.608859] (NULL device *): firmware: direct-loading firmware rtlwifi/rtl8723befw_36.bin
[11735.609910] (NULL device *): firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
[11735.611871] Freezing user space processes ...
[11755.615603] Freezing of tasks failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0):
[11755.615854] digikam         D    0 13262  13245 0x00000004
[11755.615859] Call Trace:
[11755.615873]  __schedule+0x28e/0x880
[11755.615878]  schedule+0x2c/0x80
[11755.615889]  request_wait_answer+0xa3/0x220 [fuse]
[11755.615895]  ? finish_wait+0x80/0x80
[11755.615902]  __fuse_request_send+0x86/0x90 [fuse]
[11755.615907]  fuse_request_send+0x27/0x30 [fuse]
[11755.615914]  fuse_send_readpages.isra.30+0xd1/0x120 [fuse]
[11755.615920]  fuse_readpages+0xfd/0x110 [fuse]
[11755.615928]  __do_page_cache_readahead+0x200/0x2d0
[11755.615936]  filemap_fault+0x37b/0x640
[11755.615940]  ? filemap_fault+0x37b/0x640
[11755.615944]  ? filemap_map_pages+0x179/0x320
[11755.615950]  __do_fault+0x1e/0xb0
[11755.615953]  __handle_mm_fault+0xc8a/0x1160
[11755.615958]  handle_mm_fault+0xb1/0x200
[11755.615964]  __do_page_fault+0x257/0x4d0
[11755.615968]  do_page_fault+0x2e/0xd0
[11755.615973]  page_fault+0x22/0x30
[11755.615976] RIP: 0033:0x7f32d3c7ff90
[11755.615978] RSP: 002b:00007ffd887c9d18 EFLAGS: 00010246
[11755.615981] RAX: 00007f32d3fc9c50 RBX: 000000000275e440 RCX: 0000000000000003
[11755.615982] RDX: 0000000000000002 RSI: 00007ffd887c9f10 RDI: 000000000275e440
[11755.615984] RBP: 00007ffd887c9f10 R08: 000000000275e820 R09: 00000000018d2f40
[11755.615986] R10: 0000000000000002 R11: 0000000000000000 R12: 000000000189cbc0
[11755.615987] R13: 0000000001839dc0 R14: 000000000275e440 R15: 0000000000000000
[11755.616014] OOM killer enabled.
[11755.616015] Restarting tasks ... done.
[11755.817640] PM: suspend exit
[11755.817698] PM: suspend entry (s2idle)
[11755.817700] PM: Syncing filesystems ... done.
[11755.983156] rfkill: input handler disabled
[11756.030209] rfkill: input handler enabled
[11756.073529] Freezing user space processes ...
[11776.084309] Freezing of tasks failed after 20.010 seconds (2 tasks refusing to freeze, wq_busy=0):
[11776.084630] digikam         D    0 13262  13245 0x00000004
[11776.084636] Call Trace:
[11776.084653]  __schedule+0x28e/0x880
[11776.084659]  schedule+0x2c/0x80
[11776.084672]  request_wait_answer+0xa3/0x220 [fuse]
[11776.084680]  ? finish_wait+0x80/0x80
[11776.084688]  __fuse_request_send+0x86/0x90 [fuse]
[11776.084695]  fuse_request_send+0x27/0x30 [fuse]
[11776.084703]  fuse_send_readpages.isra.30+0xd1/0x120 [fuse]
[11776.084711]  fuse_readpages+0xfd/0x110 [fuse]
[11776.084721]  __do_page_cache_readahead+0x200/0x2d0
[11776.084730]  filemap_fault+0x37b/0x640
[11776.084735]  ? filemap_fault+0x37b/0x640
[11776.084743]  ? __update_load_avg_blocked_se.isra.33+0xa1/0xf0
[11776.084749]  ? filemap_map_pages+0x179/0x320
[11776.084755]  __do_fault+0x1e/0xb0
[11776.084759]  __handle_mm_fault+0xc8a/0x1160
[11776.084765]  handle_mm_fault+0xb1/0x200
[11776.084772]  __do_page_fault+0x257/0x4d0
[11776.084777]  do_page_fault+0x2e/0xd0
[11776.084783]  page_fault+0x22/0x30
[11776.084787] RIP: 0033:0x7f31ddf315e0
[11776.084789] RSP: 002b:00007ffd887ca068 EFLAGS: 00010202
[11776.084793] RAX: 00007f31de13c350 RBX: 00000000040be3f0 RCX: 000000000283da60
[11776.084795] RDX: 0000000000000001 RSI: 00000000040be3f0 RDI: 00000000040be3f0
[11776.084797] RBP: 00007f32d3fca1e0 R08: 0000000005679250 R09: 0000000000000020
[11776.084799] R10: 00000000058fc1b0 R11: 0000000004b9ac50 R12: 0000000000000000
[11776.084801] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000
[11776.084806] QXcbEventReader  D    0 13268  13245 0x00000004
[11776.084810] Call Trace:
[11776.084817]  __schedule+0x28e/0x880
[11776.084823]  schedule+0x2c/0x80
[11776.084827]  rwsem_down_write_failed_killable+0x25a/0x490
[11776.084832]  call_rwsem_down_write_failed_killable+0x17/0x30
[11776.084836]  ? call_rwsem_down_write_failed_killable+0x17/0x30
[11776.084842]  down_write_killable+0x2d/0x50
[11776.084848]  do_mprotect_pkey+0xa9/0x2f0
[11776.084854]  SyS_mprotect+0x13/0x20
[11776.084859]  system_call_fast_compare_end+0xc/0x97
[11776.084861] RIP: 0033:0x7f32d1f7c057
[11776.084863] RSP: 002b:00007f32cbb8c8d8 EFLAGS: 00000206 ORIG_RAX: 000000000000000a
[11776.084867] RAX: ffffffffffffffda RBX: 00007f32c4000020 RCX: 00007f32d1f7c057
[11776.084869] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 00007f32c4024000
[11776.084871] RBP: 00000000000000c5 R08: 00007f32c4000000 R09: 0000000000024000
[11776.084872] R10: 00007f32c4024000 R11: 0000000000000206 R12: 00000000000000a0
[11776.084874] R13: 00007f32c4022f60 R14: 0000000000001000 R15: 00000000000000e0
[11776.084906] OOM killer enabled.
[11776.084907] Restarting tasks ... done.
[11776.289655] PM: suspend exit
[11776.459624] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11776.469521] rfkill: input handler disabled
[11776.978733] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11777.038879] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11778.022062] wlp1s0: authenticate with 50:8f:4c:82:4d:dd
[11778.033155] wlp1s0: send auth to 50:8f:4c:82:4d:dd (try 1/3)
[11778.038522] wlp1s0: authenticated
[11778.041511] wlp1s0: associate with 50:8f:4c:82:4d:dd (try 1/3)
[11778.059860] wlp1s0: RX AssocResp from 50:8f:4c:82:4d:dd (capab=0x431 status=0 aid=5)
[11778.060253] wlp1s0: associated
[11778.060308] IPv6: ADDRCONF(NETDEV_CHANGE): wlp1s0: link becomes ready
[11778.987669] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.117608] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.160930] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.784045] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.913668] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.961517] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch

Ritesh Raj Sarraf RESEARCHUT - Debian-Blog

Testing Ansible Playbooks With Vagrant

Planet Debian - Thu, 28/12/2017 - 12:00am

I use Ansible to automate the deployments of my websites (, Journal du hacker) and my applications (Feed2toot, Feed2tweet). In this blog post, I’ll describe my setup for testing my Ansible Playbooks locally on my laptop.

Why testing the Ansible Playbooks

I need a simple and fast way to test the deployments of my Ansible Playbooks locally on my laptop, especially at the beginning of writing a new Playbook, because deploying directly on the production server is both reeeeally slow… and risky for my services in production.

Instead of deploying on a remote server, I’ll deploy my Playbooks in a VirtualBox VM using Vagrant. This allows getting the result of a new modification quickly, iterating and fixing as fast as possible.

Disclaimer: I am not a professional programmer. Better solutions might exist; I’m only describing one way of testing Ansible Playbooks that I find both easy and efficient for my own use cases.

My process
  1. Begin writing the new Ansible Playbook
  2. Launch a fresh virtual machine (VM) and deploy the playbook on this VM using Vagrant
  3. Fix the issues, either in the playbook or in the application deployed by Ansible itself
  4. Relaunch the deployment on the VM
  5. If errors remain, go back to step 3. Otherwise destroy the VM, recreate it and deploy again, to test one last time with a fresh install
  6. If no error remains, tag the version of your Ansible Playbook and you’re ready to deploy in production
What you need

First, you need VirtualBox. If you use the Debian distribution, this link describes how to install it, either from the Debian repositories or from upstream.

Second, you need Vagrant. Why Vagrant? Because it acts as a kind of middleware between your development environment and your virtual machine, allowing programmatically reproducible operations and easily linking your deployments to the virtual machine. Install it with the following command:

# apt install vagrant

Setting up Vagrant

Everything about Vagrant lies in the file Vagrantfile. Here is mine:

Vagrant.require_version ">= 2.0.0"
Vagrant.configure(1) do |config|
  config.vm.box = "debian/stretch64"
  config.vm.provision "shell", inline: "apt install --yes git python3-pip"
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "site.yml"
    ansible.vault_password_file = "vault_password_file"
  end
end


  1. The 1st line defines what versions of Vagrant should execute your Vagrantfile.
  2. In the first loop of the file you could define the following operations for as many virtual machines as you wish (here just one).
  3. The 3rd line defines the official Vagrant image we’ll use for the virtual machine.
  4. The 4th line is really important: these are the apps missing from the base VM. Here we install git and python3-pip with apt.
  5. The next line indicates the start of the Ansible configuration.
  6. On the 6th line, we want a verbose output of Ansible.
  7. On the 7th line, we define the entry point of your Ansible Playbook.
  8. On the 8th line, if you use Ansible Vault to encrypt some files, just define here the file with your Ansible Vault passphrase.

When Vagrant launches Ansible, it’s going to launch something like:

$ ansible-playbook --inventory-file=/home/me/ansible/test-ansible-playbook/.vagrant/provisioners/ansible/inventory -v --vault-password-file=vault_password_file site.yml

Executing Vagrant

After writing your Vagrantfile, you need to launch your VM. It’s as simple as using the following command:

$ vagrant up

That’s a slow operation, because the VM will be launched, the additional apps you defined in the Vagrantfile will be installed and finally your Playbook will be deployed on it. You should use it sparingly.

Ok, now we’re really ready to iterate fast. Between your different modifications, in order to test your deployments fast and on a regular basis, just use the following command:

$ vagrant provision

Once your Ansible Playbook is finally ready, usually after lots of iterations (at least that’s my case), you should test it on a fresh install, because your different iterations may have modified your virtual machine and could trigger unexpected results.

In order to test it from a fresh install, use the following command:

$ vagrant destroy && vagrant up

That’s again a slow operation. You should use it when you’re pretty sure your Ansible Playbook is almost finished. After testing your deployment on a fresh VM, you’re now ready to deploy in production. Or at least better prepared :p

Possible improvements? Let me know

I find the setup described in this blog post quite useful for my use cases. I can iterate quite fast especially when I begin writing a new playbook, not only on the playbook but sometimes on my own latest apps, not yet ready to be deployed in production. Deploying on a remote server would be both slow and dangerous for my services in production.

I could use a continuous integration (CI) server, but that’s not the topic of this blog post. As said before, the goal is to iterate as fast as possible when beginning to write a new Ansible Playbook.


Committing, pushing to your Git repository and waiting for the execution of your CI tests is overkill at the beginning of an Ansible Playbook, when it’s full of errors waiting to be debugged one by one. I think CI is more useful later in the life of an Ansible Playbook, especially when different people work on it and you have a set of code quality rules to enforce. That’s only my opinion and it’s open to discussion; one more time, I’m not a professional programmer.

If you have better solutions to test Ansible Playbooks, or improvements to the one described here, let me know by writing a comment or by contacting me through my accounts on the social networks below; I’ll be delighted to listen to your improvements.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks


Carl Chenet debian – Carl Chenet's Blog

Translating my website to Finnish

Planet Debian - Mër, 27/12/2017 - 11:00md

I've now been living in Finland for two years, and I'm pondering a small project to translate my main website into Finnish.

Obviously if my content is solely Finnish it will become of little interest to the world - if my vanity lets me even pretend it is useful at the moment!

The traditional way to do this, with Apache, is to render pages in multiple languages and let the client(s) request their preferred version with Accept-Language:. Though it seems that many clients are terrible at this, and the whole approach is a mess. Pretending it works though we render pages such as:

index.html index.en.html

Then "magic happens", such that the right content is served. I can then do extra-things, like add links to "English" or "Finnish" in the header/footers to let users choose.
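The "magic" is just a filename lookup; as an illustration (this is not Apache itself, and pick_page is a hypothetical helper written only for this sketch), the convention can be mimicked in a few lines of shell:

```shell
# Hedged sketch: a poor man's version of Apache's MultiViews lookup.
# pick_page returns the language-specific page when it exists,
# falling back to the default index.html otherwise.
pick_page() {
    lang="$1"
    for cand in "index.$lang.html" "index.html"; do
        if [ -e "$cand" ]; then
            echo "$cand"
            return 0
        fi
    done
    return 1
}

dir=$(mktemp -d)
cd "$dir"
touch index.html index.fi.html

pick_page fi   # prints index.fi.html
pick_page sv   # no Swedish page, so it falls back to index.html
```

A real server would additionally parse the quality values in the Accept-Language header rather than taking a single language code.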

Unfortunately I have an immediate problem! I host a bunch of websites on a single machine and I don't want to allow a single site compromise to affect other sites. To do that I run each website under its own Unix user. For example I have the website "" running as the "s-fi" user, and my blog runs as "s-blog", or "s-blogfi":

root@www ~ # ps -ef | egrep '(s-blog|s-fi)'
s-blogfi /usr/sbin/lighttpd -f /srv/ -D
s-blog   /usr/sbin/lighttpd -f /srv/ -D
s-fi     /usr/sbin/lighttpd -f /srv/ -D

There you can see the Unix user, and the per-user instance of lighttpd which hosts the website. Each instance binds to a high-port on localhost, and I have a reverse proxy listening on the public IP address to route incoming connections to the appropriate back-end instance.
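The front-end routing can be done with lighttpd's mod_proxy; a minimal fragment might look like the following (the hostnames and ports here are hypothetical, the real setup will differ):

```
# front-end lighttpd, listening on the public IP address;
# each $HTTP["host"] block routes to one per-user back-end instance
server.modules += ( "mod_proxy" )

$HTTP["host"] == "blog.example.fi" {
    proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 2001 ) ) )
}
$HTTP["host"] == "www.example.fi" {
    proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 2002 ) ) )
}
```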

I used to use thttpd but switched to lighttpd to allow CGI scripts to be used - some of my sites are slightly/mostly dynamic.

Unfortunately lighttpd doesn't support multiviews without some Lua hacks which will require rewriting - as the supplied example only handles Accept rather than the language-header I want.

It seems my simplest solution is to switch from having lighttpd on the back-end to running apache2 instead, but I've not yet decided which way to jump.

Food for thought, anyway.

hyvää joulua! (Merry Christmas!)

Steve Kemp Steve Kemp's Blog

Debian-Med bug squashing

Planet Debian - Mar, 26/12/2017 - 12:16md

The Debian Med Advent Calendar was again really successful this year. As announced on the mailinglist, this year the second highest number of bugs has been closed during that bug squashing:

year  bugs closed
2011   63
2012   28
2013   73
2014    5
2015  150
2016   95
2017  105

Well done everybody who participated!

alteholz » planetdebian

Dockerizing Compiled Software

Planet Debian - Mar, 26/12/2017 - 8:00pd

I recently went through a stint of closing a huge number of issues in the docker-library/php repository, and one of the oldest (and longest) discussions was related to installing dependencies for compiling extensions, and I wrote a semi-long comment explaining how I do this in a general way for any software I wish to Dockerize.

I’m going to copy most of that comment here and perhaps expand a little bit more in order to have a better/cleaner place to link to!

The first step I take is to write the naïve version of the Dockerfile: download the source, run ./configure && make etc, clean up. I then try building my naïve creation, and in doing so hope for an error message. (yes, really!)

The error message will usually take the form of something like error: could not find "xyz.h" or error: libxyz development headers not found.

If I’m building in Debian, I’ll hit (replacing “xyz.h” with the name of the header file from the error message), or even just Google something like “xyz.h debian”, to figure out the name of the package I require.

If I’m building in Alpine, I’ll use to perform a similar search.

The same works to some extent for “libxyz development headers”, but in my experience Google works better for those since different distributions and projects will call these development packages by different names, so sometimes it’s a little harder to figure out exactly which one is the “right” one to install.

Once I’ve got a package name, I add that package name to my Dockerfile, rinse, and repeat. Eventually, this usually leads to a successful build. Occasionally I find that some library either isn’t in Debian or Alpine, or isn’t new enough, and I’ve also got to build it from source, but those instances are rare in my own experience – YMMV.
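The error-message-driven part of that loop can be partly scripted; here's a hedged sketch (the error text below is made up for illustration) that pulls the missing header name out of a compiler error, ready to feed to a package search:

```shell
# Hedged sketch: extract the missing header from a typical compiler error.
# $err is a made-up example of what ./configure && make might print.
err='fatal error: libxml/parser.h: No such file or directory'

hdr=$(printf '%s\n' "$err" | sed -n 's/.*fatal error: \([^:]*\): No such file.*/\1/p')
echo "$hdr"    # prints libxml/parser.h

# On Debian one could then run, e.g.:
#   apt-file search "$hdr"
# which points at the -dev package shipping that header.
```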

I’ll also often check the source for the Debian (via or Alpine (via package of the software I’m looking to compile, especially paying attention to Build-Depends (ala php7.0=7.0.26-1’s debian/control file) and/or makedepends (ala php7’s APKBUILD file) for package name clues.
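Mining Build-Depends can be scripted too; a small sketch (using a minimal made-up debian/control file, not php7.0's real one) that turns the field into a bare package list:

```shell
# Hedged sketch: pull the package names out of a Build-Depends field.
dir=$(mktemp -d)
cat > "$dir/control" <<'EOF'
Source: example
Build-Depends: debhelper (>= 10), libxml2-dev, zlib1g-dev
EOF

# strip version constraints and surrounding whitespace,
# printing one package name per line
sed -n 's/^Build-Depends: //p' "$dir/control" \
    | tr ',' '\n' \
    | sed 's/ *([^)]*)//; s/^ *//'
```

This prints debhelper, libxml2-dev and zlib1g-dev, one per line; a real control file may also carry architecture qualifiers and alternatives (`|`) that this sketch ignores.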

Personally, I find this sort of detective work interesting and rewarding, but I realize I’m probably a bit of a unique creature. Another good technique I use occasionally is to determine whether anyone else has already Dockerized the thing I’m trying to, so I can simply learn directly from their Dockerfile which packages I’ll need to install.

For the specific case of PHP extensions, there’s almost always someone who’s already figured out what’s necessary for this or that module, and all I have to do is some light detective work to find them.

Anyways, that’s my method! Hope it’s helpful, and happy hunting!

Tianon Gravi Tianon's Ramblings

Salsa batch import

Planet Debian - Hën, 25/12/2017 - 4:43md

Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done automated. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:

$ cat salsa-import
#!/bin/sh

set -eux

PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL=""
ALIOTH_GROUP="collab-maint"
SALSA_URL=""
SALSA_NAMESPACE="2" # 2 is "debian"
SALSA_TOKEN="yourcryptictokenhere"

curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"

This will create the GitLab project in the chosen namespace, and import the repository from Alioth.

To get the namespace id, use something like:

curl | jq . | less

Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:

for f in *.git; do sh salsa-import $f; done

Christoph Berg Myon's Debian Blog

Kitten Block equivalent for Firefox 57

Planet Debian - Mar, 19/12/2017 - 1:00pd

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add and, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.
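For reference, uBlock Origin's static filter syntax for blocking a whole site is one hostname-anchored rule per line; with hypothetical domains standing in for the real ones, the "My filters" entries look like:

```
||tabloid-one.example^
||tabloid-two.example^
```

The `||` anchors the rule to the hostname (covering subdomains too) and the `^` marks the end of the domain part.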

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

Colin Watson Colin Watson's blog

littler 0.3.3

Planet Debian - Dje, 17/12/2017 - 5:37md

The fourth release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. Last but not least it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few examples are highlighted at the Github repo.

This release brings a few new examples scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)
  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

Clive Johnston: Love KDE software? Show your love by donating today

Planet Ubuntu - Pre, 15/12/2017 - 11:16md

It is the season of giving and if you use KDE software, donate to KDE.  Software such as Krita, Kdenlive, KDE Connect, Kontact, digiKam, the Plasma desktop and many many more are all projects under the KDE umbrella.

KDE have launched a fund drive running until the end of 2017.  If you want to help make KDE software better, please consider donating.  For more information on what KDE will do with any money you donate, please go to

Matthew Helmke: Learn Java the Easy Way

Planet Ubuntu - Pre, 15/12/2017 - 4:53md

This is an enjoyable introduction to programming in Java by an author I have enjoyed in the past.

Learn Java the Easy Way: A Hands-On Introduction to Programming was written by Dr. Bryson Payne. I previously reviewed his book Teach Your Kids to Code, which is Python-based.

Learn Java the Easy Way covers all the topics one would expect, from development IDEs (it focuses heavily on Eclipse and Android Studio, which are both reasonable, solid choices) to debugging. In between, the reader receives clear explanations of how to perform calculations, manipulate text strings, use conditions and loops, create functions, along with solid and easy-to-understand definitions of important concepts like classes, objects, and methods.

Java is taught systematically, starting with simple and moving to complex. We first create a simple command-line game, then we create a GUI for it, then we make it into an Android app, then we add menus and preference options, and so on. Along the way, new games and enhancement options are explored, some in detail and some in end-of-chapter exercises designed to give more confident or advancing students ideas for pushing themselves further than the book’s content. I like that.

Side note: I was pleasantly amused to discover that the first program in the book is the same as one that I originally wrote in 1986 on a first-generation Casio graphing calculator, so I would have something to kill time when class lectures got boring.

The pace of the book is good. Just as I began to feel done with a topic, the author moved to something new. I never felt like details were skipped and I also never felt like we were bogged down with too much detail, beyond what is needed for the current lesson. The author has taught computer science and programming for nearly 20 years, and it shows.

Bottom line: if you want to learn Java, this is a good introduction that is clearly written and will give you a nice foundation upon which you can build.

Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017

Planet Ubuntu - Pre, 15/12/2017 - 3:15md

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for November).
  • Thorsten Alteholz did 13 hours.
About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain to our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours did not change at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file 35 (we’re a bit behind in CVE triaging apparently).

Thanks to our sponsors

New sponsors are in bold.


Dimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?

Planet Ubuntu - Pre, 15/12/2017 - 10:09pd
Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.

If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

A second X server on vt8, running a different Debian suite, using systemd-nspawn

Planet Debian - Pre, 15/12/2017 - 1:18pd
Two tensions
  1. Sometimes the contents of the Debian archive isn’t yet sufficient for working in a software ecosystem in which I’d like to work, and I want to use that ecosystem’s package manager which downloads the world into $HOME – e.g. stack, pip, lein and friends.

    But I can’t use such a package manager when $HOME contains my PGP subkeys and other valuable files, and my X session includes Firefox with lots of saved passwords, etc.

  2. I want to run Debian stable on my laptop for purposes of my day job – if I can’t open Emacs on a Monday morning, it’s going to be a tough week.

    But I also want to do Debian development on my laptop, and most of that’s a pain without either Debian testing or Debian unstable.

The solution

Have Propellor provision and boot a systemd-nspawn(1) container running Debian unstable, and start a window manager in that container with $DISPLAY pointing at an X server in vt8. Wooo!

In more detail:

  1. Laptop runs Debian stable. Main account is spwhitton.
  2. Achieve isolation from /home/spwhitton by creating a new user account, spw, that can’t read /home/spwhitton. Also, in X startup scripts for spwhitton, run xhost -local:.
  3. debootstrap a Debian unstable chroot into /var/lib/container/develacc.
  4. Install useful desktop things like task-british-desktop into /var/lib/container/develacc.
  5. Boot /var/lib/container/develacc as a systemd-nspawn container called develacc.
  6. dm-tool switch-to-greeter to start a new X server on vt8. Login as spw.
  7. Propellor installs a script enter-develacc which uses nsenter(1) to run commands in the develacc container. Create a further script enter-develacc-i3 which does

    /usr/local/bin/enter-develacc sh -c "cd ~spw; DISPLAY=$1 su spw -c i3"
  8. Finally, /home/spw/.xsession starts i3 in the chroot pointed at vt8’s X server:

    sudo /usr/local/bin/enter-develacc-i3 $DISPLAY
  9. Phew. May now pip install foo. And Ctrl-Alt-F7 to go back to my secure session. That session can read and write /home/spw, so I can dgit push etc.

The Propellor configuration

develaccProvisioned :: Property (HasInfo + DebianLike)
develaccProvisioned = propertyList "develacc provisioned" $ props
    & User.accountFor (User "spw")
    & Dotfiles.installedFor (User "spw")
    & User.hasDesktopGroups (User "spw")
    & withMyAcc "Sean has 'spw' group"
        (\u -> tightenTargets $ User.hasGroup u (Group "spw"))
    & withMyHome "Sean's homedir chmodded"
        (\h -> tightenTargets $ File.mode h 0O0750)
    & "/home/spw" `File.mode` 0O0770
    & "/etc/sudoers.d/spw" `File.hasContent`
        ["spw ALL=(root) NOPASSWD: /usr/local/bin/enter-develacc-i3"]
    & "/usr/local/bin/enter-develacc-i3" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "echo \"$1\" | grep -q -E \"^:[0-9.]+$\" || exit 1"
        , ""
        , "/usr/local/bin/enter-develacc sh -c \\"
        , "\t\"cd ~spw; DISPLAY=$1 su spw -c i3\""
        ]
    & "/usr/local/bin/enter-develacc-i3" `File.mode` 0O0755
    -- we have to start xss-lock outside of the container in order that it
    -- can interface with host logind
    & "/home/spw/.xsession" `File.hasContent`
        [ "if [ -e \"$HOME/local/wallpaper.png\" ]; then"
        , "    xss-lock -- i3lock -i $HOME/local/wallpaper.png &"
        , "else"
        , "    xss-lock -- i3lock -c 3f3f3f -n &"
        , "fi"
        , "sudo /usr/local/bin/enter-develacc-i3 $DISPLAY"
        ]
    & Systemd.nspawned develAccChroot
    & "/etc/network/if-up.d/develacc-resolvconf" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "cp -fL /etc/resolv.conf \\"
        , "\t/var/lib/container/develacc/etc/resolv.conf"
        ]
    & "/etc/network/if-up.d/develacc-resolvconf" `File.mode` 0O0755
  where
    develAccChroot = Systemd.debContainer "develacc" $ props
        -- Prevent propellor passing --bind=/etc/resolv.conf which
        -- - won't work when system first boots as WLAN won't be up yet,
        --   so /etc/resolv.conf is a dangling symlink
        -- - doesn't keep /etc/resolv.conf up-to-date as I move between
        --   wireless networks
        ! Systemd.resolvConfed
        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.suiteAvailablePinned Experimental 1
        -- use host apt cacher (we assume I have that on any system with
        -- develaccProvisioned)
        & Apt.proxy "http://localhost:3142"
        & Apt.installed
            [ "i3"
            , "task-xfce-desktop"
            , "task-british-desktop"
            , "xss-lock"
            , "emacs"
            , "caffeine"
            , "redshift-gtk"
            , "gnome-settings-daemon"
            ]
        & Systemd.bind "/home/spw"
        -- note that this won't create /home/spw because that is
        -- bind-mounted, which is what we want
        & User.accountFor (User "spw")
        -- ensure that spw inside the container can read/write ~spw
        & scriptProperty
            [ "usermod -u $(stat --printf=\"%u\" /home/spw) spw"
            , "groupmod -g $(stat --printf=\"%g\" /home/spw) spw"
            ] `assume` NoChange

Comments

I first tried using a traditional chroot. I bound lots of /dev into the chroot and then tried to start lightdm on vt8. This way, the whole X server would be within the chroot; this is in a sense more straightforward and there is not the overhead of booting the container. But lightdm refuses to start.

It might have been possible to work around this, but after reading a number of reasons why chroots are less good under systemd as compared with sysvinit, I thought I’d try systemd-nspawn, which I’ve used before and rather like in general. I couldn’t get lightdm to start inside that, either, because systemd-nspawn makes it difficult to mount enough of /dev for X servers to be started. At that point I realised that I could start only the window manager inside the container, with the X server started from the host’s lightdm, and went from there.

The security isn’t that good. You shouldn’t be running anything actually untrusted, just stuff that’s semi-trusted.

  • chmod 750 /home/spwhitton, xhost -local: and the argument validation in enter-develacc-i3 are pretty much the extent of the security here. The containerisation is to get Debian sid on a Debian stable machine, not for isolation

  • lightdm still runs X servers as root even though it’s been possible to run them as non-root in Debian for a few years now (there’s a wishlist bug against lightdm)

I now have a total of six installations of Debian on my laptop’s hard drive … four traditional chroots, one systemd-nspawn container and of course the host OS. But this is easy to manage with propellor!


Screen locking is weird because logind sessions aren’t shared into the container. I have to run xss-lock in /home/spw/.xsession before entering the container, and the window manager running in the container cannot have a keybinding to lock the screen (as it does in my secure session). To lock the spw X server, I have to shut my laptop lid, or run loginctl lock-sessions from my secure session, which requires entering the root password.

Sean Whitton Notes from the Library


Subscribe to AlbLinux agreguesi