
Feed aggregator

4.16.11: stable

Linux Kernel - Tue, 22/05/2018 - 6:56pm
Version: 4.16.11 (stable) Released: 2018-05-22 Source: linux-4.16.11.tar.xz PGP Signature: linux-4.16.11.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-4.16.11

4.14.43: longterm

Linux Kernel - Tue, 22/05/2018 - 6:54pm
Version: 4.14.43 (longterm) Released: 2018-05-22 Source: linux-4.14.43.tar.xz PGP Signature: linux-4.14.43.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-4.14.43

4.9.102: longterm

Linux Kernel - Tue, 22/05/2018 - 4:58pm
Version: 4.9.102 (longterm) Released: 2018-05-22 Source: linux-4.9.102.tar.xz PGP Signature: linux-4.9.102.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-4.9.102

4.17-rc6: mainline

Linux Kernel - Mon, 21/05/2018 - 12:31am
Version: 4.17-rc6 (mainline) Released: 2018-05-20 Source: linux-4.17-rc6.tar.gz Patch: full (incremental)

next-20180517: linux-next

Linux Kernel - Thu, 17/05/2018 - 9:04am
Version: next-20180517 (linux-next) Released: 2018-05-17

4.4.132: longterm

Linux Kernel - Wed, 16/05/2018 - 10:06am
Version: 4.4.132 (longterm) Released: 2018-05-16 Source: linux-4.4.132.tar.xz PGP Signature: linux-4.4.132.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-4.4.132

3.18.109: longterm

Linux Kernel - Wed, 16/05/2018 - 10:05am
Version: 3.18.109 (EOL) (longterm) Released: 2018-05-16 Source: linux-3.18.109.tar.xz PGP Signature: linux-3.18.109.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-3.18.109

Peter Hutterer: X server pointer acceleration analysis - part 4

Planet GNOME - Thu, 10/05/2018 - 7:37am

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In the first three parts, I covered the X server and synaptics pointer acceleration curves and how libinput compares to the X server pointer acceleration curve. In this post, I will compare libinput to the synaptics acceleration curve.

Comparison of synaptics and libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the one used for touchpads. So let's compare the synaptics curve with the libinput curve at the default configurations:


Synaptics vs libinput's touchpad profile
But this one doesn't tell the whole story, because libinput's touchpad acceleration actually changes once we get faster. So here are the same two curves, but this time with the range extended up to 1000mm/s.
Synaptics vs libinput's touchpad profile (full range)
These two graphs show that libinput is both very different and similar. Both curves have an acceleration factor of less than 1 for the majority of speeds, i.e. they both decelerate the touchpad more than they accelerate it. synaptics sticks to two factors with a short transition curve between them; libinput has a short deceleration curve, and its plateau is the same as or lower than synaptics' for the most part. Once the threshold is hit at around 250 mm/s, libinput's acceleration keeps increasing until it hits a maximum much later.

So, for anything under ~20mm/s, libinput should be the same as synaptics (ignoring the <7mm/s deceleration), and for anything less than 250mm/s, libinput should be slower. I say "should be" because that is not actually the case: synaptics is slower, so I suspect the server's scaling slows synaptics down even further. Hacking around in the libinput code, I found that moving libinput's baseline to 0.2 matches the synaptics cursor's speed. However, AFAIK that scaling depends on the screen size, so your mileage may vary.
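
The general curve shape described above, a short deceleration dip, a flat plateau, then a rise to a maximum past a threshold, can be sketched as a piecewise function. This is an illustration only; the constants are invented for the sketch and are not libinput's actual values:

```python
# Illustrative sketch of a touchpad-style pointer acceleration profile:
# deceleration at very low speeds, a constant plateau, then a linear ramp
# past a threshold up to a maximum factor. Constants are made up.

def accel_factor(speed_mm_s, plateau=0.4, threshold=250.0, maximum=2.0):
    if speed_mm_s < 7.0:
        # decelerate very slow movements, ramping up towards the plateau
        return plateau * max(speed_mm_s / 7.0, 0.1)
    if speed_mm_s < threshold:
        return plateau                      # constant factor below threshold
    # linear ramp from the plateau up to the maximum after the threshold
    ramp = (speed_mm_s - threshold) / threshold
    return min(plateau + ramp * (maximum - plateau), maximum)
```

Plotting this function over 0-1000mm/s reproduces the qualitative shape of the graphs, which is all it is meant to do.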

Comparing configuration settings

Let's overlay the libinput speed toggles. In Part 2 we saw that the synaptics toggles are open-ended, so it's a bit hard to pick a specific set to compare against. I'll be using the same combined configuration options as in the diagram there.


Synaptics configurations vs libinput speeds
And we need the diagram from 0-1000mm/s as well.
Synaptics configurations vs libinput speeds
There isn't much I can say here in direct comparison; the curves are quite different and the synaptics curves vary greatly with the configuration options (even though the shape remains the same).

Analysis

It's fairly obvious that the acceleration profiles are very different once we depart from the default settings. Most notably, only libinput's slowest speed setting matches the 0.2 speed that is the synaptics default setting. In other words, if your touchpad is too fast compared to synaptics, it may not be possible to slow it down sufficiently. Likewise, even at the fastest speed setting, the baseline is well below the synaptics baseline for e.g. 0.6 [1], so if your touchpad is too slow, you may not be able to speed it up sufficiently (at least for low speeds). That problem won't exist for the maximum acceleration factor; the main question there is simply whether the factors are too high. Answer: I don't know.

So the base speed of the touchpad in libinput needs a higher range, that's IMO a definitive bug that I need to work on. The rest... I don't know. Let's see how we go.

[1] A configuration I found suggested in some forum when googling for MinSpeed, so let's assume there's at least one person out there using it.

Peter Hutterer: X server pointer acceleration analysis - part 3

Planet GNOME - Thu, 10/05/2018 - 5:47am

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In Part 1 and Part 2 I showed the X server acceleration code as used by the evdev and synaptics drivers. In this part, I'll show how it compares against libinput.

Comparison to libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the default one used for mice. A discussion of the touchpad acceleration curve comes later. So, back to the graph of the simple profile. Let's overlay this with the libinput pointer acceleration curve:

Classic vs libinput's profile
Turns out libinput's pointer acceleration curve, which was loosely modeled after the xserver behaviour, does roughly match that behaviour. Note that libinput normalizes to 1000dpi (provided MOUSE_DPI is set correctly) and thus the curves only match this way for 1000dpi devices.

libinput's deceleration is slightly different but I doubt it is really noticeable. The plateau of no acceleration is virtually identical, i.e. at slow speeds libinput moves like the xserver's pointer does. Likewise for speeds above ~33mm/s, libinput and the server accelerate by the same amount. The actual curve is slightly different. It is a linear curve (I doubt that's noticeable) and it doesn't have that jump in it. The xserver acceleration maxes out at roughly 20mm/s. The only difference in acceleration is for the range of 10mm/s to 33mm/s.

30mm/s is still a relatively slow movement (just move your mouse by 30mm within a second; it doesn't feel fast). This means that for all but slow movements, both the current server and libinput provide only a flat acceleration at whatever the maximum acceleration factor is set to.

Comparison of configuration options

The biggest difference libinput has to the X server is that it exposes a single knob of normalised continuous configuration (-1.0 == slowest, 1.0 == fastest). It relies on settings like MOUSE_DPI to provide enough information to map a device into that normalised range.

Let's look at the libinput speed settings and their effect on the acceleration profile (libinput 1.10.x).


libinput speed settings
libinput's speed setting is a combination of changing thresholds and accel at the same time. The faster you go, the sooner acceleration applies and the higher the maximum acceleration is. For very slow speeds, libinput provides deceleration. Noticeable here though is that the baseline speed is the same until we get to speed settings of less than -0.5 (where we have an effectively flat profile anyway). So up to the (speed-dependent) threshold, the mouse speed is always the same.
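
The idea that a single normalised setting moves threshold and maximum acceleration together can be sketched as follows. The mapping below is hypothetical, invented purely to illustrate the behaviour described above; it is not libinput's actual code:

```python
# Hypothetical sketch: map a normalised speed setting (-1.0 == slowest,
# 1.0 == fastest) to a pair of profile parameters. Faster settings mean
# acceleration kicks in sooner (lower threshold) and reaches a higher
# maximum factor. The constants here are illustrative only.

def profile_params(speed_setting):
    assert -1.0 <= speed_setting <= 1.0
    threshold_mm_s = 40.0 - 25.0 * speed_setting   # lower at faster settings
    max_accel = 2.0 + 1.5 * speed_setting          # higher at faster settings
    return threshold_mm_s, max_accel
```

A real implementation would feed these parameters into the acceleration curve; the point of the sketch is only that one knob changes both at once.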

Let's look at the comparison of libinput's speed setting to the accel setting in the simple profile:


Comparison of libinput speed and accel settings
Clearly obvious: libinput's range is a lot smaller than what the accel setting allows (that one is effectively unbounded). This obviously applies to the deceleration as well:
Comparison of libinput speed and deceleration
I'm not posting the threshold comparison, as Part 1 shows it does not affect the maximum acceleration factor anyway.

Analysis

So, where does this leave us? I honestly don't know. The curves are different, but the only paper I could find comparing acceleration curves is Casiez and Roussel's 2011 UIST paper. It provides a comparison of the X server acceleration with the Windows and OS X acceleration curves [1]. It shows quite a difference between the three systems, but the authors note that no specific acceleration curve is definitely superior. However, the most interesting bit here is that both the Windows and the OS X curves seem to apply constant acceleration (with very minor changes) rather than changing the curve shape.

Either way, there is one possible solution for libinput to implement: to change the base plateau with the speed. Otherwise libinput's acceleration curve is well defined for the configurable range. And a maximum acceleration factor of 3.5 is plenty for a properly configured mouse (generally anything above 3 is tricky to control). AFAICT, the main issues with pointer acceleration come from mice that either don't have MOUSE_DPI set or trackpoints which are, unfortunately, a completely different problem.

I'll probably also give the Windows/OS X approaches a try (i.e. same curve, different constant deceleration) and see how that goes. If it works well, that may be a solution because it's easier to scale into a large range. Otherwise, *shrug*, someone will have to come up with a better solution.
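
The "same curve, different constant deceleration" idea can be sketched like this. Both the placeholder curve and the setting-to-scale mapping are assumptions made for illustration, not anyone's actual implementation:

```python
# Sketch of a Windows/OS X-style scheme: keep one fixed acceleration curve
# and express the user speed setting as a constant multiplier on the gain.
# accel_curve() is a crude placeholder, not a real profile.

def accel_curve(speed_mm_s):
    return 1.0 if speed_mm_s < 30.0 else 2.0   # placeholder fixed curve

def factor(speed_mm_s, speed_setting):
    # map setting in [-1, 1] to a constant scale, e.g. 0.5x .. 1.5x
    scale = 1.0 + 0.5 * speed_setting
    return accel_curve(speed_mm_s) * scale
```

The curve shape never changes; only the constant scale does, which is what makes this approach easy to stretch over a large configuration range.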

[1] I've never been able to reproduce the same gain (== factor) but at least the shape and x axis seems to match.

PDP-8/e Replicated — Clocks And Logic

Planet Debian - Wed, 09/05/2018 - 2:58pm

This is, at long last, part 3 of the overview of my PDP-8/e replica project and offers some details of the low-level implementation.

I have mentioned that I build my PDP-8/e replica from the original schematics. The great thing about the PDP-8/e is that it is still built in discrete logic rather than around a microprocessor, meaning that schematics of the actual CPU logic are available instead of just programmer’s documentation. After all, with so many chips on multiple boards something is bound to break down sooner or later and technicians need schematics to diagnose and fix that [1]. In addition, there’s a maintenance manual that very helpfully describes the workings of every little part of the CPU with excerpts of the full schematics, but it has some inaccuracies and occasionally outright errors in the excerpts so the schematics are still indispensable.

Originally I wanted to design my own logic and use the schematics as just another reference. Since the front panel is a major part of the project and I want it to visually behave as close as possible to the real thing, I would have to duplicate the cycles exactly and generally work very close to the original design. I decided that at that point I might as well just reimplement the schematics.

However, some things can not be reimplemented exactly in the FPGA, other things are a bad fit and can be improved significantly with slight changes.

Clocks

Back in the day, the rule for digital circuits was multi-phase clocks of varying complexity, and the PDP-8/e is no exception in that regard. A cycle has four timing states of different lengths that each end on a timing pulse.

timing diagram from the "pdp8/e & pdp8/m small computer handbook 1972"

As can be seen, the timing states are active when they are at low voltage while the timing pulses are active high. There are plenty of quirks like this which I describe below in the Logic section.

In the PDP-8 the timing generator was implemented as a circular chain of shift registers with parallel outputs. At power on, these registers are reset to all 1 except for 0 in the first two bits. The shift operation is driven by a 20 MHz clock [2] and the two zeros then circulate while logic combinations of the parallel outputs generate the timing signals (and also core memory timing, not shown in the diagram above).
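
The circulating-zeros scheme can be modeled in a few lines of Python. This is a toy model to show the principle, not DEC's exact register width or layout:

```python
# Toy model of the PDP-8/e timing generator: a circular shift register
# holding two adjacent 0 bits in a field of 1s. Each 50 ns clock rotates
# the register one position; timing signals are decoded from the parallel
# outputs as the zeros circulate.

def shift(reg):
    return reg[-1:] + reg[:-1]   # rotate right by one bit per clock

reg = [0, 0] + [1] * 8           # reset state: zeros in the first two bits
states = [reg]
for _ in range(len(reg)):        # one full rotation
    reg = shift(reg)
    states.append(reg)
```

Every state contains exactly two zeros, and after a full rotation the register returns to its reset state, giving a fixed, repeating cycle from which the timing states and pulses are decoded.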

What happens with these signals is that the timing states, together with control signals decoded from instructions, select data outputs and paths, while the associated timing pulse combined with timing state and instruction signals triggers D type flip-flops to save these results and present them on their outputs until they are next triggered with different input data.

Relevant for D type flip-flops is the rising edge of their clock input. The length of the pulse does not matter as long as it is not shorter than a required minimum. For example the accumulator register needs to be loaded at TP3 during major state E for a few different instructions. Thus the AC LOAD signal is generated as TP3 and E and (TAD instruction or AND instruction or …) and that signal is used as the clock input for all twelve flip-flops that make up the accumulator register.

However, having flip-flops clocked off timing pulses that are combined with different amounts of logic creates differences between sample times, which in turn makes it hard to push that kind of design to high cycle frequencies. Basically all modern digital circuits are synchronous: all flip-flops are clocked directly off the same global clock and get triggered at the same time. Since of course not all flip-flops should get new values on every cycle, they have an additional enable input so that the rising clock edge will only register new data when enable is also true [3].

Naturally, FPGAs are tailored to this design paradigm. They have (a limited number of) dedicated clock distribution networks set apart from the regular signal routing resources, to provide low skew clock signals across the device. Groups of logic elements get only a very limited set of clock inputs for all their flip-flops. While it is certainly possible to run the same scheme as the original in an FPGA, it would be an inefficient use of resources and very likely make automated timing analysis difficult by requiring lots of manual specification of clock relationships between registers.

So while I do use 20 MHz as the base clock in my timing generator and generate the same signals in my design, I also provide this 20 MHz as the common clock to all logic. Instead of registers triggering on the timing pulse rising edges they get the timing pulses as enables. One difference resulting from that is registers aren’t triggered by the rising edge of a pulse anymore but will trigger on every clock cycle where the pulse is active. The original pulses are two clock cycles long and extend into the following time state so the correct data they picked up on the first cycle would be overwritten by wrong data on the second. I simply shortened the timing pulses to one clock cycle to adapt to this.
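
The clock-enable scheme described above can be sketched as a toy Python model (obviously not VHDL, and not the actual accumulator logic): the register samples its input only on cycles where the enable, i.e. the shortened one-cycle timing pulse, is active, and holds its value otherwise.

```python
# Toy model of a synchronous register with a clock enable: every call to
# clock() represents one rising edge of the common 20 MHz clock, and the
# enable input stands in for a one-cycle timing pulse.

class EnableRegister:
    def __init__(self, value=0):
        self.value = value

    def clock(self, data_in, enable):
        if enable:               # load only when the enable/pulse is active
            self.value = data_in
        return self.value        # otherwise hold the existing value

ac = EnableRegister()
ac.clock(0o7777, enable=False)   # pulse inactive: register holds
assert ac.value == 0
ac.clock(0o7777, enable=True)    # one-cycle pulse: register loads
assert ac.value == 0o7777
```

This also shows why the two-cycle original pulses had to be shortened: with the pulse active for two clock() calls, the second call would overwrite the value captured on the first.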

timing signals from simulaton showing long and fast cycle

To reiterate, this is all in the interest of matching the original timing exactly. Analysis by the synthesis tool shows that I could easily push the base clock well over twice the current speed, and that’s already with it assuming that everything has to settle within one clock cycle as I’ve not specified multicycle paths. Meaning I could shorten the timing states to a single cycle [4] for an overall more than 10× acceleration on the lowest speed grade of the low-end FPGA I’m using.

Logic

To save on logic, many parts with open collector outputs were used in the PDP-8. Instead of driving high or low voltage to represent zeros and ones, an open collector only drives either low voltage or leaves the line alone. Many outputs can then be simply connected together as they can’t drive conflicting voltages. Return to high voltage in the absence of outputs driving low is accomplished by a resistor to positive supply somewhere on the signal line.

The effect is that the connection of outputs itself forms a logic combination in that the signal is high when none of the gates drive low and it’s low when any number of gates drive low. Combining that with active low signalling, where a low voltage represents active or 1, the result is a logical OR combination of all outputs (called wired OR since no logic gates are involved).
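
The equivalence between an active-low wired OR and a plain logical OR can be demonstrated with a small truth-table sketch (a model of the electrical behaviour, not of any particular PDP-8 signal):

```python
# Model of an open-collector line with a pull-up resistor: the line is low
# (0) if any connected output pulls low, and high (1) if nobody drives it.

def wired_line(*driving_low):
    return 0 if any(driving_low) else 1

def asserted(level_on_line):
    return level_on_line == 0    # active low: 0 volts means "asserted"

# asserting any one source asserts the combined signal: a wired OR
assert asserted(wired_line(True, False))
assert asserted(wired_line(False, True))
assert not asserted(wired_line(False, False))
```

In active-high terms this is exactly `a or b`, which is why the wired connections can be replaced by explicit ORs in an FPGA implementation.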

The designers of the PDP-8/e made extensive use of that. The majority of signals are active low, marked with L after their name in the schematics. Some signals don’t have multiple sources and can be active high where it’s more convenient, those are marked with H. And then there are many signals that carry no indication at all and some that miss the indication in the maintenance manual just to make a reimplementer’s life more interesting.

excerpt from the PDP-8/e CPU schematic, generation of the accumulator load (AC LOAD L) signal can be seen on the right

As an example in this schematic, let’s look at AC LOAD L which triggers loading the accumulator register from the major registers bus. It’s a wired OR of two NAND outputs, both in the chip labeled E15, and with pull-up resistor R12 to +5 V. One NAND combines BUS STROBE and C2, the other TP3 and an OR in chip E7 of a bunch of instruction related signals. For comparison, here’s how I implemented it in VHDL:

ac_load <= (bus_strobe and not c2) or
           (TP3 and E and ir_TAD) or
           (TP3 and E and ir_AND) or
           (TP3 and E and ir_DCA) or
           (TP3 and F and ir_OPR);

FPGAs don’t have internal open collector logic [5] and any output of a logic gate must be the only one driving a particular line. As a result, all the wired ORs must be implemented with explicit ORs. Without the need for active low logic I consistently use active high everywhere, meaning that the logic in my implementation is mostly inverted compared to the real thing. The deviation of ANDing TP3 with every signal instead of just once with the result of the OR is again due to consistency: I use the “<timing> and <major state> and <instruction signal>” pattern a lot.

One difficulty with wired OR is that it is never quite obvious from a given section of the schematics what all inputs to a given gate are. You may have a signal X being generated on one page of the schematic and a line with signal X as an input to a gate on another page, but that doesn’t mean there isn’t something on yet another page also driving it, or that it isn’t also present on a peripheral port [6].

Some of the original logic is needed only for electrical reasons, such as buffers which are logic gates that do not combine multiple inputs but simply repeat their inputs (maybe inverted). Logic gate outputs can only drive so many inputs, so if one signal is widely used it needs buffers. Inverters are commonly used for that in the PDP-8. BUS STROBE above is one example, it is the inverted BUS STROBE L found on the backplane. Another is BTP3 (B for buffered) which is TP3 twice inverted.

Finally, some additional complexity is owed to the fact that the 8/e is made of discrete logic chips and that these have multiple gates in a package, for example the 7401 with four 2-input NAND gates with open collector outputs per chip. In designing the 8/e, logic was sometimes not implemented straightforwardly but as more complicated equivalents if it means unused gates in existing chips could be used rather than adding more chips.

Summary

I have started out saying that I build an exact PDP-8/e replica from the schematics. As I have detailed, that doesn’t involve just taking every gate and its connections from the schematic and writing it down in VHDL. I am changing the things that can not be directly implemented in an FPGA (like wired OR) and leaving out things that are not needed in this environment (such as buffers). Nevertheless, the underlying logic stays the same and as a result my implementation has the exact same timing and behaviour even in corner cases.

Ultimately all this only applies to the CPU and closely associated units (arithmetic and address extension). Moving out to peripheral hardware, the interface to the CPU may be the only part that could be implemented from original schematics. After all, where the magnetic tape drive interface in the original controlled the actual tape hardware, the equivalent in the replica project would access emulated tape storage.

This finally concludes the overview of my project. Its development hasn’t advanced as much as I expected around this time last year since I ended up putting the project aside for a long while. After returning to it, running the MAINDEC test programs revealed a bunch of things I forgot to implement or implemented wrong, which I had to fix. The optional Extended Arithmetic Element isn’t implemented yet; the Memory Extension & Time Share is now complete apart from some test failures I need to debug. It is now reaching a state where I consider the design nearly ready to be published.

  1. There are also test programs that exercise and check every single logic gate to help pinpoint a problem. Naturally they are also extremely helpful with verifying that a replica is in fact working exactly like the real thing.
  2. Thus 50 ns, one cycle of the 20 MHz clock, is the granularity at which these signals are created. The timing pulses, for example, are 100 ns long.
  3. This is simply implemented by presenting them their own output via a feedback path when enable is false, so that they reload their existing value on the clock edge.
  4. In fact I would be limited by the speed of the external SRAM I use and its 6 bit wide data connection, requiring two cycles for a 12 bit access.
  5. The IO pins that interface to the outside world generally have the capability to switch to a disconnected state at run time, allowing open collector and similar signaling.
  6. Besides the memory and expansion bus, the 8/e CPU also has a special interface to attach the Extended Arithmetic Element.
Andreas Bombe https://activelow.net/tags/pdo/ pdo on Active Low

Introducing Autodeb

Planet Debian - Wed, 09/05/2018 - 6:00am

Autodeb is a new service that will attempt to automatically update Debian packages to newer upstream versions or to backport them. Resulting packages will be distributed in apt repositories.

I am happy to announce that I will be developing this new service as part of Google Summer of Code 2018, with Lucas Nussbaum as a mentor.

The program is currently nearing the end of the “Community Bonding” period, where mentors and their mentees discuss the details of the projects and try to set objectives.

Main goal

Automatically create and distribute package updates/backports

This is the main objective of Autodeb. These unofficial packages can be an alternative for Debian users that want newer versions of software faster than it is available in Debian.

The results of this experiment will also be interesting when looking at backports. If successful, it could be that a large number of packages are automatically backportable, bringing even more options to stable (or even oldstable) users.

We also hope that the information produced by the system can be used by Debian Developers. For example, Debian Developers may be more tempted to support backports of their packages if they know that the backport already builds and that the autopkgtests pass.

Other goals

Autodeb will be composed of infrastructure capable of building and testing a large number of packages. We intend to build it with two secondary goals in mind:

Test packages built by developers before they are uploaded to the archive

We intend to add a self-service interface so that our testing infrastructure can be used for purposes other than automatically updating packages. This can empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. For more details on this, see my previous testeduploads proposal.

Archive rebuilds / modifying the build and test environment

We would like to allow building packages in a custom environment, for example with a new version of GCC or with a given set of packages. Ideally, this would also be a self-service interface where Developers can set up their environment and then upload packages for testing. Instead of individual package uploads, the input of packages to build could be a whole apt repository from which all source packages would be downloaded and rebuilt, with filters to select the packages to include or exclude.

What’s next

The next phase of Google Summer of Code is the coding period; it begins on May 14 and ends on August 6. However, there are a number of areas where work has already begun:

  1. Main repository: code for the master and the worker components of the service, written in golang.
  2. Debian packaging: the Debian packaging of autodeb. Contains scripts that will publish packages to pages.debian.net.
  3. Ansible scripts: this repository contains ansible scripts to provision the infrastructure at auto.debian.net.
Status update at DebConf

I have submitted a proposal for a talk on Autodeb at DebConf. By that time, the project should have evolved from idea to prototype and it will be interesting to discuss the things that we have learned:

  • How many packages can we successfully build?
  • How many of these packages fail tests?

If all goes well, it will also be an opportunity to officially present our self-service interface to the public so that the community can start using it to test packages.

In the meantime, feel free to get in touch with us by email, on OFTC at #autodeb, or via issues on salsa.

Alexandre Viau https://alexandreviau.net/blog/ Alexandre Viau’s blog

Raphaël Hertzog: My Free Software Activities in April 2018

Planet Ubuntu - Mon, 07/05/2018 - 11:17pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

pkg-security team

I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch from chkrootkit to avoid unexpected noise in quiet mode.

I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

Distro Tracker

While the pace slowed down, I continued to get merge requests. I merged two of them fixing newcomer bugs.

I reviewed a merge request suggesting to add a “team search” feature.

I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

Misc Debian work

Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.

Thanks

See you next month for a new summary of my activities.


Timo Jyrinki: Converting an existing installation to LUKS using luksipc - 2018 notes

Planet Ubuntu - Mon, 07/05/2018 - 1:08pm

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS over the weekend. This took under 1h altogether.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change thus was the / to be encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions to leave 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2, which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition
  • Mounted as following in /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them plus root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase prompt shows on your next boot but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and ran into exactly this.

Dylan McCall: My tiny file server with Ubuntu Core, Nextcloud and Syncthing

Planet Ubuntu - Sun, 06/05/2018 - 6:38am

My annual Dropbox renewal date was coming up, and I thought to myself “I’m working with servers all the time. I shouldn’t need to pay someone else for this.” I was also knee deep in a math course, so I felt like procrastinating.

I’m really happy with the result, so I thought I would explain it for anyone else who wants to do the same. Here’s what I was aiming for:

  • Safe, convenient archiving for big files.
  • Instant sync between devices for stuff I’m working on.
  • Access over LAN from home, and over the Internet from anywhere else.
  • Regular, encrypted offsite backups.
  • Compact, low power hardware that I can stick in a closet and forget about.
  • Some semblance of security, at least so a compromised service won’t put the rest of the system at risk.

The hardware

I dabbled with a BeagleBoard that I used for an embedded systems course, and I pondered a Raspberry Pi with a case. I decided against both of those, because I wanted something with a bit more wiggle room. And besides, I like having a BeagleBoard free to mess around with now and then.

In the end, I picked out an Intel NUC, and I threw in an old SSD and a stick of RAM:

It’s tiny, it’s quiet, and it looks okay too! (Just find somewhere to hide the power brick). My only real complaint is that the wifi hardware doesn’t work with older Linux kernels, but that wasn’t a big deal for my needs, and I’m sure it will work in the future.

The software

I installed Ubuntu Core 16, which is delightful. Installing it is a bit surprising for the uninitiated because there isn’t really an install process: you just clone the image to the drive you want to boot from and you’re done. It’s easier if you do this while the drive is connected to another computer. (I didn’t feel like switching around SATA cables in my desktop, so I needed to write a different OS to a flash drive, boot from that on the NUC, transfer the Ubuntu Core image to there, then dd that image to the SSD. Kind of weird for this use case).
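The roundabout transfer above boils down to the standard Ubuntu Core install step: clone the image onto the boot drive from whatever OS you can run. A sketch, where the image filename and /dev/sdX are placeholders (double-check the target device with lsblk first):

```shell
# Decompress the Ubuntu Core image and write it straight to the target drive.
# WARNING: this overwrites /dev/sdX entirely.
xzcat ubuntu-core-16-amd64.img.xz | sudo dd of=/dev/sdX bs=32M status=progress
sync
```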

Now that I figured out how to run it, I’ve been enjoying how this system is designed to minimize the time you need to spend with your device connected to a screen and keyboard like some kind of savage. There’s a simple setup process (configure networking, log in to your Ubuntu One account), and that’s it. You can bury the thing somewhere and SSH to it from now on. In fact, you’re pretty much forced to: you don’t even get a login prompt. Chances are you won’t need to SSH to the system anyway since it keeps itself up to date. As someone who obsesses over loose threads, I’m finding this all very

Although, with that in mind, one important thing: if you haven’t played with Ubuntu for a while, head over to login.ubuntu.com and make sure your SSH keys are up to date. The first time I set it up, I realized I had a bunch of obsolete SSH keys in my account and I had no way to reach the system from the laptop I was using. Fixing that meant changing Ubuntu Core’s writable files from another operating system. (I would love to know if there is a better way).

The other software

Okay, using Ubuntu Core is probably a bit weird when I want to run all these servers and I’m probably a little picky, but it’s so elegant! And, happily, there are Snap packages for both Nextcloud and Syncthing. I ended up using both.

I really like how files you can edit are tucked away in /writable. For this guide, I always refer to things by their full paths under /writable. I found thinking like that spared me getting lost in files that I couldn’t change, and it helped to emphasize the nature of this system.

DNS

Before I get to the fun stuff, there were some networking conundrums I needed to solve.

First, public DNS. My router has some buttons if you want to use a dynamic DNS service, but I just rolled my own thing. To start off, I added some records to my domain’s DNS pointing at my home IP address. My web host has an API for editing DNS records, so I set up dynamic DNS after everything else was working, but I will get to that further along.

Next, my router didn’t support hairpinning (or NAT Loopback), so requests to core.example.com were still resolving to my public IP address, which means way too many hops for sending data around. My ridiculous solution: I’ll run my own DNS server, darnit.

To get started, check the network configuration in /writable/system-data/etc/netplan/00-snapd-config.yaml. You’ll want to make sure the system requests a static IP address (I used 192.168.1.2) and uses its own nameservers. Mine looks like this:

network:
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses: [192.168.1.2/24, '2001:1::2/64']
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2

After changing the Netplan configuration, use sudo netplan generate to update the system.
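For what it’s worth, netplan generate only regenerates the backend configuration; where a newer netplan is available, netplan apply (or simply rebooting) is what actually brings the change up:

```shell
sudo netplan generate   # validate the YAML and rewrite backend config
sudo netplan apply      # apply immediately where supported; otherwise reboot
```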

For the actual DNS server, we can install an unofficial snap that provides dnsmasq:

$ snap install dnsmasq-escoand

You’ll want to edit /writable/system-data/etc/hosts so the service’s domains resolve to the device’s local IP address:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.2 core.example.com
fe80::96c6:91ff:fe1a:6581 core.example.com

Now it’s safe to go into your router’s configuration, reserve an IP address for this device, and set it as your DNS server:

And that solved it.
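You can also confirm the DNS piece on its own by querying the new server directly from another machine on the LAN (this assumes dig is installed there; core.example.com is this guide’s placeholder domain):

```shell
# Ask the NUC's dnsmasq directly; it should answer with the LAN address
dig +short core.example.com @192.168.1.2
```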

To check, run tracepath from another computer on your network and the result should be something simple like this:

$ tracepath core.example.com
 1?: [LOCALHOST]                  pmtu 1500
 1:  core.example.com             0.789ms reached
 1:  core.example.com             0.816ms reached
     Resume: pmtu 1500 hops 1 back 1

While you’re looking at the router, you may as well forward some ports, too. By default you need TCP ports 80 and 443 for Nextcloud, and 22000 for Syncthing.
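Once the ports are forwarded, a quick reachability test from outside your network (a phone hotspot works) can save debugging later; nc here is an assumption about what’s installed on the machine you test from:

```shell
# TCP 80/443 for Nextcloud, 22000 for Syncthing
nc -zv core.example.com 80
nc -zv core.example.com 443
nc -zv core.example.com 22000
```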

Nextcloud

The Nextcloud snap is fantastic. It already works out of the box: it adds a system service for its copy of Apache on port 80, and it comes with a bunch of scripts for setting up common things like SSL certificates. I wanted to use an external hard drive for its data store, so I needed to configure the mount point for that and grant the necessary permissions for the snap to access removable media.

Let’s set up that mount point first. These are configured with Systemd mount units, so we’ll want to create a file like /writable/system-data/etc/systemd/system/media-data1.mount. You need to tell it how to identify the storage device. (I always give them nice volume labels when I format them so it’s easy to use that). Note that the name of the unit file must correspond to the full name of the mount point:

[Unit]
Description=Mount unit for data1

[Mount]
What=/dev/disk/by-label/data1
Where=/media/data1
Type=ext4

[Install]
WantedBy=multi-user.target

One super cool thing here is you can start and stop the mount unit just like any other system service:

$ sudo systemctl daemon-reload
$ sudo systemctl start media-data1.mount
$ sudo systemctl enable media-data1.mount

Now let’s set up Nextcloud. The code repository for the Nextcloud snap has lots of documentation if you need.

$ snap install nextcloud
$ snap connect nextcloud:removable-media :removable-media
$ sudo snap run nextcloud.manual-install USERNAME PASSWORD
$ snap stop nextcloud

Before we do anything else we need to tell Nextcloud to store its data in /media/data1/nextcloud/, and allow access through the public domain from earlier. To do that, edit /writable/system-data/var/snap/nextcloud/current/nextcloud/config/config.php:

<?php
$CONFIG = array (
  'apps_paths' => array ( … ),
  …
  'trusted_domains' => array (
    0 => 'localhost',
    1 => 'core.example.com',
  ),
  'datadirectory' => '/media/data1/nextcloud/data',
  …
);

Move the existing data directory to the new location, and restart the service:

$ snap stop nextcloud
$ sudo mkdir /media/data1/nextcloud
$ sudo mv /writable/system-data/var/snap/nextcloud/common/nextcloud/data /media/data1/nextcloud/
$ snap start nextcloud

Now you can enable HTTPS. There is a lets-encrypt option (for letsencrypt.org), which is very convenient:

$ sudo snap run nextcloud.enable-https lets-encrypt -d
$ sudo snap run nextcloud.enable-https lets-encrypt

At this point you should be able to reach Nextcloud from another computer on your network, or remotely, using the same domain.
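One easy sanity check: status.php is a standard Nextcloud endpoint that reports version information without logging in, so a quick curl from anywhere should return a small JSON blob:

```shell
curl https://core.example.com/status.php
```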

Syncthing

If you aren’t me, you can probably stop here and use Nextcloud, but I decided Nextcloud wasn’t quite right for all of my files, so I added Syncthing to the mix. It’s like a peer to peer Dropbox, with a somewhat more geeky interface. You can link your devices by globally unique IDs, and they’ll find the best way to connect to each other and automatically sync files between your shared folders. It’s very elegant, but I wasn’t sure about using it without some kind of central repository. This way my systems will sync between each other when they can, but there’s one central device that is always there, ready to send or receive the newest versions of everything.

Syncthing has a snap, but it is a bit different from Nextcloud, so the package needed a few extra steps. Syncthing, like Dropbox, runs one instance for each user, instead of a monolithic service that serves many users. So, it doesn’t install a system service of its own, and we’ll need to figure that out. First, let’s install the package:

$ snap install syncthing
$ snap connect syncthing:home :home
$ snap run syncthing

Once you’re satisfied, you can stop syncthing. That isn’t very useful yet, but we needed to run it once to create a configuration file.

So, first, we need to give syncthing a place to put its data, replacing “USERNAME” with your system username:

$ sudo mkdir /media/data1/syncthing
$ sudo chown USERNAME:USERNAME /media/data1/syncthing

Unfortunately, you’ll find that the syncthing application doesn’t have access to /media/data1, and its snap doesn’t support the removable-media interface, so it’s limited to your home folder. But that’s okay, we can solve this by creating a bind mount. Let’s create a mount unit in /writable/system-data/etc/systemd/system/home-USERNAME-syncthing.mount:

[Unit]
Description=Mount unit for USERNAME-syncthing

[Mount]
What=/media/data1/syncthing/USERNAME
Where=/home/USERNAME/syncthing
Type=none
Options=bind

[Install]
WantedBy=multi-user.target

(If you’re wondering, yes, systemd figures out that it needs to mount media-data1 before it can create this bind mount, so don’t worry about that).

$ sudo systemctl daemon-reload
$ sudo systemctl start home-USERNAME-syncthing.mount
$ sudo systemctl enable home-USERNAME-syncthing.mount

Now update Syncthing’s configuration and tell it to put all of its shared folders in that directory. Open /home/USERNAME/snap/syncthing/common/syncthing/config.xml in your favourite editor, and make sure you have something like this:

<configuration version="27">
  <folder id="default" label="Default Folder" path="/home/USERNAME/syncthing/Sync" type="readwrite" rescanIntervalS="60" fsWatcherEnabled="false" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">
    …
  </folder>
  <device id="…" name="core.example.com" compression="metadata" introducer="false" skipIntroductionRemovals="false" introducedBy="">
    <address>dynamic</address>
    <paused>false</paused>
    <autoAcceptFolders>false</autoAcceptFolders>
  </device>
  <gui enabled="true" tls="false" debugging="false">
    <address>192.168.1.2:8384</address>
    …
  </gui>
  <options>
    <defaultFolderPath>/home/USERNAME/syncthing</defaultFolderPath>
  </options>
</configuration>

With those changes, Syncthing will create new folders inside /home/USERNAME/syncthing, you can move the default “Sync” folder there as well, and its web interface will be accessible over your local network at http://192.168.1.2:8384. (I’m not enabling TLS here, for two reasons: it’s just the local network, and Nextcloud enables HSTS for the core.example.com domain, so things get confusing when you try to access it like that).

You can try snap run syncthing again, just to be sure.

Now we need to add a service file so Syncthing runs automatically. We could create a service with the User field filled in so it always runs as a certain user, but for this type of service it doesn’t hurt to set it up as a template unit. Happily, Syncthing’s documentation provides a unit file we can borrow, so we don’t need to do much thinking here. You’ll need to create a file called /writable/system-data/etc/systemd/system/syncthing@.service:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target

[Service]
User=%i
ExecStart=/usr/bin/snap run syncthing -no-browser -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=multi-user.target

Note that our Exec line is a little different than theirs, since we need it to run syncthing under the snap program.

$ sudo systemctl daemon-reload
$ sudo systemctl start syncthing@USERNAME.service
$ sudo systemctl enable syncthing@USERNAME.service

And there you have it, we have Syncthing! The web interface for the Ubuntu Core system is only accessible over your local network, but assuming you forwarded port 22000 on your router earlier, you should be able to sync with it from anywhere.

If you install the Syncthing desktop client (snap install syncthing in Ubuntu, dnf install syncthing-gtk in Fedora), you’ll be able to connect your other devices to each other. On each device that you connect to this one, make sure you set core.example.com as an Introducer. That way they will discover each other through it, which saves a bit of time.

Once your devices are all connected, it’s a good idea to go to Syncthing’s web interface at http://192.168.1.2:8384 and edit the settings for each device. You can enable “Auto Accept” so whenever a device shares a new folder with core.example.com, it will be accepted automatically.

Nextcloud + Syncthing

There is one last thing I did here. Syncthing and Nextcloud have some overlap, but I found myself using them for pretty different sorts of tasks. I use Nextcloud for media files and archives that I want to store on a single big hard drive, and occasionally stream over the network; and I use Syncthing for files that I want to have locally on every device.

Still, it would be nice if I could have Nextcloud’s web UI and sharing options with Syncthing’s files. In theory we could bind mount Syncthing’s data directory into Nextcloud’s data directory, but the Nextcloud and Syncthing services run as different users. So, that probably won’t go particularly well.

Instead, it works quite well to mount Syncthing’s data directory using SSH.

First, in Nextcloud, go to the Apps section and enable the “External storage support” app.

Now you need to go to Admin, and “External storages”, and allow users to mount external storage.

Finally, go to your Personal settings, choose “External storages”, add a folder named Syncthing, and tell it connect over SFTP. Give it the hostname of the system that has Syncthing (so, core.example.com), the username for the user that is running Syncthing (USERNAME), and the path to Syncthing’s data files (/home/USERNAME/syncthing). It will need an SSH key pair to authenticate.

When you click Generate keys it will create a key pair. You will need to copy and paste the public key (which appears in the text field) to /home/USERNAME/.ssh/authorized_keys.
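On the server side that amounts to appending the key and tightening permissions; the key string below is a placeholder for the one Nextcloud generated (SSH is picky about permissions, hence the chmods):

```shell
mkdir -p /home/USERNAME/.ssh
echo 'ssh-rsa AAAA… nextcloud-external-storage' >> /home/USERNAME/.ssh/authorized_keys
chmod 700 /home/USERNAME/.ssh
chmod 600 /home/USERNAME/.ssh/authorized_keys
```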

If you try the gear icon to the right, you’ll find an option to enable sharing for the external storage, which is very useful here. Now you can use Nextcloud to view, share, or edit your files from Syncthing.

Backups

I spun my wheels for a while with backups, but eventually I settled on Restic. It is fast, efficient, and encrypted. I’m really impressed with it.

Unfortunately, the snap for Restic doesn’t support strict confinement, which means it won’t work on Ubuntu Core. So I cheated. Let’s set this up under the root user.

You can find releases of Restic as prebuilt binaries. We’ll also need to install a snap that includes curl. (Or you can download the file on another system and transfer it with scp, but this blog post is too long already).

$ snap install demo-curl
$ snap run demo-curl.curl -L "https://github.com/restic/restic/releases/download/v0.8.3/restic_0.8.3_linux_amd64.bz2" | bunzip2 > restic
$ chmod +x restic
$ sudo mkdir /root/bin
$ sudo cp restic /root/bin

We need to figure out the environment variables we want for Restic. That depends on what kind of storage service you’re using. I created a file with those variables at /root/restic-MYACCOUNT.env. For Backblaze B2, mine looked like this:

#!/bin/sh
export RESTIC_REPOSITORY="b2:core-example-com--1"
export B2_ACCOUNT_ID="…"
export B2_ACCOUNT_KEY="…"
export RESTIC_PASSWORD="…"

Next, make a list of the files you’d like to back up in /root/backup-files.txt:

/media/data1/nextcloud/data/USERNAME/files
/media/data1/syncthing/USERNAME
/writable/system-data/

I added a couple of quick little helper scripts to handle the most common things you’ll be doing with Restic:

/root/bin/restic-MYACCOUNT.sh

#!/bin/sh
. /root/restic-MYACCOUNT.env
/root/bin/restic "$@"

Use this as a shortcut to run restic with the correct environment variables.

/root/bin/backups-push.sh

#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} backup --files-from /root/backup-files.txt --exclude ".stversions" --exclude-if-present ".backup-ignore" --exclude-caches

This will ignore any directory that contains a file named “.backup-ignore”. (So to stop a directory from being backed up, you can run touch /path/to/the/directory/.backup-ignore). This is a great way to save time if you have some big directories that don’t really need to be backed up, like a directory full of, um, Linux ISOs *shifty eyes*.

/root/bin/backups-clean.sh

#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
${RESTIC} ${RESTIC_ARGS} check

This will periodically remove old snapshots, prune unused blocks, and then check for errors.

Make sure all of those scripts are executable:

$ sudo chmod +x /root/bin/restic-MYACCOUNT.sh
$ sudo chmod +x /root/bin/backups-push.sh
$ sudo chmod +x /root/bin/backups-clean.sh

We still need to add systemd stuff, but let’s try this thing first!

$ sudo /root/bin/restic-MYACCOUNT.sh init
$ sudo /root/bin/backups-push.sh
$ sudo /root/bin/restic-MYACCOUNT.sh snapshots

Have fun playing with Restic, try restoring some files, note that you can list all the files in a snapshot and restore specific ones. It’s a really nice little backup tool.
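For instance, restoring a single directory from the latest snapshot looks something like this (ls and restore are standard restic subcommands; the paths are this guide’s examples):

```shell
# Browse what the latest snapshot contains, then restore one subtree
sudo /root/bin/restic-MYACCOUNT.sh ls latest
sudo /root/bin/restic-MYACCOUNT.sh restore latest --target /tmp/restore --include /media/data1/syncthing/USERNAME
```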

It’s pretty easy to get systemd helping here as well. First let’s add our service file. This is a different kind of system service because it isn’t a daemon. Instead, it is a oneshot service. We’ll save it as /writable/system-data/etc/systemd/system/backups-task.service.

[Unit]
Description=Regular system backups with Restic

[Service]
Type=oneshot
ExecStart=/bin/sh /root/bin/backups-push.sh
ExecStart=/bin/sh /root/bin/backups-clean.sh

Now we need to schedule it to run on a regular basis. Let’s create a systemd timer unit for that: /writable/system-data/etc/systemd/system/backups-task.timer.

[Unit]
Description=Run backups-task daily

[Timer]
OnCalendar=09:00 UTC
Persistent=true

[Install]
WantedBy=timers.target

One gotcha to notice here: with newer versions of systemd, you can use time zones like PDT or America/Vancouver in the OnCalendar entry, and you can test how that will work using systemd-analyze calendar "09:00 America/Vancouver". Alas, that is not the case in Ubuntu Core 16, so you’ll probably have the best luck using UTC and calculating time zones yourself.
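GNU date can do the conversion for you; this is a generic coreutils feature, nothing specific to Ubuntu Core:

```shell
# What is 09:00 in Vancouver, expressed in UTC?
date -u -d 'TZ="America/Vancouver" 09:00' '+%H:%M UTC'
```

The answer shifts with daylight saving, which is exactly why a fixed UTC OnCalendar= drifts by an hour twice a year.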

Now that you have your timer and your service, you can test the service by starting it:

$ sudo systemctl start backups-task.service
$ sudo systemctl status backups-task.service

If all goes well, enable the timer:

$ sudo systemctl start backups-task.timer
$ sudo systemctl enable backups-task.timer

To see your timer, you can use systemctl list-timers:

$ sudo systemctl list-timers
…
Sat 2018-04-28 09:00:00 UTC  3h 30min left  Fri 2018-04-27 09:00:36 UTC  20h ago  backups-task.timer  backups-task.service
…

Some notes on security

Some people (understandably) dislike running this kind of web service on port 80. Nextcloud’s Apache instance runs on port 80 and port 443 by default, but you can change that using snap set nextcloud ports.http=80 ports.https=443.  However, you may need to generate a self-signed SSL certificate in that case.
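As a sketch (8080/8443 are arbitrary alternative ports; self-signed is the other mode the snap’s enable-https script supports, at the cost of browser warnings):

```shell
sudo snap set nextcloud ports.http=8080 ports.https=8443
sudo snap run nextcloud.enable-https self-signed
```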

Nextcloud (like any daemon installed by Snappy) runs as root, but, as a snap, it is confined to a subset of the system. There is some official documentation about security and sandboxing in Ubuntu Core if you are interested. You can always run sudo snap run --shell nextcloud.occ to get an idea of what it has access to.

If you feel paranoid about how we gave Nextcloud access to all removable media, you can create a bind mount from /writable/system-data/var/snap/nextcloud/common/nextcloud to /media/data1/nextcloud, like we did for Syncthing, and snap disconnect nextcloud:removable-media. Now it only has access to those files on the other end of the bind mount.
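Following the pattern of the earlier mount units, such a bind mount might look like this; note that systemd insists the unit file be named after the escaped mount path (systemd-escape -p --suffix=mount /writable/system-data/var/snap/nextcloud/common/nextcloud prints the required name):

```
[Unit]
Description=Bind Nextcloud's snap data onto the data drive

[Mount]
What=/media/data1/nextcloud
Where=/writable/system-data/var/snap/nextcloud/common/nextcloud
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```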

Conclusion

So that’s everything!

This definitely isn’t a tiny amount of setup. It took an afternoon. (And it’ll probably take two or three years to pay for itself). But I’m impressed by how smoothly it all went, and with a few exceptions where I was nudged into loopy workarounds, it feels simple and reproducible. If you’re looking at hosting more of your own files, I would happily recommend something like this.

Costales: Podcast Ubuntu y otras hierbas S02E06: UbuCon Europe 2018

Planet Ubuntu - Sht, 05/05/2018 - 3:12md
On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales, together with guests Sergi Quiles, Paco Estrada (Compilando Podcast) and Alejandro López (Slimbook), analyze the third UbuCon Europe, held this week in Xixón.

Episode 6 of the second season
The podcast is available to listen to at:

Benjamin Mako Hill: Climbing Mount Rainier

Planet Ubuntu - Pre, 04/05/2018 - 12:38pd

Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters (14,410 ft) tall and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

Drumheller Fountain and Mt. Rainier on the University of Washington Campus (Photo by Frank Fujimoto)

Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.” It is common to hear Seattleites ask “is the mountain out?”

Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

Given this connection, I’ve always been interested in climbing Mt. Rainier. Doing so typically takes at least a couple of days and is difficult: about half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

On Tuesday, Mika and I made our first climbing attempt and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 kph (55 mph) winds, I couldn’t get a picture at the top. But I feel like I’ve built a deeper connection with an old friend.

Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

Mark Shuttleworth: Scam alert

Planet Ubuntu - Enj, 03/05/2018 - 3:00md

Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

David Tomaschik: How the Twitter and GitHub Password Logging Issues Could Happen

Planet Ubuntu - Enj, 03/05/2018 - 9:00pd

There have recently been a couple of highly-publicized (at least in the security community) issues with two tech giants logging passwords in plaintext. First, GitHub found they were logging plaintext passwords on password reset. Then, Twitter found they were logging all plaintext passwords. Let me begin by saying that I have no insider knowledge of either bug, and I have never worked at either Twitter or GitHub, but I enjoy randomly speculating on the internet, so I thought I would speculate on this. (Especially since the /r/netsec thread on the Twitter article is amazingly full of misconceptions.)

A Password Primer

A few commenters on /r/netsec seem amazed that Twitter ever sees the plaintext password. They seem to believe that the hashing (or “encryption” for some users) occurs on the client. Nope. In very few places have I ever seen any kind of client-side hashing (password managers being a notable exception).

In the case of both GitHub and Twitter, you can look at the HTTP requests (using the Chrome inspector, Burp Suite, mitmproxy, or any number of tools) and see your plaintext password being sent to the server. Now, that’s not to say it’s on the wire in plaintext, only in the HTTP requests. Both sites use proper TLS implementations to tunnel the login, so a passive observer on the wire just sees encrypted traffic. However, inside that encrypted traffic, your password sits in plaintext.

Once the plaintext password arrives at the application server, your salted & hashed password is retrieved from the database, the same salt & hash algorithm is applied to the plaintext passwords, and the two results are compared. If they’re the same, you’re in, otherwise you get the nice “Login failed” screen. In order for this to work, the server must use the same input to both of the hash algorithms, and those inputs are the salt (from the database) and the plaintext password. So yes, the server sees your plaintext password.
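The server-side comparison can be sketched in a few lines of shell; this only illustrates the salt-and-hash flow (real services use a deliberately slow password hash such as bcrypt, scrypt, or Argon2, not a bare SHA-256):

```shell
salt='examplesalt'                      # per-user salt fetched from the database
hash_pw() { printf '%s%s' "$salt" "$1" | sha256sum | cut -d' ' -f1; }

stored=$(hash_pw 'hunter2')       # computed once at signup and stored
submitted=$(hash_pw 'hunter2')    # recomputed from the plaintext sent at login
[ "$stored" = "$submitted" ] && echo "login ok"
```

Note that the server necessarily holds the plaintext in memory at this point; the only question is whether anything logs it along the way.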

Yes, it’s possible to do client-side hashing, but it’s complicated, and requires sending the salt from the server to the client (or using a deterministic salt), and possibly slow on mobile devices, and there’s lots of reasons companies don’t want to do it. Approximately the only security improvement is avoiding logging plaintext passwords (which is, unfortunately, exactly what happened here).

Large Scale Software

So another trope is “this should have been caught in code review.” Yeah, it turns out code review is not perfect, and nobody has a full overview of every line of code in the application. This isn’t the space program or aircraft control systems, where the code is frozen and reviewed. In most tech companies (as far as I can tell), releases are cut all the time with a handful of changes that were reviewed in isolation and occasionally have strange interactions. It does not surprise me at all for something like this to happen.

How it Might Have Happened

I’d like to reiterate: this is purely speculation. I don’t know any details at either company, and I suspect Twitter found their error because someone saw the GitHub news and said “we should double check our logs.”

Some people seem to think the login looked something like this:

def login(username, password):
    log(username + " has password " + password)
    stored = get_stored_password(username)
    return hash(password) == stored

This seems fairly obvious, and I’d like to think it would be quickly caught by the developer themselves, let alone any kind of code review. However, it’s far more likely that something like this is at play:

def login(username, password):
    service_request = {
        'service': 'login',
        'environment': get_environment(),
        'username': username,
        'password': password,
    }
    result = make_service_request(service_request)
    return result.ok()


def make_service_request(request_definition):
    if request_definition['environment'] != 'prod':
        log('making service request: ' + repr(request_definition))
    backend = get_backend(request_definition['service'])
    return backend.issue_request(request_definition)


def get_environment():
    return os.getenv('ENVIRONMENT')

They might even have a test like this:

def test_make_service_request_no_logs_in_prod():
    fake_request = {'environment': 'prod'}
    make_service_request(fake_request)
    assertNotCalled(log)

All of this would look great (well, acceptable, this is a blog post, not a real service) under code review. We log the requests in our test environment for debugging purposes. It’s never obvious that a login request is being logged, and in the prod environment it’s not. But maybe one day our service grows and we start deploying in multiple regions, and so we rename environments. What was prod becomes prod-us and we add prod-eu. All of a sudden, our code that has not been logging passwords starts logging passwords, and it didn’t even take a code push, just an environment variable change!

In reality, their code is probably much more complex and even harder to see the pattern. I have spent multiple days in a team of multiple engineers trying to find one singular bug. We could produce it via black-box testing (i.e., pentest) but could not find it in the source code. It turned out to be a misconfigured dependency injection caused by strange inheritance rules.

Yes, it’s bad that GitHub and Twitter had these bugs. I don’t mean to apologize for them. But they handled them responsibly, and the whole community has had a chance to learn a lesson. If GitHub had not disclosed, I suspect Twitter would not have noticed for much longer. Other organizations are probably also checking.

Every organization will have security issues. It’s how you handle them that counts.

Chris Coulson: Debugging the debugger

Planet Ubuntu - Mër, 02/05/2018 - 11:02md

I use gdb quite often, but until recently I’ve never really needed to understand how it works or debug it before. I thought I’d document a recent issue I decided to take a look at – perhaps someone else will find it interesting or useful.

We run the rust testsuite when building rustc packages in Ubuntu. When preparing updates to rust 1.25 recently for Ubuntu 18.04 LTS, I hit a bunch of test failures on armhf which all looked very similar. Here’s an example test failure:

---- [debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs stdout ----

NOTE: compiletest thinks it is using GDB with native rust support
executing "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/stage2/bin/rustc" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo" "--target=armv7-unknown-linux-gnueabihf" "-C" "prefer-dynamic" "-o" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf" "-Crpath" "-Zmiri" "-Zunstable-options" "-Lnative=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/native/rust-test-helpers" "-g" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf.gdb.aux"
------stdout------------------------------
------stderr------------------------------
------------------------------------------
NOTE: compiletest thinks it is using GDB version 8001000
executing "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script"
------stdout------------------------------
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61.

Program received signal SIGSEGV, Segmentation fault.
0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
------stderr------------------------------
/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file: No symbol 'the_a_ref' in current context
------------------------------------------
error: line not found in debugger output: $1 = borrowed_c_style_enum::ABC::TheA
status: exit code: 0
command: "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script"
stdout:
------------------------------------------
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61.

Program received signal SIGSEGV, Segmentation fault.
0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
------------------------------------------
stderr:
------------------------------------------
/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file: No symbol 'the_a_ref' in current context
------------------------------------------
thread '[debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs' panicked at 'explicit panic', tools/compiletest/src/runtest.rs:2891:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The failing tests are all running some commands in gdb, and the inferior (tracee) is crashing inside the dynamic loader (/lib/ld-linux-armhf.so.3) before running any rust code.

I managed to recreate this test failure on an armhf box, but when I installed the debug symbols for the dynamic loader (contained in the libc6-dbg package) so that I could attempt to debug these crashes, the failing tests all started to pass.

A quick search on the internet shows that I’m not the first person to hit this issue – for example, this bug reported in April 2016. According to the comments, the workaround is the same – installing the debug symbols for the dynamic loader (by installing the libc6-dbg package). This obviously isn’t right and I don’t particularly like walking away from something like this without understanding it, so I decided to spend some time trying to figure out what is going on.

The first thing I did was to load the missing debug symbols manually in gdb after hitting the crash, in the hope of getting a useful backtrace:

$ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
...
(gdb) run
Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf

Program received signal SIGSEGV, Segmentation fault.
0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
(gdb) info sharedlibrary
From        To          Syms Read   Shared Object Library
0xf77c7a40  0xf77dadd0  Yes (*)     /lib/ld-linux-armhf.so.3
0xf771ce90  0xf778e288  No          /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/../../stage2/lib/rustlib/armv7-unknown-linux-gnueabihf/lib/libstd-42d13165275d0302.so
0xf76c91f0  0xf76d394c  No          /lib/arm-linux-gnueabihf/libgcc_s.so.1
0xf75dad80  0xf7687a90  No          /lib/arm-linux-gnueabihf/libc.so.6
0xf75b1a14  0xf75b2410  No          /lib/arm-linux-gnueabihf/libdl.so.2
0xf759c810  0xf759edf0  No          /lib/arm-linux-gnueabihf/librt.so.1
0xf757a210  0xf7585214  No          /lib/arm-linux-gnueabihf/libpthread.so.0
(*): Shared library is missing debugging information.
(gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40
add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at
        .text_addr = 0xf77c7a40
(y or n) y
Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done.
(gdb) bt full
#0  dl_main (phdr=<optimized out>, phnum=<optimized out>, user_entry=<optimized out>, auxv=<optimized out>) at rtld.c:2275
        cnt = 1
        afct = 0x0
        head = <optimized out>
        ph = <optimized out>
        mode = <optimized out>
        main_map = <optimized out>
        file_size = 4294899100
        file = <optimized out>
        has_interp = <optimized out>
        i = <optimized out>
        prelinked = <optimized out>
        rtld_is_main = <optimized out>
        tcbp = <optimized out>
        __PRETTY_FUNCTION__ = <error reading variable __PRETTY_FUNCTION__ (Cannot access memory at address 0x15810)>
        first_preload = <optimized out>
        r = <optimized out>
        rtld_ehdr = <optimized out>
        rtld_phdr = <optimized out>
        cnt = <optimized out>
        need_security_init = <optimized out>
        count_modids = <optimized out>
        preloads = <optimized out>
        npreloads = <optimized out>
        preload_file = <error reading variable preload_file (Cannot access memory at address 0x157fc)>
        rtld_multiple_ref = <optimized out>
        was_tls_init_tp_called = <optimized out>
#1  0xf77d76d0 in _dl_sysdep_start (start_argptr=start_argptr@entry=0xfffef6b1, dl_main=0xf77c872d <dl_main>) at ../elf/dl-sysdep.c:253
        phdr = <optimized out>
        phnum = <optimized out>
        user_entry = 4197241
        av = <optimized out>
#2  0xf77c8260 in _dl_start_final (arg=0xfffef6b1) at rtld.c:414
        start_addr = <optimized out>
        start_addr = <optimized out>
#3  _dl_start (arg=0xfffef6b1) at rtld.c:521
        entry = <optimized out>
#4  0xf77c7b90 in ?? () from /lib/ld-linux-armhf.so.3
        library_path = <error reading variable library_path (Cannot access memory at address 0x28920)>
        version_info = <error reading variable version_info (Cannot access memory at address 0x28918)>
        any_debug = <error reading variable any_debug (Cannot access memory at address 0x28914)>
        _dl_rtld_libname = <error reading variable _dl_rtld_libname (Cannot access memory at address 0x298a8)>
        _dl_rtld_libname2 = <error reading variable _dl_rtld_libname2 (Cannot access memory at address 0x298b4)>
        tls_init_tp_called = <error reading variable tls_init_tp_called (Cannot access memory at address 0x29898)>
        audit_list = <error reading variable audit_list (Cannot access memory at address 0x298a4)>
        preloadlist = <error reading variable preloadlist (Cannot access memory at address 0x2891c)>
        _dl_skip_args = <error reading variable _dl_skip_args (Cannot access memory at address 0x2994c)>
        audit_list_string = <error reading variable audit_list_string (Cannot access memory at address 0x29968)>
        __stack_chk_guard = <error reading variable __stack_chk_guard (Cannot access memory at address 0x28968)>
        _rtld_global = <error reading variable _rtld_global (Cannot access memory at address 0x29060)>
        _rtld_global_ro = <error reading variable _rtld_global_ro (Cannot access memory at address 0x28970)>
        _dl_argc = <error reading variable _dl_argc (Cannot access memory at address 0x28910)>
        __GI__dl_argv = <error reading variable __GI__dl_argv (Cannot access memory at address 0x29894)>
        __pointer_chk_guard_local = <error reading variable __pointer_chk_guard_local (Cannot access memory at address 0x28964)>

You can grab the glibc source and see that the dynamic loader ends up here in elf/rtld.c:

  if (__glibc_unlikely (GLRO(dl_naudit) > 0))
    {
      struct link_map *head = GL(dl_ns)[LM_ID_BASE]._ns_loaded;
      /* Do not call the functions for any auditing object.  */
      if (head->l_auditing == 0)
        {
          struct audit_ifaces *afct = GLRO(dl_audit);
          for (unsigned int cnt = 0; cnt < GLRO(dl_naudit); ++cnt)
            {
              if (afct->activity != NULL)  // ##CRASHES HERE##
                afct->activity (&head->l_audit[cnt].cookie, LA_ACT_CONSISTENT);

              afct = afct->next;
            }
        }
    }

The reason for the crash is that afct is NULL:

(gdb) p $_siginfo
$1 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad = {0, 56, 19628232, 19628288, -156661788, 0, 80, 19811416, -156663808, -157316581, 104, 1073741824, 19811416, 96, 19811408, 80, -156661788, 14, 19551104, 96, 104, 13358248, 19551584, 19552160, 19552232, 19811488, 32, 128, 64}, _kill = {si_pid = 0, si_uid = 56}, _timer = {si_tid = 0, si_overrun = 56, si_sigval = {sival_int = 19628232, sival_ptr = 0x12b80c8}}, _rt = {si_pid = 0, si_uid = 56, si_sigval = {sival_int = 19628232, sival_ptr = 0x12b80c8}}, _sigchld = {si_pid = 0, si_uid = 56, si_status = 19628232, si_utime = 19628288, si_stime = -156661788}, _sigfault = {si_addr = 0x0}, _sigpoll = {si_band = 0, si_fd = 56}}}
(gdb) p afct
$2 = (struct audit_ifaces *) 0x0

A quick look through the dynamic loader code shows that this condition should be impossible to hit.

As the crash doesn’t happen with debug symbols, I thought I would attempt to debug it without the symbols. First of all, I set a breakpoint at the start of dl_main by specifying it at offset 0xcec in the .text section:

$ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
...
(gdb) starti
Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf

Program stopped.
0xf77c7b80 in ?? () from /lib/ld-linux-armhf.so.3
(gdb) info sharedlibrary
From        To          Syms Read   Shared Object Library
0xf77c7a40  0xf77dadd0  Yes (*)     /lib/ld-linux-armhf.so.3
(*): Shared library is missing debugging information.
(gdb) break *0xf77c872c
Breakpoint 1 at 0xf77c872c
(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0xf77da458 in ?? () from /lib/ld-linux-armhf.so.3

Huh? It’s now crashed at a different place, without hitting our breakpoint at the start of dl_main. Loading the debug symbols again shows us where:

(gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40
add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at
        .text_addr = 0xf77c7a40
(y or n) y
Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done.
(gdb) bt
#0  ?? () at ../sysdeps/arm/armv7/multiarch/memcpy_impl.S:654 from /lib/ld-linux-armhf.so.3
#1  0xf77c871e in handle_ld_preload (preloadlist=<optimized out>, main_map=0x0) at rtld.c:848
#2  0x00000000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

This doesn’t make much sense, but the fact that setting a breakpoint has altered the program flow is our first clue.

On Linux, gdb interacts with the inferior using the ptrace system call. The next thing I wanted to try was running gdb in strace in order to capture the ptrace syscalls, so that I could compare differences afterwards and see if I could find any more clues.

I created the following simple gdb command file:

file /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
run
quit

I then ran gdb with this file inside strace, with the symbols for the dynamic loader installed. Here’s the log up until the point at which gdb calls PTRACE_CONT:

$ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21136, si_uid=1000, si_status=0, si_utime=0, si_stime=0} --- ... 13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0 13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0 13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACESYSGOOD) = 0 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK) = 0 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0 13:08:35 ptrace(PTRACE_CONT, 21138, NULL, SIG_0) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_GETEVENTMSG, 21138, NULL, [21139]) = 0 13:08:35 ptrace(PTRACE_KILL, 21139) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21139, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_EXITKILL) = 0 13:08:35 ptrace(PTRACE_KILL, 21138) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_KILL, 21138) = 0 13:08:35 --- SIGCHLD 
{si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21138, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21137, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0 13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b3b4, iov_len=72}]) = 0 13:08:35 ptrace(PTRACE_GETVFPREGS, 21137, NULL, 0xffd6b298) = 0 13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b36c, iov_len=72}]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, 
[0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18de01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cde01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639de01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 
21137, 0xf77cb220, 0x4639bf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bde01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0 13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0

First of all, notice that there are several PTRACE_POKEDATA calls. These are used by gdb to write to memory locations in the process that we’re debugging, e.g., to set breakpoints. For more information about how breakpoints work in gdb, this blog post has some good information. Basically, gdb writes an invalid instruction to the breakpoint location, and this causes a SIGTRAP when executed, which is intercepted by gdb. When you continue over the breakpoint, gdb writes the original instruction back, single-steps over it, re-writes the invalid instruction and then continues execution.
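That insert/restore dance is easy to model. Here is a toy sketch (the `ToyInferior` class is hypothetical, a bytearray standing in for the tracee's address space; 0xde01 is the Thumb-mode breakpoint opcode gdb uses on this platform) mirroring what the PTRACE_POKEDATA calls do:

```python
import struct

THUMB_BKPT = 0xDE01  # Thumb-mode breakpoint opcode (arm_linux_thumb_le_breakpoint)

class ToyInferior:
    """A bytearray standing in for tracee memory, poked like PTRACE_POKEDATA."""

    def __init__(self, text: bytes):
        self.mem = bytearray(text)
        self.saved = {}  # addr -> original halfword, so we can restore it

    def insert_breakpoint(self, addr: int):
        # Save the original instruction, then write the trap opcode over it.
        self.saved[addr] = bytes(self.mem[addr:addr + 2])
        self.mem[addr:addr + 2] = struct.pack("<H", THUMB_BKPT)

    def remove_breakpoint(self, addr: int):
        # Put the original instruction back.
        self.mem[addr:addr + 2] = self.saved.pop(addr)

# Halfwords resembling the code around a probe site (nop; ldr; nop; ...).
inferior = ToyInferior(bytes.fromhex("00bf184c00bf0cf0"))
inferior.insert_breakpoint(0)   # trap opcode now in place
inferior.remove_breakpoint(0)   # original instruction restored
```

Continuing over a breakpoint is the same sequence in reverse: restore, single-step, re-insert.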

This is an obvious way in which gdb can interfere with our process and make it crash, so I focused on these calls. I’ve filtered them out below:

13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639bf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0
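As an aside, pulling these out of a saved strace log is easy to script; a small sketch (the regex assumes the exact line format strace produces here):

```python
import re

# Match the address and value arguments of each PTRACE_POKEDATA call.
POKE_RE = re.compile(
    r"ptrace\(PTRACE_POKEDATA, \d+, (0x[0-9a-f]+), (0x[0-9a-f]+)\) = 0"
)

def poke_writes(log: str):
    """Return (address, value) pairs for every PTRACE_POKEDATA call in the log."""
    return [(int(addr, 16), int(val, 16)) for addr, val in POKE_RE.findall(log)]

sample = """\
13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0
13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0
"""
```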

Notice that the first 7 of these write the same 2-byte sequence – 0xde01. These are breakpoints in code that is running in Thumb mode (see arm_linux_thumb_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). 0xde01 in Thumb mode is an undefined instruction.

(Note that the write to 0xf77d5310 is actually a breakpoint at 0xf77d5312, as 0xde01 appears in the 2 higher order bytes and this is little-endian).
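A quick way to convince yourself of that byte layout (pure arithmetic, nothing gdb-specific):

```python
import struct

# The word gdb poked at 0xf77d5310, viewed as the two little-endian
# Thumb halfwords it occupies in memory.
low, high = struct.unpack("<2H", struct.pack("<I", 0xDE01E71C))
# `low` sits at 0xf77d5310 and `high` at 0xf77d5312, so the 0xde01
# trap opcode lands on the second halfword.
```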

We aren’t inserting any breakpoints ourselves – these breakpoints are set automatically by gdb to monitor various events in the dynamic loader during startup. This is something I wasn’t aware of before debugging this.

It may be useful to know how gdb determines the addresses on which to set breakpoints at startup. The dynamic loader exports various events as SystemTap probes, and data about these is stored in the .note.stapsdt ELF section. We can inspect this using readelf:

$ readelf -n /lib/ld-linux-armhf.so.3

Displaying notes found in: .note.gnu.build-id
  Owner                 Data size    Description
  GNU                  0x00000014    NT_GNU_BUILD_ID (unique build ID bitstring)
    Build ID: 3f3b9b4bfea2654f2cedf6db2d120b4e3a39ea7e

Displaying notes found in: .note.stapsdt
  Owner                 Data size    Description
  stapsdt              0x00000032    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: init_start
    Location: 0x00002a44, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@.L1204 4@[r7, #52]
  stapsdt              0x0000002e    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: init_complete
    Location: 0x00002ef8, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@.L1207 4@r4
  stapsdt              0x0000002e    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_failed
    Location: 0x00004220, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[sp, #20] 4@r5
  stapsdt              0x00000035    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_start
    Location: 0x000045b8, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #252] 4@[r7, #72]
  stapsdt              0x0000003c    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_complete
    Location: 0x0000e020, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
  stapsdt              0x00000036    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: reloc_start
    Location: 0x0000e09e, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36]
  stapsdt              0x0000003e    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: reloc_complete
    Location: 0x0000e312, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
  stapsdt              0x00000037    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: unmap_start
    Location: 0x0000ebb0, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #104] 4@[r7, #80]
  stapsdt              0x0000003a    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: unmap_complete
    Location: 0x0000ed90, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #104] 4@[r7, #80]
  stapsdt              0x00000029    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: setjmp
    Location: 0x0001201c, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r14
  stapsdt              0x00000029    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: longjmp
    Location: 0x00012088, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r4
  stapsdt              0x00000031    NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: longjmp_target
    Location: 0x000120ba, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r14

GDB uses this information to map events to breakpoint addresses. You can read a bit more about gdb’s linker interface here, and more about userspace SystemTap probes here.

With a base address of 0xf77c7000, we can look at the PTRACE_POKEDATA calls and see that the addresses map to these probes:

  • 0xf77c9a44 => init_start
  • 0xf77c9ef8 => init_complete
  • 0xf77cb5b8 => map_start
  • 0xf77cb220 => map_failed
  • 0xf77d5312 => reloc_complete
  • 0xf77d5bb0 => unmap_start
  • 0xf77d5d90 => unmap_complete

This is consistent with the probe_info array in gdb/solib-svr4.c in the gdb source code.
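The arithmetic behind that mapping is just the load base plus each probe's Location field; a quick sketch using values copied from the readelf output above:

```python
BASE = 0xF77C7000  # where ld-linux-armhf.so.3 is mapped in this run

# Location fields from the .note.stapsdt output above
PROBES = {
    "init_start":     0x2A44,
    "init_complete":  0x2EF8,
    "map_failed":     0x4220,
    "map_start":      0x45B8,
    "reloc_complete": 0xE312,
    "unmap_start":    0xEBB0,
    "unmap_complete": 0xED90,
}

# Runtime breakpoint address = load base + probe location
addrs = {name: BASE + loc for name, loc in PROBES.items()}
```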

I then ran gdb inside strace again, this time without the symbols for the dynamic loader installed. Here’s the log up until the point at which the inferior process crashes:

$ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21098, si_uid=1000, si_status=0, si_utime=0, si_stime=0} --- ... 13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0 13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0 13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACESYSGOOD) = 0 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK) = 0 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0 13:01:50 ptrace(PTRACE_CONT, 21100, NULL, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_GETEVENTMSG, 21100, NULL, [21101]) = 0 13:01:50 ptrace(PTRACE_KILL, 21101) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21101, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_EXITKILL) = 0 13:01:50 ptrace(PTRACE_KILL, 21100) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_KILL, 21100) = 0 13:01:50 --- SIGCHLD 
{si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21100, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21099, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0 13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e64, iov_len=72}]) = 0 13:01:50 ptrace(PTRACE_GETVFPREGS, 21099, NULL, 0xffb84d48) = 0 13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e1c, iov_len=72}]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, 
[0x4620e776]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0 13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGSEGV, si_utime=0, si_stime=0} ---

Focusing again on the PTRACE_POKEDATA calls:

13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0

We see writes to the same 7 addresses where breakpoints were set during the first run, but now there’s a different byte sequence and an extra write. This time, gdb is writing a 4-byte sequence – 0xe7f001f0 to 6 addresses. These are breakpoints for code running in ARM mode (see eabi_linux_arm_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). The 2 writes to 0xf77d5310 and 0xf77d5314 are a single breakpoint at 0xf77d5312 (there are 2 writes because it is not on a 4-byte boundary).

Checking the ARMv7 reference manual shows that 0xe7f001f0 is an undefined instruction in ARM mode. However, this byte sequence is decoded as the following valid instructions in Thumb mode:

lsl r0, r6, #7
b   #-16

So, it takes the contents of r6, does a logical shift left by 7, writes it to r0 and then does an unconditional branch backwards by 16 bytes. This is quite likely going to cause our program (in this case, the dynamic loader) to go off the rails and crash with a less than useful stacktrace, which is the behaviour we’re seeing.
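The mis-decode is easy to reproduce from the byte order alone (the mnemonics come from the Thumb encoding tables; the halfword values are what matter here):

```python
import struct

# The ARM-mode breakpoint word (eabi_linux_arm_le_breakpoint), one 4-byte
# undefined instruction when fetched in ARM mode.
ARM_BKPT_WORD = 0xE7F001F0

# The same four bytes fetched as two little-endian Thumb halfwords:
lo, hi = struct.unpack("<2H", struct.pack("<I", ARM_BKPT_WORD))
# lo == 0x01f0: lsl r0, r6, #7 in Thumb
# hi == 0xe7f0: b #-16, an unconditional backwards branch, in Thumb
```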

Why is this happening?

The next step was to figure out why gdb is inserting the ARM breakpoint instruction sequence instead of the Thumb one. To do this, I needed to understand where the breakpoints are written, and grepping the source code suggests the PTRACE_POKEDATA calls happen in inf_ptrace_peek_poke in gdb/inf-ptrace.c (actually, you won’t find PTRACE_POKEDATA here – it’s PT_WRITE_D which is defined in /usr/include/sys/ptrace.h).

Running gdb inside gdb with the dynamic loader debug symbols installed and setting a breakpoint on inf_ptrace_peek_poke shows me the call stack. Note that I set a breakpoint by line number, as inf_ptrace_peek_poke is inlined and it was the only way I could get the conditional breakpoint to work:

$ gdb --args gdb --command=~/test.script ... (gdb) break ./gdb/inf-ptrace.c:578 if writebuf != 0x0 Breakpoint 1 at 0x51218: file ./gdb/inf-ptrace.c, line 578. (gdb) run Starting program: /usr/bin/gdb --command=\~/test.script Cannot parse expression `.L1207 4@r4'. warning: Probes-based dynamic linker interface failed. Reverting to original interface. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1". GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "arm-linux-gnueabihf". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". Breakpoint 1, inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578 578 ./gdb/inf-ptrace.c: No such file or directory. 
(gdb) p/x offset
$1 = 0xf77c9a44
(gdb) bt
#0  inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578
#1  0x00457512 in linux_xfer_partial (ops=0x8306e0, object=<optimized out>, annex=0x0, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/linux-nat.c:4280
#2  0x004576da in linux_nat_xfer_partial (ops=0x8306e0, object=TARGET_OBJECT_MEMORY, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/linux-nat.c:3908
#3  0x005f64f4 in raw_memory_xfer_partial (ops=ops@entry=0x8306e0, readbuf=readbuf@entry=0x0, writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", memaddr=4152138308, len=len@entry=2, xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1064
#4  0x005f6a98 in target_xfer_partial (ops=ops@entry=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, readbuf=readbuf@entry=0x0, writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=<optimized out>, xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1298
#5  0x005f7030 in target_write_partial (xfered_len=0xfffeee18, len=2, offset=<optimized out>, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, object=TARGET_OBJECT_RAW_MEMORY, ops=0x8306e0) at ./gdb/target.c:1554
#6  target_write_with_progress (ops=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, buf=buf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=len@entry=2, progress=progress@entry=0x0, baton=baton@entry=0x0) at ./gdb/target.c:1821
#7  0x005f70d2 in target_write (len=2, offset=2, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, object=TARGET_OBJECT_RAW_MEMORY, ops=<optimized out>) at ./gdb/target.c:1847
#8  target_write_raw_memory (memaddr=memaddr@entry=4152138308, myaddr=myaddr@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", len=len@entry=2) at ./gdb/target.c:1473
#9  0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66
#10 0x004de3aa in bkpt_insert_location (bl=0x923698) at ./gdb/breakpoint.c:12525
#11 0x004e8426 in insert_bp_location (bl=bl@entry=0x923698, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec, hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553
#12 0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977
#13 update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177
#14 0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215
#15 0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555
#16 create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x928fb0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562
#17 0x004497bc in svr4_create_probe_breakpoints (objfile=0x933888, probes=0xfffef148, gdbarch=0x928fb0) at ./gdb/solib-svr4.c:2089
#18 svr4_create_solib_event_breakpoints (gdbarch=0x928fb0, address=<optimized out>) at ./gdb/solib-svr4.c:2173
#19 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465
#20 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057
#21 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469
#22 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665
#23 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886
#24 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x880f10 "run", from_tty=0) at ./gdb/top.c:630
#25 0x00548760 in command_handler (command=0x880f10 "run") at ./gdb/event-top.c:583
#26 0x00606a66 in read_command_file (stream=stream@entry=0x874ae0) at ./gdb/top.c:424
#27 0x004684e2 in script_from_file (stream=stream@entry=0x874ae0, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592
#28 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x874ae0) at ./gdb/cli/cli-cmds.c:568
#29 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604
#30 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=1) at ./gdb/main.c:379
#31 0x00588ea0 in captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125
#32 captured_main (data=<optimized out>) at ./gdb/main.c:1147
#33 gdb_main (args=<optimized out>) at ./gdb/main.c:1173
#34 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32

Frame 9 (default_memory_insert_breakpoint) looks like the interesting one. Taking a look at what it does:

int
default_memory_insert_breakpoint (struct gdbarch *gdbarch,
				  struct bp_target_info *bp_tgt)
{
  CORE_ADDR addr = bp_tgt->placed_address;
  const unsigned char *bp;
  gdb_byte *readbuf;
  int bplen;
  int val;

  /* Determine appropriate breakpoint contents and size for this
     address.  */
  bp = gdbarch_sw_breakpoint_from_kind (gdbarch, bp_tgt->kind, &bplen);

  /* Save the memory contents in the shadow_contents buffer and then
     write the breakpoint instruction.  */
  readbuf = (gdb_byte *) alloca (bplen);
  val = target_read_memory (addr, readbuf, bplen);
  if (val == 0)
    {
      ...
      bp_tgt->shadow_len = bplen;
      memcpy (bp_tgt->shadow_contents, readbuf, bplen);

      val = target_write_raw_memory (addr, bp, bplen);
    }

  return val;
}

The call to gdbarch_sw_breakpoint_from_kind appears to return the bytes to be written for our breakpoint. gdbarch_sw_breakpoint_from_kind delegates to arm_sw_breakpoint_from_kind in gdb/arm-tdep.c. (The gdbarch_ functions provide a way for architecture-independent code in gdb to call functions specific to the architecture associated with the target.) Taking a look at what this does:

static const gdb_byte *
arm_sw_breakpoint_from_kind (struct gdbarch *gdbarch, int kind, int *size)
{
  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);

  switch (kind)
    {
    case ARM_BP_KIND_ARM:
      *size = tdep->arm_breakpoint_size;
      return tdep->arm_breakpoint;
    case ARM_BP_KIND_THUMB:
      *size = tdep->thumb_breakpoint_size;
      return tdep->thumb_breakpoint;
    case ARM_BP_KIND_THUMB2:
      *size = tdep->thumb2_breakpoint_size;
      return tdep->thumb2_breakpoint;
    default:
      gdb_assert_not_reached ("unexpected arm breakpoint kind");
    }
}

So, arm_sw_breakpoint_from_kind returns an ARM, Thumb or Thumb2 breakpoint instruction sequence depending on the value of kind. If we switch to frame 9, we should be able to inspect the value of kind:

(gdb) f 9
#9  0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66
66	./gdb/mem-break.c: No such file or directory.
(gdb) p bp_tgt->kind
$2 = 2

2 is ARM_BP_KIND_THUMB, so this appears to check out. Moving further up the stack, we find that kind is determined in frame 10 (bkpt_insert_location in gdb/breakpoint.c). Let’s have a look at what that does:

static int
bkpt_insert_location (struct bp_location *bl)
{
  CORE_ADDR addr = bl->target_info.reqstd_address;

  bl->target_info.kind = breakpoint_kind (bl, &addr);
  bl->target_info.placed_address = addr;

  if (bl->loc_type == bp_loc_hardware_breakpoint)
    return target_insert_hw_breakpoint (bl->gdbarch, &bl->target_info);
  else
    return target_insert_breakpoint (bl->gdbarch, &bl->target_info);
}

This calls breakpoint_kind, which delegates to arm_breakpoint_kind_from_pc in gdb/arm-tdep.c via gdbarch_breakpoint_kind_from_pc. arm_breakpoint_kind_from_pc maps the breakpoint address to an instruction set and returns one of three values – ARM_BP_KIND_ARM, ARM_BP_KIND_THUMB or ARM_BP_KIND_THUMB2. From looking at arm_breakpoint_kind_from_pc, we can see the most interesting part is a call to arm_pc_is_thumb. Let’s have a look at how this works:

int
arm_pc_is_thumb (struct gdbarch *gdbarch, CORE_ADDR memaddr)
{
  struct bound_minimal_symbol sym;
  char type;
  ...
  /* If bit 0 of the address is set, assume this is a Thumb address.  */
  if (IS_THUMB_ADDR (memaddr))
    return 1;

So, first of all it checks whether bit 0 of the breakpoint address is set. Looking at the SystemTap probes in .note.stapsdt from our earlier readelf output, we can see that this is not the case for any probe. Following on:

  /* If the user wants to override the symbol table, let him.  */
  if (strcmp (arm_force_mode_string, "arm") == 0)
    return 0;
  if (strcmp (arm_force_mode_string, "thumb") == 0)
    return 1;

  /* ARM v6-M and v7-M are always in Thumb mode.  */
  if (gdbarch_tdep (gdbarch)->is_m)
    return 1;

We’re not forcing the mode and this isn’t ARM v6-M or v7-M, so, continuing:

  /* If there are mapping symbols, consult them.  */
  type = arm_find_mapping_symbol (memaddr, NULL);
  if (type)
    return type == 't';

arm_find_mapping_symbol tries to find a mapping symbol associated with the breakpoint address. Mapping symbols are a special type of symbol used to identify transitions between ARM and Thumb instruction sets (see this information). Breaking here in gdb shows that there isn’t a mapping symbol associated with the init_start probe:

(gdb) break ./gdb/arm-tdep.c:434
Breakpoint 2 at 0x43ec3c: file ./gdb/arm-tdep.c, line 434.
(gdb) run
...

Breakpoint 2, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:434
434	./gdb/arm-tdep.c: No such file or directory.
(gdb) p/x memaddr
$2 = 0xf77c9a44
(gdb) p type
$3 = 0 '\000'

So, continuing to the next step:

  /* Thumb functions have a "special" bit set in minimal symbols.  */
  sym = lookup_minimal_symbol_by_pc (memaddr);
  if (sym.minsym)
    return (MSYMBOL_IS_SPECIAL (sym.minsym));

lookup_minimal_symbol_by_pc tries to map the breakpoint address to a function symbol. MSYMBOL_IS_SPECIAL(sym.minsym) expands to sym.minsym->target_flag_1 and is 1 if bit 0 of the symbol’s target address is set, indicating that the function is called in Thumb mode (see arm_elf_make_msymbol_special in gdb/arm-tdep.c for where this is set). Breaking here in gdb shows that this succeeds:

(gdb) break ./gdb/arm-tdep.c:439
Breakpoint 3 at 0x43ec54: file ./gdb/arm-tdep.c, line 439.
(gdb) cont
Continuing.

Breakpoint 3, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:439
439	in ./gdb/arm-tdep.c
(gdb) p sym.minsym
$4 = (minimal_symbol *) 0x964278
(gdb) p *sym.minsym
$5 = {mginfo = {name = 0x952ff8 "dl_main", value = {ivalue = 5932, block = 0x172c, bytes = 0x172c <error: Cannot access memory at address 0x172c>, address = 5932, common_block = 0x172c, chain = 0x172c}, language_specific = {obstack = 0x0, demangled_name = 0x0}, language = language_auto, ada_mangled = 0, section = 10}, size = 10516, filename = 0x949a48 "rtld.c", type = mst_file_text, created_by_gdb = 0, target_flag_1 = 1, target_flag_2 = 0, has_size = 1, hash_next = 0x0, demangled_hash_next = 0x0}
(gdb) p sym.minsym->target_flag_1
$6 = 1

It indicates that the init_start probe is in dl_main, and that it is called in Thumb mode.

We can use readelf to inspect the symbol table and verify that this is correct:

$ readelf -s libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so | grep dl_main
    42: 0000172d 10516 FUNC    LOCAL  DEFAULT   11 dl_main

Note that bit 0 of the target address is set.

If lookup_minimal_symbol_by_pc fails, then we’re basically out of luck and arm_pc_is_thumb will return 0 (indicating that the breakpoint address is in an area that is executing ARM instructions). But lookup_minimal_symbol_by_pc depends on the .symtab ELF section being present, so there is an obvious issue here: .symtab is stripped from the binary in the build (and shipped in a separate debug object).

I then ran gdb in gdb without the dynamic loader symbols installed and set a breakpoint on default_memory_insert_breakpoint:

$ gdb --args gdb --batch --command=~/test.script
(gdb) set debug-file-directory /home/ubuntu/libc6-syms/usr/lib/debug/:/usr/lib/debug/
(gdb) break default_memory_insert_breakpoint(gdbarch*, bp_target_info*)
Breakpoint 1 at 0x190c34: file ./gdb/mem-break.c, line 39.
(gdb) run
Starting program: /usr/bin/gdb --batch --command=\~/test.script
Cannot parse expression `.L1207 4@r4'.
warning: Probes-based dynamic linker interface failed.
Reverting to original interface.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".

Breakpoint 1, default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39
39	./gdb/mem-break.c: No such file or directory.
(gdb) bt
#0  default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39
#1  0x004de3aa in bkpt_insert_location (bl=0x922f90) at ./gdb/breakpoint.c:12525
#2  0x004e8426 in insert_bp_location (bl=bl@entry=0x922f90, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec, hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553
#3  0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977
#4  update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177
#5  0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215
#6  0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555
#7  create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x9288c0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562
#8  0x004497bc in svr4_create_probe_breakpoints (objfile=0x933238, probes=0xfffef148, gdbarch=0x9288c0) at ./gdb/solib-svr4.c:2089
#9  svr4_create_solib_event_breakpoints (gdbarch=0x9288c0, address=<optimized out>) at ./gdb/solib-svr4.c:2173
#10 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465
#11 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057
#12 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469
#13 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665
#14 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886
#15 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x874758 "run", from_tty=0) at ./gdb/top.c:630
#16 0x00548760 in command_handler (command=0x874758 "run") at ./gdb/event-top.c:583
#17 0x00606a66 in read_command_file (stream=stream@entry=0x875da8) at ./gdb/top.c:424
#18 0x004684e2 in script_from_file (stream=stream@entry=0x875da8, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592
#19 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x875da8) at ./gdb/cli/cli-cmds.c:568
#20 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604
#21 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=0) at ./gdb/main.c:379
#22 0x00588ea0 in captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125
#23 captured_main (data=<optimized out>) at ./gdb/main.c:1147
#24 gdb_main (args=<optimized out>) at ./gdb/main.c:1173
#25 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32
(gdb) p/x bp_tgt->placed_address
$1 = 0xf77c9a44
(gdb) p bp_tgt->kind
$2 = 4

Sure enough, default_memory_insert_breakpoint is called this time with kind == 4 (ARM_BP_KIND_ARM) which seems to be incorrect. Setting a breakpoint in arm_pc_is_thumb again, we can verify that the reason for this is that the call to lookup_minimal_symbol_by_pc fails:

(gdb) break ./gdb/arm-tdep.c:439
Breakpoint 1 at 0x3ec54: file ./gdb/arm-tdep.c, line 439.
(gdb) run
Starting program: /usr/bin/gdb --command=\~/test.script
Cannot parse expression `.L1207 4@r4'.
warning: Probes-based dynamic linker interface failed.
Reverting to original interface.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".

Breakpoint 1, arm_pc_is_thumb (gdbarch=0x928fb0, memaddr=4152138308) at ./gdb/arm-tdep.c:439
439	./gdb/arm-tdep.c: No such file or directory.
(gdb) p/x memaddr
$1 = 0xf77c9a44
(gdb) p sym.minsym
$2 = (minimal_symbol *) 0x0

This results in arm_pc_is_thumb returning 0 and arm_breakpoint_kind_from_pc returning ARM_BP_KIND_ARM, which results in arm_sw_breakpoint_from_kind returning the wrong breakpoint instruction sequence.

TL;DR

GDB causes a crash in the dynamic loader (ld.so) on armv7 if ld.so has been stripped of its symbol table, because it is unable to correctly determine the appropriate instruction set when inserting probe event breakpoints.

If you’ve got to the end of this (congratulations), then you’re probably going to be disappointed to hear that I’m not sure what the proper fix is – this isn’t really my area of expertise. For the rustc package, I just added a Build-Depends: libc6-dbg [armhf] as a workaround for now, and that might even be the correct fix. But, it’s certainly nicer to understand why it didn’t work in the first place.

Jonathan Riddell: All New Hitchhiker’s Guide to the Galaxy and Black Sails

Planet Ubuntu - Mër, 02/05/2018 - 12:36md

One of the complaints about the new streaming entertainment world is that it removes the collective experience of everyone watching the same programme on telly the night before and then discussing it. In the international world I tend to live in that was never much of an option anyway; instead, the best series from the world of media is now a common topic of conversation when I meet people around the world. So allow me to recommend a couple which seem to have missed many people’s consciousness.

On the original streaming media site, BBC iPlayer radio, there’s a whole new 6th series of Hitchhiker’s Guide to the Galaxy: 40 years in the making and still full of whimsical, understated comedy about life. And best of all, they’re repeating the 1st and 2nd series, and have just started the 3rd.

Back in telly land, I was reluctant to pay money for the privilege of spending my life watching telly, but a student discount made Amazon Prime a tempting offer for my girlfriend. I discovered Black Sails, which is the best telly I’ve ever seen. A prequel to the Scottish classic Treasure Island featuring Captain Flint (who you’ll remember only appears in the original book as a parrot) and John Silver, it impressively mixes in real-life pirates from the 18th-century Caribbean. The production qualities are superb; filming on water is always expensive (see Waterworld or Titanic or even Lost), and here they had to recreate several near full-size sailing boats. The plotting is ideal, with allegiances changing every episode or two in a mostly plausible way. And it successfully ends before running out of energy. I’m a fan.

Meanwhile on Netflix, I wasn’t especially interested in the new Star Trek, but it turns out to include space tardigrades and therefore became much more exciting.

 

