Planet Ubuntu - http://planet.ubuntu.com/

Raphaël Hertzog: My Free Software Activities in April 2018

Mon, 07/05/2018 - 11:17pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

pkg-security team

I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch from chkrootkit to avoid unexpected noise in quiet mode.

I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

Distro Tracker

While the pace has slowed down, I continued to get merge requests. I merged two of them fixing newcomer bugs.

I reviewed a merge request suggesting the addition of a “team search” feature.

I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

Misc Debian work

Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.

Thanks

See you next month for a new summary of my activities.


Timo Jyrinki: Converting an existing installation to LUKS using luksipc - 2018 notes

Mon, 07/05/2018 - 1:08pm
Time for a laptop upgrade. Encryption was still not the default on the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so over the weekend I followed my own notes from three years ago, together with the official documentation, to convert the unencrypted OEM Ubuntu installation to LUKS. Altogether this took less than an hour.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change, then, was to encrypt the / partition.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize the existing partitions so there is 10 MB of free space at the end of the / partition, and also create a new partition (of e.g. 1 GB) for /boot.
  • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, only apt install git build-essential is needed; the cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen the volume and mount it while still in the live session (see the command sketch after these notes). However, when using ext4, mounting fails due to a size mismatch in the ext4 metadata. This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2, which I created), and moved /boot/* from the former to the latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition.
  • Mounted the following inside /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them plus root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; update-initramfs -k all -c (notably, no other parameters were needed).
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase prompt shows on your next boot but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the update-initramfs command and ran into exactly this.
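For reference, the conversion and key handling described in the notes above might look roughly like this. This is only a sketch: the device name /dev/sda2 is a placeholder for your / partition, and the keyfile name is a placeholder for whichever file luksipc actually created, so check both against your own layout and luksipc's README.

# Convert the existing / partition in place; luksipc leaves a generated keyfile in slot 0.
sudo ./luksipc -d /dev/sda2

# Add your own passphrase (authenticating with the generated keyfile), then remove slot 0.
sudo cryptsetup luksAddKey --key-file /root/initial_keyfile.bin /dev/sda2
sudo cryptsetup luksKillSlot /dev/sda2 0

# Open the volume from the live session, fix the ext4 size mismatch, and mount it.
sudo cryptsetup luksOpen /dev/sda2 root
sudo resize2fs /dev/mapper/root
sudo mount /dev/mapper/root /mnt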

Dylan McCall: My tiny file server with Ubuntu Core, Nextcloud and Syncthing

Sun, 06/05/2018 - 6:38am

My annual Dropbox renewal date was coming up, and I thought to myself “I’m working with servers all the time. I shouldn’t need to pay someone else for this.” I was also knee deep in a math course, so I felt like procrastinating.

I’m really happy with the result, so I thought I would explain it for anyone else who wants to do the same. Here’s what I was aiming for:

  • Safe, convenient archiving for big files.
  • Instant sync between devices for stuff I’m working on.
  • Access over LAN from home, and over the Internet from anywhere else.
  • Regular, encrypted offsite backups.
  • Compact, low power hardware that I can stick in a closet and forget about.
  • Some semblance of security, at least so a compromised service won’t put the rest of the system at risk.

The hardware

I dabbled with a BeagleBoard that I used for an embedded systems course, and I pondered a Raspberry Pi with a case. I decided against both of those, because I wanted something with a bit more wiggle room. And besides, I like having a BeagleBoard free to mess around with now and then.

In the end, I picked out an Intel NUC, and I threw in an old SSD and a stick of RAM:

It’s tiny, it’s quiet, and it looks okay too! (Just find somewhere to hide the power brick.) My only real complaint is that the wifi hardware doesn’t work with older Linux kernels, but that wasn’t a big deal for my needs and I’m sure it will work in the future.

The software

I installed Ubuntu Core 16, which is delightful. Installing it is a bit surprising for the uninitiated because there isn’t really an install process: you just clone the image to the drive you want to boot from and you’re done. It’s easier if you do this while the drive is connected to another computer. (I didn’t feel like switching around SATA cables in my desktop, so I needed to write a different OS to a flash drive, boot from that on the NUC, transfer the Ubuntu Core image to there, then dd that image to the SSD. Kind of weird for this use case).

Now that I’ve figured out how to run it, I’ve been enjoying how this system is designed to minimize the time you need to spend with your device connected to a screen and keyboard like some kind of savage. There’s a simple setup process (configure networking, log in to your Ubuntu One account), and that’s it. You can bury the thing somewhere and SSH to it from now on. In fact, you’re pretty much forced to: you don’t even get a login prompt. Chances are you won’t need to SSH to the system anyway since it keeps itself up to date. As someone who obsesses over loose threads, I’m finding this all very…

Although, with that in mind, one important thing: if you haven’t played with Ubuntu for a while, head over to login.ubuntu.com and make sure your SSH keys are up to date. The first time I set it up, I realized I had a bunch of obsolete SSH keys in my account and I had no way to reach the system from the laptop I was using. Fixing that meant changing Ubuntu Core’s writable files from another operating system. (I would love to know if there is a better way).

The other software

Okay, using Ubuntu Core is probably a bit weird when I want to run all these servers and I’m probably a little picky, but it’s so elegant! And, happily, there are Snap packages for both Nextcloud and Syncthing. I ended up using both.

I really like how files you can edit are tucked away in /writable. For this guide,  I always refer to things by their full paths under /writable. I found thinking like that spared me getting lost in files that I couldn’t change, and it helped to emphasize the nature of this system.

DNS

Before I get to the fun stuff, there were some networking conundrums I needed to solve.

First, public DNS. My router has some buttons if you want to use a dynamic DNS service, but I just rolled my own thing. To start off, I added some additional DNS records pointing at my home IP address. My web host has an API for editing DNS rules, so I set up dynamic DNS updates after everything else was working, but I will get to that further along.
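The updater itself can be a small script, run from cron or a systemd timer, that pushes the current public IP to the provider's API. Here is a minimal sketch assuming a hypothetical provider endpoint and token; the URL, record name and API_TOKEN below are placeholders rather than a real API:

#!/bin/sh
# Hypothetical dynamic DNS updater: look up the current public IP and push it
# to the web host's DNS API. Replace the endpoint and token with your provider's.
API_TOKEN="…"
CURRENT_IP="$(curl -s https://api.ipify.org)"
curl -s -X PUT "https://dns-api.example.net/records/core.example.com" \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -d "type=A&content=${CURRENT_IP}"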

Next, my router didn’t support hairpinning (or NAT Loopback), so requests to core.example.com were still resolving to my public IP address, which means way too many hops for sending data around. My ridiculous solution: I’ll run my own DNS server, darnit.

To get started, check the network configuration in /writable/system-data/etc/netplan/00-snapd-config.yaml. You’ll want to make sure the system requests a static IP address (I used 192.168.1.2) and uses its own nameservers. Mine looks like this:

network:
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
      addresses: [192.168.1.2/24, '2001:1::2/64']
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2

After changing the Netplan configuration, use sudo netplan generate to update the system.

For the actual DNS server, we can install an unofficial snap that provides dnsmasq:

$ snap install dnsmasq-escoand

You’ll want to edit /writable/system-data/etc/hosts so the service’s domains resolve to the device’s local IP address:

127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.2 core.example.com
fe80::96c6:91ff:fe1a:6581 core.example.com

Now it’s safe to go into your router’s configuration, reserve an IP address for this device, and set it as your DNS server.

And that solved it.

To check, run tracepath from another computer on your network and the result should be something simple like this:

$ tracepath core.example.com
 1?: [LOCALHOST]        pmtu 1500
 1:  core.example.com   0.789ms reached
 1:  core.example.com   0.816ms reached
     Resume: pmtu 1500 hops 1 back 1

While you’re looking at the router, you may as well forward some ports, too. By default you need TCP ports 80 and 443 for Nextcloud, and 22000 for Syncthing.

Nextcloud

The Nextcloud snap is fantastic. It already works out of the box: it adds a system service for its copy of Apache on port 80, and it comes with a bunch of scripts for setting up common things like SSL certificates. I wanted to use an external hard drive for its data store, so I needed to configure the mount point for that and grant the necessary permissions for the snap to access removable media.

Let’s set up that mount point first. These are configured with Systemd mount units, so we’ll want to create a file like /writable/system-data/etc/systemd/system/media-data1.mount. You need to tell it how to identify the storage device. (I always give them nice volume labels when I format them so it’s easy to use that). Note that the name of the unit file must correspond to the full name of the mount point:

[Unit]
Description=Mount unit for data1

[Mount]
What=/dev/disk/by-label/data1
Where=/media/data1
Type=ext4

[Install]
WantedBy=multi-user.target

One super cool thing here is you can start and stop the mount unit just like any other system service:

$ sudo systemctl daemon-reload
$ sudo systemctl start media-data1.mount
$ sudo systemctl enable media-data1.mount

Now let’s set up Nextcloud. The code repository for the Nextcloud snap has lots of documentation if you need it.

$ snap install nextcloud
$ snap connect nextcloud:removable-media :removable-media
$ sudo snap run nextcloud.manual-install USERNAME PASSWORD
$ snap stop nextcloud

Before we do anything else we need to tell Nextcloud to store its data in /media/data1/nextcloud/, and allow access through the public domain from earlier. To do that, edit /writable/system-data/var/snap/nextcloud/current/nextcloud/config/config.php:

<?php
$CONFIG = array (
  'apps_paths' => array ( … ),
  …
  'trusted_domains' => array (
    0 => 'localhost',
    1 => 'core.example.com'
  ),
  'datadirectory' => '/media/data1/nextcloud/data',
  …
);

Move the existing data directory to the new location, and restart the service:

$ snap stop nextcloud
$ sudo mkdir /media/data1/nextcloud
$ sudo mv /writable/system-data/var/snap/nextcloud/common/nextcloud/data /media/data1/nextcloud/
$ snap start nextcloud

Now you can enable HTTPS. There is a lets-encrypt option (for letsencrypt.org), which is very convenient:

$ sudo snap run nextcloud.enable-https lets-encrypt -d
$ sudo snap run nextcloud.enable-https lets-encrypt

At this point you should be able to reach Nextcloud from another computer on your network, or remotely, using the same domain.

Syncthing

If you aren’t me, you can probably stop here and use Nextcloud, but I decided Nextcloud wasn’t quite right for all of my files, so I added Syncthing to the mix. It’s like a peer to peer Dropbox, with a somewhat more geeky interface. You can link your devices by globally unique IDs, and they’ll find the best way to connect to each other and automatically sync files between your shared folders. It’s very elegant, but I wasn’t sure about using it without some kind of central repository. This way my systems will sync between each other when they can, but there’s one central device that is always there, ready to send or receive the newest versions of everything.

Syncthing has a snap, but it is a bit different from Nextcloud, so the package needed a few extra steps. Syncthing, like Dropbox, runs one instance for each user, instead of a monolithic service that serves many users. So, it doesn’t install a system service of its own, and we’ll need to figure that out. First, let’s install the package:

$ snap install syncthing
$ snap connect syncthing:home :home
$ snap run syncthing

Once you’re satisfied, you can stop syncthing. That isn’t very useful yet, but we needed to run it once to create a configuration file.

So, first, we need to give syncthing a place to put its data, replacing “USERNAME” with your system username:

$ sudo mkdir /media/data1/syncthing
$ sudo chown USERNAME:USERNAME /media/data1/syncthing

Unfortunately, you’ll find that the syncthing application doesn’t have access to /media/data1, and its snap doesn’t support the removable-media interface, so it’s limited to your home folder. But that’s okay, we can solve this by creating a bind mount. Let’s create a mount unit in /writable/system-data/etc/systemd/system/home-USERNAME-syncthing.mount:

[Unit]
Description=Mount unit for USERNAME-syncthing

[Mount]
What=/media/data1/syncthing/USERNAME
Where=/home/USERNAME/syncthing
Type=none
Options=bind

[Install]
WantedBy=multi-user.target

(If you’re wondering, yes, systemd figures out that it needs to mount media-data1 before it can create this bind mount, so don’t worry about that).

$ sudo systemctl daemon-reload
$ sudo systemctl start home-USERNAME-syncthing.mount
$ sudo systemctl enable home-USERNAME-syncthing.mount

Now update Syncthing’s configuration and tell it to put all of its shared folders in that directory. Open /home/USERNAME/snap/syncthing/common/syncthing/config.xml in your favourite editor, and make sure you have something like this:

<configuration version="27">
  <folder id="default" label="Default Folder" path="/home/USERNAME/syncthing/Sync" type="readwrite" rescanIntervalS="60" fsWatcherEnabled="false" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">
    …
  </folder>
  <device id="…" name="core.example.com" compression="metadata" introducer="false" skipIntroductionRemovals="false" introducedBy="">
    <address>dynamic</address>
    <paused>false</paused>
    <autoAcceptFolders>false</autoAcceptFolders>
  </device>
  <gui enabled="true" tls="false" debugging="false">
    <address>192.168.1.2:8384</address>
    …
  </gui>
  <options>
    <defaultFolderPath>/home/USERNAME/syncthing</defaultFolderPath>
  </options>
</configuration>

With those changes, Syncthing will create new folders inside /home/USERNAME/syncthing, you can move the default “Sync” folder there as well, and its web interface will be accessible over your local network at http://192.168.1.2:8384. (I’m not enabling TLS here, for two reasons: it’s just the local network, and Nextcloud enables HSTS for the core.example.com domain, so things get confusing when you try to access it like that).

You can try snap run syncthing again, just to be sure.

Now we need to add a service file so Syncthing runs automatically. We could create a service that has the User field filled in so it always runs as a certain user, but for this type of service it doesn’t hurt to set it up as a template unit. Happily, Syncthing’s documentation provides a unit file we can borrow, so we don’t need to do much thinking here. You’ll need to create a file called /writable/system-data/etc/systemd/system/syncthing@.service:

[Unit]
Description=Syncthing - Open Source Continuous File Synchronization for %I
Documentation=man:syncthing(1)
After=network.target

[Service]
User=%i
ExecStart=/usr/bin/snap run syncthing -no-browser -logflags=0
Restart=on-failure
SuccessExitStatus=3 4
RestartForceExitStatus=3 4

[Install]
WantedBy=multi-user.target

Note that our ExecStart line is a little different from theirs, since we need it to run syncthing under the snap program.

$ sudo systemctl daemon-reload
$ sudo systemctl start syncthing@USERNAME.service
$ sudo systemctl enable syncthing@USERNAME.service

And there you have it, we have Syncthing! The web interface for the Ubuntu Core system is only accessible over your local network, but assuming you forwarded port 22000 on your router earlier, you should be able to sync with it from anywhere.

If you install the Syncthing desktop client (snap install syncthing in Ubuntu, dnf install syncthing-gtk in Fedora), you’ll be able to connect your other devices to each other. On each device that you connect to this one, make sure you set core.example.com as an Introducer. That way they will discover each other through it, which saves a bit of time.

Once your devices are all connected, it’s a good idea to go to Syncthing’s web interface at http://192.168.1.2:8384 and edit the settings for each device. You can enable “Auto Accept” so whenever a device shares a new folder with core.example.com, it will be accepted automatically.

Nextcloud + Syncthing

There is one last thing I did here. Syncthing and Nextcloud have some overlap, but I found myself using them for pretty different sorts of tasks. I use Nextcloud for media files and archives that I want to store on a single big hard drive, and occasionally stream over the network; and I use Syncthing for files that I want to have locally on every device.

Still, it would be nice if I could have Nextcloud’s web UI and sharing options with Syncthing’s files. In theory we could bind mount Syncthing’s data directory into Nextcloud’s data directory, but the Nextcloud and Syncthing services run as different users. So, that probably won’t go particularly well.

Instead, it works quite well to mount Syncthing’s data directory using SSH.

First, in Nextcloud, go to the Apps section and enable the “External storage support” app.

Now you need to go to Admin, and “External storages”, and allow users to mount external storage.

Finally, go to your Personal settings, choose “External storages”, add a folder named Syncthing, and tell it to connect over SFTP. Give it the hostname of the system that has Syncthing (so, core.example.com), the username of the user that is running Syncthing (USERNAME), and the path to Syncthing’s data files (/home/USERNAME/syncthing). It will need an SSH key pair to authenticate.

When you click Generate keys it will create a key pair. You will need to copy and paste the public key (which appears in the text field) to /home/USERNAME/.ssh/authorized_keys.
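If you are doing that over SSH, appending the key might look like this (the key text itself is whatever Nextcloud generated for you):

$ echo 'ssh-rsa AAAA…' >> /home/USERNAME/.ssh/authorized_keys
$ chmod 600 /home/USERNAME/.ssh/authorized_keys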

If you try the gear icon to the right, you’ll find an option to enable sharing for the external storage, which is very useful here. Now you can use Nextcloud to view, share, or edit your files from Syncthing.

Backups

I spun my wheels for a while with backups, but eventually I settled on Restic. It is fast, efficient, and encrypted. I’m really impressed with it.

Unfortunately, the snap for Restic doesn’t support strict confinement, which means it won’t work on Ubuntu Core.  So I cheated. Let’s set this up under the root user.

You can find releases of Restic as prebuilt binaries. We’ll also need to install a snap that includes curl. (Or you can download the file on another system and transfer it with scp, but this blog post is too long already).

$ snap install demo-curl
$ snap run demo-curl.curl -L "https://github.com/restic/restic/releases/download/v0.8.3/restic_0.8.3_linux_amd64.bz2" | bunzip2 > restic
$ chmod +x restic
$ sudo mkdir /root/bin
$ sudo cp restic /root/bin

We need to figure out the environment variables we want for Restic. That depends on what kind of storage service you’re using. I created a file with those variables at /root/restic-MYACCOUNT.env. For Backblaze B2, mine looked like this:

#!/bin/sh
export RESTIC_REPOSITORY="b2:core-example-com--1"
export B2_ACCOUNT_ID="…"
export B2_ACCOUNT_KEY="…"
export RESTIC_PASSWORD="…"

Next, make a list of the files you’d like to back up in /root/backup-files.txt:

/media/data1/nextcloud/data/USERNAME/files
/media/data1/syncthing/USERNAME
/writable/system-data/

I added a couple of quick little helper scripts to handle the most common things you’ll be doing with Restic:

/root/bin/restic-MYACCOUNT.sh

#!/bin/sh
. /root/restic-MYACCOUNT.env
/root/bin/restic $@

Use this as a shortcut to run restic with the correct environment variables.

/root/bin/backups-push.sh

#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} backup --files-from /root/backup-files.txt --exclude ".stversions" --exclude-if-present ".backup-ignore" --exclude-caches

This will ignore any directory that contains a file named “.backup-ignore”. (So to stop a directory from being backed up, you can run touch /path/to/the/directory/.backup-ignore). This is a great way to save time if you have some big directories that don’t really need to be backed up, like a directory full of, um, Linux ISOs (shifty eyes).

/root/bin/backups-clean.sh

#!/bin/sh
RESTIC="/root/bin/restic-MYACCOUNT.sh"
RESTIC_ARGS="--cache-dir /root/.cache/restic"

${RESTIC} ${RESTIC_ARGS} forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
${RESTIC} ${RESTIC_ARGS} check

This will periodically remove old snapshots, prune unused blocks, and then check for errors.

Make sure all of those scripts are executable:

$ sudo chmod +x /root/bin/restic-MYACCOUNT.sh
$ sudo chmod +x /root/bin/backups-push.sh
$ sudo chmod +x /root/bin/backups-clean.sh

We still need to add systemd stuff, but let’s try this thing first!

$ sudo /root/bin/restic-MYACCOUNT.sh init
$ sudo /root/bin/backups-push.sh
$ sudo /root/bin/restic-MYACCOUNT.sh snapshots

Have fun playing with Restic: try restoring some files, and note that you can list all the files in a snapshot and restore specific ones. It’s a really nice little backup tool.
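For example, a restore run using the wrapper script from above might look like this; the snapshot ID and file path are placeholders:

# List snapshots, then the files inside one of them.
$ sudo /root/bin/restic-MYACCOUNT.sh snapshots
$ sudo /root/bin/restic-MYACCOUNT.sh ls 1a2b3c4d

# Restore a single file from that snapshot into a scratch directory.
$ sudo /root/bin/restic-MYACCOUNT.sh restore 1a2b3c4d --target /tmp/restore-test --include /media/data1/syncthing/USERNAME/some-file.txt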

It’s pretty easy to get systemd helping here as well. First let’s add our service file. This is a different kind of system service because it isn’t a daemon. Instead, it is a oneshot service. We’ll save it as /writable/system-data/etc/systemd/system/backups-task.service.

[Unit]
Description=Regular system backups with Restic

[Service]
Type=oneshot
ExecStart=/bin/sh /root/bin/backups-push.sh
ExecStart=/bin/sh /root/bin/backups-clean.sh

Now we need to schedule it to run on a regular basis. Let’s create a systemd timer unit for that: /writable/system-data/etc/systemd/system/backups-task.timer.

[Unit]
Description=Run backups-task daily

[Timer]
OnCalendar=09:00 UTC
Persistent=true

[Install]
WantedBy=timers.target

One gotcha to notice here: with newer versions of systemd, you can use time zones like PDT or America/Vancouver for the OnCalendar entry, and you can test how that will work using systemd-analyze calendar "09:00 America/Vancouver". Alas, that is not the case in Ubuntu Core 16, so you’ll probably have the best luck using UTC and calculating timezones yourself.

Now that you have your timer and your service, you can test the service by starting it:

$ sudo systemctl start backups-task.service
$ sudo systemctl status backups-task.service

If all goes well, enable the timer:

$ sudo systemctl start backups-task.timer
$ sudo systemctl enable backups-task.timer

To see your timer, you can use systemctl list-timers:

$ sudo systemctl list-timers
…
Sat 2018-04-28 09:00:00 UTC  3h 30min left  Fri 2018-04-27 09:00:36 UTC  20h ago  backups-task.timer  backups-task.service
…

Some notes on security

Some people (understandably) dislike running this kind of web service on port 80. Nextcloud’s Apache instance runs on port 80 and port 443 by default, but you can change that using snap set nextcloud ports.http=80 ports.https=443.  However, you may need to generate a self-signed SSL certificate in that case.
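If I remember correctly, the same enable-https helper used earlier can also produce a self-signed certificate; treat the exact subcommand as an assumption to verify against the snap's documentation rather than a given:

$ sudo snap run nextcloud.enable-https self-signed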

Nextcloud (like any daemon installed by Snappy) runs as root, but, as a snap, it is confined to a subset of the system. There is some official documentation about security and sandboxing in Ubuntu Core if you are interested. You can always run sudo snap run --shell nextcloud.occ to get an idea of what it has access to.

If you feel paranoid about how we gave Nextcloud access to all removable media, you can create a bind mount from /writable/system-data/var/snap/nextcloud/common/nextcloud to /media/data1/nextcloud, like we did for Syncthing, and snap disconnect nextcloud:removable-media. Now it only has access to those files on the other end of the bind mount.
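Following the pattern from the Syncthing section, that might look something like the unit below. This is a sketch of my reading of the suggestion (mounting the external drive's directory onto the snap's common path), so double-check the direction and paths for your own setup, and remember that config.php's datadirectory then needs to point at a path Nextcloud can actually reach. Saved as /writable/system-data/etc/systemd/system/var-snap-nextcloud-common-nextcloud.mount:

[Unit]
Description=Bind mount Nextcloud data from the external drive

[Mount]
What=/media/data1/nextcloud
Where=/var/snap/nextcloud/common/nextcloud
Type=none
Options=bind

[Install]
WantedBy=multi-user.target

Then reload systemd, start and enable the mount, and disconnect the interface:

$ sudo systemctl daemon-reload
$ sudo systemctl start var-snap-nextcloud-common-nextcloud.mount
$ sudo systemctl enable var-snap-nextcloud-common-nextcloud.mount
$ snap disconnect nextcloud:removable-media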

Conclusion

So that’s everything!

This definitely isn’t a tiny amount of setup. It took an afternoon. (And it’ll probably take two or three years to pay for itself). But I’m impressed by how smoothly it all went, and with a few exceptions where I was nudged into loopy workarounds,  it feels simple and reproducible. If you’re looking at hosting more of your own files, I would happily recommend something like this.

Costales: Podcast Ubuntu y otras hierbas S02E06: UbuCon Europe 2018

Sat, 05/05/2018 - 3:12pm
On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales, together with guests Sergi Quiles, Paco Estrada (Compilando Podcast) and Alejandro López (Slimbook), discuss the third UbuCon Europe, held this week in Xixón.

Episode 6 of the second season.
The podcast is available to listen to at:

Benjamin Mako Hill: Climbing Mount Rainier

Fri, 04/05/2018 - 12:38am

Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters (14,410 ft) tall and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

Drumheller Fountain and Mt. Rainier on the University of Washington Campus (Photo by Frank Fujimoto)

Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.” It is common to hear Seattleites ask “is the mountain out?”

Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

Given this connection, I’ve always been interested in climbing Mt. Rainier. Doing so typically takes at least a couple of days and is difficult. About half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

On Tuesday, Mika and I made our first climbing attempt and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 kph (55 mph) winds at the top, I couldn’t get a picture there. But I feel like I’ve built a deeper connection with an old friend.

Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

Mark Shuttleworth: Scam alert

Thu, 03/05/2018 - 3:00pm

I am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

David Tomaschik: How the Twitter and GitHub Password Logging Issues Could Happen

Thu, 03/05/2018 - 9:00am

There have recently been a couple of highly-publicized (at least in the security community) issues with two tech giants logging passwords in plaintext. First, GitHub found they were logging plaintext passwords on password reset. Then, Twitter found they were logging all plaintext passwords. Let me begin by saying that I have no insider knowledge of either bug, and I have never worked at either Twitter or GitHub, but I enjoy randomly speculating on the internet, so I thought I would speculate on this. (Especially since the /r/netsec thread on the Twitter article is amazingly full of misconceptions.)

A Password Primer

A few commenters on /r/netsec seem amazed that Twitter ever sees the plaintext password. They seem to believe that the hashing (or “encryption” for some users) occurs on the client. Nope. In very few places have I ever seen any kind of client-side hashing (password managers being a notable exception).

In the case of both GitHub and Twitter, you can look at the HTTP requests (using the Chrome inspector, Burp Suite, mitmproxy, or any number of tools) and see your plaintext password being sent to the server. Now, that’s not to say it’s on the wire in plaintext, only in the HTTP requests. Both sites use proper TLS implementations to tunnel the login, so a passive observer on the wire just sees encrypted traffic. However, inside that encrypted traffic, your password sits in plaintext.

Once the plaintext password arrives at the application server, your salted & hashed password is retrieved from the database, the same salt & hash algorithm is applied to the plaintext password, and the two results are compared. If they’re the same, you’re in; otherwise you get the nice “Login failed” screen. In order for this to work, the server must feed the same inputs to the hash algorithm both times, and those inputs are the salt (from the database) and the plaintext password. So yes, the server sees your plaintext password.

Yes, it’s possible to do client-side hashing, but it’s complicated, requires sending the salt from the server to the client (or using a deterministic salt), is possibly slow on mobile devices, and there are lots of reasons companies don’t want to do it. Approximately the only security improvement is avoiding logging plaintext passwords (which is, unfortunately, exactly what happened here).

Large Scale Software

So another trope is “this should have been caught in code review.” Yeah, it turns out code review is not perfect, and nobody has a full overview of every line of code in the application. This isn’t the space program or aircraft control systems, where the code is frozen and reviewed. In most tech companies (as far as I can tell), releases are cut all the time with a handful of changes that were reviewed in isolation and occasionally have strange interactions. It does not surprise me at all for something like this to happen.

How it Might Have Happened

I’d like to reiterate: this is purely speculation. I don’t know any details at either company, and I suspect Twitter found their error because someone saw the GitHub news and said “we should double check our logs.”

Some people seem to think the login looked something like this:

def login(username, password):
    log(username + " has password " + password)
    stored = get_stored_password(username)
    return hash(password) == stored

This seems fairly obvious, and I’d like to think it would be quickly caught by the developer themselves, let alone any kind of code review. However, it’s far more likely that something like this is at play:

def login(username, password):
    service_request = {
        'service': 'login',
        'environment': get_environment(),
        'username': username,
        'password': password,
    }
    result = make_service_request(service_request)
    return result.ok()

def make_service_request(request_definition):
    if request_definition['environment'] != 'prod':
        log('making service request: ' + repr(request_definition))
    backend = get_backend(request_definition['service'])
    return backend.issue_request(request_definition)

def get_environment():
    return os.getenv('ENVIRONMENT')

They might even have a test like this:

def test_make_service_request_no_logs_in_prod():
    fake_request = {'environment': 'prod'}
    make_service_request(fake_request)
    assertNotCalled(log)

All of this would look great (well, acceptable, this is a blog post, not a real service) under code review. We log the requests in our test environment for debugging purposes. It’s never obvious that a login request is being logged, and in the environment prod it’s not. But maybe one day our service grows and we start deploying in multiple regions, and so we rename environments. What was prod becomes prod-us and we add prod-eu. All of a sudden, our code that has not been logging passwords starts logging passwords, and it didn’t even take a code push, just an environment variable to change!

In reality, their code is probably much more complex and even harder to see the pattern. I have spent multiple days in a team of multiple engineers trying to find one singular bug. We could produce it via black-box testing (i.e., pentest) but could not find it in the source code. It turned out to be a misconfigured dependency injection caused by strange inheritance rules.

Yes, it’s bad that GitHub and Twitter had these bugs. I don’t mean to apologize for them. But they handled them responsibly, and the whole community has had a chance to learn a lesson. If GitHub had not disclosed, I suspect Twitter would not have noticed for much longer. Other organizations are probably also checking.

Every organization will have security issues. It’s how you handle them that counts.

Chris Coulson: Debugging the debugger

Wed, 02/05/2018 - 11:02pm

I use gdb quite often, but until recently I’ve never really needed to understand how it works or debug it before. I thought I’d document a recent issue I decided to take a look at – perhaps someone else will find it interesting or useful.

We run the rust testsuite when building rustc packages in Ubuntu. When preparing updates to rust 1.25 recently for Ubuntu 18.04 LTS, I hit a bunch of test failures on armhf which all looked very similar. Here’s an example test failure:

---- [debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs stdout ---- NOTE: compiletest thinks it is using GDB with native rust support executing "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/stage2/bin/rustc" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo" "--target=armv7-unknown-linux-gnueabihf" "-C" "prefer-dynamic" "-o" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf" "-Crpath" "-Zmiri" "-Zunstable-options" "-Lnative=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/native/rust-test-helpers" "-g" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf.gdb.aux" ------stdout------------------------------ ------stderr------------------------------ ------------------------------------------ NOTE: compiletest thinks it is using GDB version 8001000 executing "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script" ------stdout------------------------------ GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "arm-linux-gnueabihf". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61. Program received signal SIGSEGV, Segmentation fault. 0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3 ------stderr------------------------------ /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file: No symbol 'the_a_ref' in current context ------------------------------------------ error: line not found in debugger output: $1 = borrowed_c_style_enum::ABC::TheA status: exit code: 0 command: "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script" stdout: ------------------------------------------ GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "arm-linux-gnueabihf". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. 
Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61. Program received signal SIGSEGV, Segmentation fault. 0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3 ------------------------------------------ stderr: ------------------------------------------ /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file: No symbol 'the_a_ref' in current context ------------------------------------------ thread '[debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs' panicked at 'explicit panic', tools/compiletest/src/runtest.rs:2891:9 note: Run with `RUST_BACKTRACE=1` for a backtrace.

The failing tests are all running some commands in gdb, and the inferior (tracee) is crashing inside the dynamic loader (/lib/ld-linux-armhf.so.3) before running any rust code.

I managed to recreate this test failure on an armhf box, but when I installed the debug symbols for the dynamic loader (contained in the libc6-dbg package) so that I could attempt to debug these crashes, the failing tests all started to pass.

A quick search on the internet shows that I’m not the first person to hit this issue – for example, this bug reported in April 2016. According to the comments, the workaround is the same – installing the debug symbols for the dynamic loader (by installing the libc6-dbg package). This obviously isn’t right and I don’t particularly like walking away from something like this without understanding it, so I decided to spend some time trying to figure out what is going on.

The first thing I did was to load the missing debug symbols manually in gdb after hitting the crash, in order to hopefully get a useful backtrace:

$ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf ... (gdb) run Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf Program received signal SIGSEGV, Segmentation fault. 0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3 (gdb) info sharedlibrary From To Syms Read Shared Object Library 0xf77c7a40 0xf77dadd0 Yes (*) /lib/ld-linux-armhf.so.3 0xf771ce90 0xf778e288 No /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/../../stage2/lib/rustlib/armv7-unknown-linux-gnueabihf/lib/libstd-42d13165275d0302.so 0xf76c91f0 0xf76d394c No /lib/arm-linux-gnueabihf/libgcc_s.so.1 0xf75dad80 0xf7687a90 No /lib/arm-linux-gnueabihf/libc.so.6 0xf75b1a14 0xf75b2410 No /lib/arm-linux-gnueabihf/libdl.so.2 0xf759c810 0xf759edf0 No /lib/arm-linux-gnueabihf/librt.so.1 0xf757a210 0xf7585214 No /lib/arm-linux-gnueabihf/libpthread.so.0 (*): Shared library is missing debugging information. (gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40 add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at .text_addr = 0xf77c7a40 (y or n) y Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done. (gdb) bt full #0 dl_main (phdr=<optimized out>, phnum=<optimized out>, user_entry=<optimized out>, auxv=<optimized out>) at rtld.c:2275 cnt = 1 afct = 0x0 head = <optimized out> ph = <optimized out> mode = <optimized out> main_map = <optimized out> file_size = 4294899100 file = <optimized out> has_interp = <optimized out> i = <optimized out> prelinked = <optimized out> rtld_is_main = <optimized out> tcbp = <optimized out> __PRETTY_FUNCTION__ = <error reading variable __PRETTY_FUNCTION__ (Cannot access memory at address 0x15810)> first_preload = <optimized out> r = <optimized out> rtld_ehdr = <optimized out> rtld_phdr = <optimized out> cnt = <optimized out> need_security_init = <optimized out> count_modids = <optimized out> preloads = <optimized out> npreloads = <optimized out> preload_file = <error reading variable preload_file (Cannot access memory at address 0x157fc)> rtld_multiple_ref = <optimized out> was_tls_init_tp_called = <optimized out></details> #1 0xf77d76d0 in _dl_sysdep_start (start_argptr=start_argptr@entry=0xfffef6b1, dl_main=0xf77c872d <dl_main>) at ../elf/dl-sysdep.c:253 phdr = <optimized out> phnum = <optimized out> user_entry = 4197241 av = <optimized out> #2 0xf77c8260 in _dl_start_final (arg=0xfffef6b1) at rtld.c:414 start_addr = <optimized out> start_addr = <optimized out> #3 _dl_start (arg=0xfffef6b1) at rtld.c:521 entry = <optimized out> #4 0xf77c7b90 in ?? 
() from /lib/ld-linux-armhf.so.3 library_path = <error reading variable library_path (Cannot access memory at address 0x28920)> version_info = <error reading variable version_info (Cannot access memory at address 0x28918)> any_debug = <error reading variable any_debug (Cannot access memory at address 0x28914)> _dl_rtld_libname = <error reading variable _dl_rtld_libname (Cannot access memory at address 0x298a8)> _dl_rtld_libname2 = <error reading variable _dl_rtld_libname2 (Cannot access memory at address 0x298b4)> tls_init_tp_called = <error reading variable tls_init_tp_called (Cannot access memory at address 0x29898)> audit_list = <error reading variable audit_list (Cannot access memory at address 0x298a4)> preloadlist = <error reading variable preloadlist (Cannot access memory at address 0x2891c)> _dl_skip_args = <error reading variable _dl_skip_args (Cannot access memory at address 0x2994c)> audit_list_string = <error reading variable audit_list_string (Cannot access memory at address 0x29968)> __stack_chk_guard = <error reading variable __stack_chk_guard (Cannot access memory at address 0x28968)> _rtld_global = <error reading variable _rtld_global (Cannot access memory at address 0x29060)> _rtld_global_ro = <error reading variable _rtld_global_ro (Cannot access memory at address 0x28970)> _dl_argc = <error reading variable _dl_argc (Cannot access memory at address 0x28910)> __GI__dl_argv = <error reading variable __GI__dl_argv (Cannot access memory at address 0x29894)> __pointer_chk_guard_local = <error reading variable __pointer_chk_guard_local (Cannot access memory at address 0x28964)>

You can grab the glibc source and see that the dynamic loader ends up here in elf/rtld.c:

  if (__glibc_unlikely (GLRO(dl_naudit) > 0))
    {
      struct link_map *head = GL(dl_ns)[LM_ID_BASE]._ns_loaded;
      /* Do not call the functions for any auditing object. */
      if (head->l_auditing == 0)
        {
          struct audit_ifaces *afct = GLRO(dl_audit);
          for (unsigned int cnt = 0; cnt < GLRO(dl_naudit); ++cnt)
            {
              if (afct->activity != NULL)   // ##CRASHES HERE##
                afct->activity (&head->l_audit[cnt].cookie, LA_ACT_CONSISTENT);

              afct = afct->next;
            }
        }
    }

The reason for the crash is that afct is NULL:

(gdb) p $_siginfo
$1 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad = {0, 56, 19628232, 19628288, -156661788, 0, 80, 19811416, -156663808, -157316581, 104, 1073741824, 19811416, 96, 19811408, 80, -156661788, 14, 19551104, 96, 104, 13358248, 19551584, 19552160, 19552232, 19811488, 32, 128, 64}, _kill = {si_pid = 0, si_uid = 56}, _timer = {si_tid = 0, si_overrun = 56, si_sigval = {sival_int = 19628232, sival_ptr = 0x12b80c8}}, _rt = {si_pid = 0, si_uid = 56, si_sigval = {sival_int = 19628232, sival_ptr = 0x12b80c8}}, _sigchld = {si_pid = 0, si_uid = 56, si_status = 19628232, si_utime = 19628288, si_stime = -156661788}, _sigfault = {si_addr = 0x0}, _sigpoll = {si_band = 0, si_fd = 56}}}
(gdb) p afct
$2 = (struct audit_ifaces *) 0x0

A quick look through the dynamic loader code shows that this condition should be impossible to hit.

As the crash doesn’t happen with debug symbols, I thought I would attempt to debug it without the symbols. First of all, I set a breakpoint at the start of dl_main by specifying it at offset 0xcec in the .text section:

$ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
...
(gdb) starti
Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf

Program stopped.
0xf77c7b80 in ?? () from /lib/ld-linux-armhf.so.3
(gdb) info sharedlibrary
From        To          Syms Read   Shared Object Library
0xf77c7a40  0xf77dadd0  Yes (*)     /lib/ld-linux-armhf.so.3
(*): Shared library is missing debugging information.
(gdb) break *0xf77c872c
Breakpoint 1 at 0xf77c872c
(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0xf77da458 in ?? () from /lib/ld-linux-armhf.so.3

Huh? It’s now crashed at a different place, without hitting our breakpoint at the start of dl_main. Loading the debug symbols again shows us where:

(gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40
add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at
        .text_addr = 0xf77c7a40
(y or n) y
Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done.
(gdb) bt
#0  ?? () at ../sysdeps/arm/armv7/multiarch/memcpy_impl.S:654 from /lib/ld-linux-armhf.so.3
#1  0xf77c871e in handle_ld_preload (preloadlist=<optimized out>, main_map=0x0) at rtld.c:848
#2  0x00000000 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

This doesn’t make much sense, but the fact that setting a breakpoint has altered the program flow is our first clue.

On Linux, gdb interacts with the inferior using the ptrace system call. The next thing I wanted to try was running gdb in strace in order to capture the ptrace syscalls, so that I could compare differences afterwards and see if I could find any more clues.

I created the following simple gdb command file:

file /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
run
quit

I then ran gdb with this file inside strace, with the symbols for the dynamic loader installed. Here’s the log up until the point at which gdb calls PTRACE_CONT:

$ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21136, si_uid=1000, si_status=0, si_utime=0, si_stime=0} --- ... 13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0 13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0 13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACESYSGOOD) = 0 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK) = 0 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0 13:08:35 ptrace(PTRACE_CONT, 21138, NULL, SIG_0) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_GETEVENTMSG, 21138, NULL, [21139]) = 0 13:08:35 ptrace(PTRACE_KILL, 21139) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21139, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_EXITKILL) = 0 13:08:35 ptrace(PTRACE_KILL, 21138) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_KILL, 21138) = 0 13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21138, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:08:35 ptrace(PTRACE_SETOPTIONS, 21137, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0 13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b3b4, iov_len=72}]) = 0 13:08:35 ptrace(PTRACE_GETVFPREGS, 21137, NULL, 0xffd6b298) = 0 13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b36c, iov_len=72}]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 
0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18de01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cde01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639de01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639bf00) = 0 13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bde01]) = 0 13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0 13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0

First of all, notice that there are several PTRACE_POKEDATA calls. These are used by gdb to write to memory locations in the process that we’re debugging, eg, to set breakpoints. For more information about how breakpoints work in gdb, this blog post has some good information. Basically, gdb writes an invalid instruction to the breakpoint location and this causes a SIGTRAP when executed, which is intercepted by gdb. When you continue over the breakpoint, gdb writes the original instruction back, single-steps over it, re-writes the invalid instruction and then continues execution.

This is an obvious way in which gdb can interfere with our process and make it crash, so I focused on these calls. I’ve filtered them out below:

13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639bf00) = 0
13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0

Notice that the first 7 of these write the same 2-byte sequence – 0xde01. These are breakpoints in code that is running in Thumb mode (see arm_linux_thumb_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). 0xde01 in Thumb mode is an undefined instruction.

(Note that the write to 0xf77d5310 is actually a breakpoint at 0xf77d5312, as 0xde01 appears in the 2 higher order bytes and this is little-endian).

We aren’t inserting any breakpoints ourselves – these breakpoints are set automatically by gdb to monitor various events in the dynamic loader during startup. This is something I wasn’t aware of before debugging this.

It may be useful to know how gdb determines the addresses on which to set breakpoints at startup. The dynamic loader exports various events as SystemTap probes, and data about these is stored in the .note.stapsdt ELF section. We can inspect this using readelf:

$ readelf -n /lib/ld-linux-armhf.so.3

Displaying notes found in: .note.gnu.build-id
  Owner                 Data size       Description
  GNU                  0x00000014       NT_GNU_BUILD_ID (unique build ID bitstring)
    Build ID: 3f3b9b4bfea2654f2cedf6db2d120b4e3a39ea7e

Displaying notes found in: .note.stapsdt
  Owner                 Data size       Description
  stapsdt              0x00000032       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: init_start
    Location: 0x00002a44, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@.L1204 4@[r7, #52]
  stapsdt              0x0000002e       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: init_complete
    Location: 0x00002ef8, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@.L1207 4@r4
  stapsdt              0x0000002e       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_failed
    Location: 0x00004220, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[sp, #20] 4@r5
  stapsdt              0x00000035       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_start
    Location: 0x000045b8, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #252] 4@[r7, #72]
  stapsdt              0x0000003c       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: map_complete
    Location: 0x0000e020, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
  stapsdt              0x00000036       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: reloc_start
    Location: 0x0000e09e, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36]
  stapsdt              0x0000003e       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: reloc_complete
    Location: 0x0000e312, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
  stapsdt              0x00000037       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: unmap_start
    Location: 0x0000ebb0, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #104] 4@[r7, #80]
  stapsdt              0x0000003a       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: unmap_complete
    Location: 0x0000ed90, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: -4@[r7, #104] 4@[r7, #80]
  stapsdt              0x00000029       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: setjmp
    Location: 0x0001201c, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r14
  stapsdt              0x00000029       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: longjmp
    Location: 0x00012088, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r4
  stapsdt              0x00000031       NT_STAPSDT (SystemTap probe descriptors)
    Provider: rtld
    Name: longjmp_target
    Location: 0x000120ba, Base: 0x00017b9c, Semaphore: 0x00000000
    Arguments: 4@r0 -4@r1 4@r14

GDB uses this information to map events to breakpoint addresses. You can read a bit more about gdb’s linker interface here, and more about userspace SystemTap probes here.

With a base address of 0xf77c7000, we can look at the PTRACE_POKEDATA calls and see that the addresses map to these probes:

  • 0xf77c9a44 => init_start
  • 0xf77c9ef8 => init_complete
  • 0xf77cb5b8 => map_start
  • 0xf77cb220 => map_failed
  • 0xf77d5312 => reloc_complete
  • 0xf77d5bb0 => unmap_start
  • 0xf77d5d90 => unmap_complete

This is consistent with the probe_info array in gdb/solib-svr4.c in the gdb source code.
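
As a quick sanity check of that mapping, each breakpoint address is just the probe’s Location value from the readelf output added to the load address of ld.so. A trivial worked example (the 0xf77c7000 base is the one observed above; the rest is arithmetic):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uintptr_t base = 0xf77c7000;              /* load address of ld-linux-armhf.so.3 */
    uintptr_t init_start = base + 0x2a44;     /* Location of the rtld:init_start probe */
    uintptr_t init_complete = base + 0x2ef8;  /* Location of rtld:init_complete */

    /* Prints 0xf77c9a44 and 0xf77c9ef8, matching the first two
       PTRACE_POKEDATA addresses in the strace log. */
    printf("init_start    -> 0x%lx\n", (unsigned long) init_start);
    printf("init_complete -> 0x%lx\n", (unsigned long) init_complete);
    return 0;
}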

I then ran gdb inside strace again, this time without the symbols for the dynamic loader installed. Here’s the log up until the point at which the inferior process crashes:

$ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21098, si_uid=1000, si_status=0, si_utime=0, si_stime=0} --- ... 13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0 13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0 13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACESYSGOOD) = 0 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK) = 0 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0 13:01:50 ptrace(PTRACE_CONT, 21100, NULL, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_GETEVENTMSG, 21100, NULL, [21101]) = 0 13:01:50 ptrace(PTRACE_KILL, 21101) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21101, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_EXITKILL) = 0 13:01:50 ptrace(PTRACE_KILL, 21100) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_KILL, 21100) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21100, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} --- 13:01:50 ptrace(PTRACE_SETOPTIONS, 21099, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0 13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e64, iov_len=72}]) = 0 13:01:50 ptrace(PTRACE_GETVFPREGS, 21099, NULL, 0xffb84d48) = 0 13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e1c, iov_len=72}]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, 
[0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0 13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0 13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0 13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0 13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGSEGV, si_utime=0, si_stime=0} ---

Focusing again on the PTRACE_POKEDATA calls:

13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0
13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0

We see writes to the same 7 addresses where breakpoints were set during the first run, but now there’s a different byte sequence and an extra write. This time, gdb is writing a 4-byte sequence – 0xe7f001f0 to 6 addresses. These are breakpoints for code running in ARM mode (see eabi_linux_arm_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). The 2 writes to 0xf77d5310 and 0xf77d5314 are a single breakpoint at 0xf77d5312 (there are 2 writes because it is not on a 4-byte boundary).
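
To see why the single breakpoint at 0xf77d5312 turns into those two particular word values, here is a small stand-alone check. It assumes the machine running it is little-endian, like the ARM target being debugged; everything else comes straight from the log above.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* Original words at 0xf77d5310 and 0xf77d5314, taken from the strace log. */
    uint32_t words[2] = { 0xbf00e71c, 0x4620e776 };
    uint32_t arm_bkpt = 0xe7f001f0;   /* ARM-mode breakpoint instruction */
    unsigned char *mem = (unsigned char *) words;

    /* The breakpoint address 0xf77d5312 is 2 bytes into the first word,
       so the 4-byte pattern straddles the two words. */
    memcpy(mem + 2, &arm_bkpt, sizeof arm_bkpt);

    printf("0xf77d5310: 0x%08x\n", (unsigned) words[0]);  /* 0x01f0e71c, as in the log */
    printf("0xf77d5314: 0x%08x\n", (unsigned) words[1]);  /* 0x4620e7f0, as in the log */
    return 0;
}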

Checking the ARMv7 reference manual shows that 0xe7f001f0 is an undefined instruction in ARM mode. However, this byte sequence is decoded as the following valid instructions in Thumb mode:

lsl r0, r6, #7
b   #-16

So, it takes the contents of r6, does a logical shift left by 7, writes it to r0 and then does an unconditional branch backwards by 16 bytes. This is quite likely going to cause our program (in this case, the dynamic loader) to go off the rails and crash with a less than useful stacktrace, which is the behaviour we’re seeing.

Why is this happening?

The next step was to figure out why gdb is inserting the ARM breakpoint instruction sequence instead of the Thumb one. To do this, I needed to understand where the breakpoints are written, and grepping the source code suggests the PTRACE_POKEDATA calls happen in inf_ptrace_peek_poke in gdb/inf-ptrace.c (actually, you won’t find PTRACE_POKEDATA here – it’s PT_WRITE_D which is defined in /usr/include/sys/ptrace.h).

Running gdb inside gdb with the dynamic loader debug symbols installed and setting a breakpoint on inf_ptrace_peek_poke shows me the call stack. Note that I set a breakpoint by line number, as inf_ptrace_peek_poke is inlined and it was the only way I could get the conditional breakpoint to work:

$ gdb --args gdb --command=~/test.script ... (gdb) break ./gdb/inf-ptrace.c:578 if writebuf != 0x0 Breakpoint 1 at 0x51218: file ./gdb/inf-ptrace.c, line 578. (gdb) run Starting program: /usr/bin/gdb --command=\~/test.script Cannot parse expression `.L1207 4@r4'. warning: Probes-based dynamic linker interface failed. Reverting to original interface. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1". GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "arm-linux-gnueabihf". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". Breakpoint 1, inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578 578 ./gdb/inf-ptrace.c: No such file or directory. (gdb) p/x offset $1 = 0xf77c9a44 (gdb) bt #0 inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578 #1 0x00457512 in linux_xfer_partial (ops=0x8306e0, object=<optimized out>, annex=0x0, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/linux-nat.c:4280 #2 0x004576da in linux_nat_xfer_partial (ops=0x8306e0, object=TARGET_OBJECT_MEMORY, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18) at ./gdb/linux-nat.c:3908 #3 0x005f64f4 in raw_memory_xfer_partial (ops=ops@entry=0x8306e0, readbuf=readbuf@entry=0x0, writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", memaddr=4152138308, len=len@entry=2, xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1064 #4 0x005f6a98 in target_xfer_partial (ops=ops@entry=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, readbuf=readbuf@entry=0x0, writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=<optimized out>, xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1298 #5 0x005f7030 in target_write_partial (xfered_len=0xfffeee18, len=2, offset=<optimized out>, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, object=TARGET_OBJECT_RAW_MEMORY, ops=0x8306e0) at ./gdb/target.c:1554 #6 target_write_with_progress (ops=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, buf=buf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=len@entry=2, progress=progress@entry=0x0, baton=baton@entry=0x0) at ./gdb/target.c:1821 #7 0x005f70d2 in target_write (len=2, offset=2, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, 
object=TARGET_OBJECT_RAW_MEMORY, ops=<optimized out>) at ./gdb/target.c:1847 #8 target_write_raw_memory (memaddr=memaddr@entry=4152138308, myaddr=myaddr@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", len=len@entry=2) at ./gdb/target.c:1473 #9 0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66 #10 0x004de3aa in bkpt_insert_location (bl=0x923698) at ./gdb/breakpoint.c:12525 #11 0x004e8426 in insert_bp_location (bl=bl@entry=0x923698, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec, hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553 #12 0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977 #13 update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177 #14 0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215 #15 0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555 #16 create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x928fb0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562 #17 0x004497bc in svr4_create_probe_breakpoints (objfile=0x933888, probes=0xfffef148, gdbarch=0x928fb0) at ./gdb/solib-svr4.c:2089 #18 svr4_create_solib_event_breakpoints (gdbarch=0x928fb0, address=<optimized out>) at ./gdb/solib-svr4.c:2173 #19 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465 #20 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057 #21 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469 #22 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665 #23 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886 #24 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x880f10 "run", from_tty=0) at ./gdb/top.c:630 #25 0x00548760 in command_handler (command=0x880f10 "run") at ./gdb/event-top.c:583 #26 0x00606a66 in read_command_file (stream=stream@entry=0x874ae0) at ./gdb/top.c:424 #27 0x004684e2 in script_from_file (stream=stream@entry=0x874ae0, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592 #28 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x874ae0) at ./gdb/cli/cli-cmds.c:568 #29 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604 #30 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=1) at ./gdb/main.c:379 #31 0x00588ea0 in captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125 #32 captured_main (data=<optimized out>) at ./gdb/main.c:1147 #33 gdb_main (args=<optimized out>) at ./gdb/main.c:1173 #34 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32

Frame 9 (default_memory_insert_breakpoint) looks like it will probably be interesting to us. Taking a look at what it does:

int
default_memory_insert_breakpoint (struct gdbarch *gdbarch,
                                  struct bp_target_info *bp_tgt)
{
  CORE_ADDR addr = bp_tgt->placed_address;
  const unsigned char *bp;
  gdb_byte *readbuf;
  int bplen;
  int val;

  /* Determine appropriate breakpoint contents and size for this address. */
  bp = gdbarch_sw_breakpoint_from_kind (gdbarch, bp_tgt->kind, &bplen);

  /* Save the memory contents in the shadow_contents buffer and then
     write the breakpoint instruction. */
  readbuf = (gdb_byte *) alloca (bplen);
  val = target_read_memory (addr, readbuf, bplen);
  if (val == 0)
    {
      ...
      bp_tgt->shadow_len = bplen;
      memcpy (bp_tgt->shadow_contents, readbuf, bplen);

      val = target_write_raw_memory (addr, bp, bplen);
    }

  return val;
}

The call to gdbarch_sw_breakpoint_from_kind appears to return the bytes written for our breakpoint. gdbarch_sw_breakpoint_from_kind delegates to arm_sw_breakpoint_from_kind in gdb/arm-tdep.c. (The gdbarch_ functions provide a way for architecture-independent code in gdb to call functions specific to the architecture associated with the target.) Taking a look at what this does:

static const gdb_byte *
arm_sw_breakpoint_from_kind (struct gdbarch *gdbarch, int kind, int *size)
{
  struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);

  switch (kind)
    {
    case ARM_BP_KIND_ARM:
      *size = tdep->arm_breakpoint_size;
      return tdep->arm_breakpoint;
    case ARM_BP_KIND_THUMB:
      *size = tdep->thumb_breakpoint_size;
      return tdep->thumb_breakpoint;
    case ARM_BP_KIND_THUMB2:
      *size = tdep->thumb2_breakpoint_size;
      return tdep->thumb2_breakpoint;
    default:
      gdb_assert_not_reached ("unexpected arm breakpoint kind");
    }
}

So, arm_sw_breakpoint_from_kind returns an ARM, Thumb or Thumb2 breakpoint instruction sequence depending on the value of kind. If we switch to frame 9, we should be able to inspect the value of kind:

(gdb) f 9
#9  0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66
66      ./gdb/mem-break.c: No such file or directory.
(gdb) p bp_tgt->kind
$2 = 2

2 is ARM_BP_KIND_THUMB, so this appears to check out. Moving further up the stack, we find that kind is determined in frame 10 (bkpt_insert_location in gdb/breakpoint.c). Let’s have a look at what that does:

static int
bkpt_insert_location (struct bp_location *bl)
{
  CORE_ADDR addr = bl->target_info.reqstd_address;

  bl->target_info.kind = breakpoint_kind (bl, &addr);
  bl->target_info.placed_address = addr;

  if (bl->loc_type == bp_loc_hardware_breakpoint)
    return target_insert_hw_breakpoint (bl->gdbarch, &bl->target_info);
  else
    return target_insert_breakpoint (bl->gdbarch, &bl->target_info);
}

This calls breakpoint_kind, which delegates to arm_breakpoint_kind_from_pc in gdb/arm-tdep.c via gdbarch_breakpoint_kind_from_pc. arm_breakpoint_kind_from_pc maps the breakpoint address to an instruction set and returns one of three values – ARM_BP_KIND_ARM, ARM_BP_KIND_THUMB or ARM_BP_KIND_THUMB2. From looking at arm_breakpoint_kind_from_pc, we can see the most interesting part is a call to arm_pc_is_thumb. Let’s have a look at how this works:

int
arm_pc_is_thumb (struct gdbarch *gdbarch, CORE_ADDR memaddr)
{
  struct bound_minimal_symbol sym;
  char type;
  ...

  /* If bit 0 of the address is set, assume this is a Thumb address. */
  if (IS_THUMB_ADDR (memaddr))
    return 1;

So, first of all it checks whether bit 0 of the breakpoint address is set. Looking at the SystemTap probes in .note.stapsdt from our earlier readelf output, we can see that this is not the case for any probe. Following on:

  /* If the user wants to override the symbol table, let him. */
  if (strcmp (arm_force_mode_string, "arm") == 0)
    return 0;
  if (strcmp (arm_force_mode_string, "thumb") == 0)
    return 1;

  /* ARM v6-M and v7-M are always in Thumb mode. */
  if (gdbarch_tdep (gdbarch)->is_m)
    return 1;

We’re not forcing the mode and this isn’t ARM v6-M or v7-M, so, continuing:

  /* If there are mapping symbols, consult them. */
  type = arm_find_mapping_symbol (memaddr, NULL);
  if (type)
    return type == 't';

arm_find_mapping_symbol tries to find a mapping symbol associated with the breakpoint address. Mapping symbols are a special type of symbol used to identify transitions between ARM and Thumb instruction sets (see this information). Breaking here in gdb shows that there isn’t a mapping symbol associated with the init_start probe:

(gdb) break ./gdb/arm-tdep.c:434
Breakpoint 2 at 0x43ec3c: file ./gdb/arm-tdep.c, line 434.
(gdb) run
...
Breakpoint 2, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:434
434     ./gdb/arm-tdep.c: No such file or directory.
(gdb) p/x memaddr
$2 = 0xf77c9a44
(gdb) p type
$3 = 0 '\000'

So, continuing to the next step:

  /* Thumb functions have a "special" bit set in minimal symbols. */
  sym = lookup_minimal_symbol_by_pc (memaddr);
  if (sym.minsym)
    return (MSYMBOL_IS_SPECIAL (sym.minsym));

lookup_minimal_symbol_by_pc tries to map the breakpoint address to a function symbol. MSYMBOL_IS_SPECIAL(sym.minsym) expands to sym.minsym->target_flag_1 and is 1 if bit 0 of the symbol’s target address is set, indicating that the function is called in Thumb mode (see arm_elf_make_msymbol_special in gdb/arm-tdep.c for where this is set). Breaking here in gdb shows that this succeeds:

(gdb) break ./gdb/arm-tdep.c:439
Breakpoint 3 at 0x43ec54: file ./gdb/arm-tdep.c, line 439.
(gdb) cont
Continuing.

Breakpoint 3, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:439
439     in ./gdb/arm-tdep.c
(gdb) p sym.minsym
$4 = (minimal_symbol *) 0x964278
(gdb) p *sym.minsym
$5 = {mginfo = {name = 0x952ff8 "dl_main", value = {ivalue = 5932, block = 0x172c, bytes = 0x172c <error: Cannot access memory at address 0x172c>, address = 5932, common_block = 0x172c, chain = 0x172c}, language_specific = {obstack = 0x0, demangled_name = 0x0}, language = language_auto, ada_mangled = 0, section = 10}, size = 10516, filename = 0x949a48 "rtld.c", type = mst_file_text, created_by_gdb = 0, target_flag_1 = 1, target_flag_2 = 0, has_size = 1, hash_next = 0x0, demangled_hash_next = 0x0}
(gdb) p sym.minsym->target_flag_1
$6 = 1

It indicates that the init_start probe is in dl_main, and that it is called in Thumb mode.

We can use readelf to inspect the symbol table and verify that this is correct:

$ readelf -s libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so | grep dl_main
    42: 0000172d 10516 FUNC    LOCAL  DEFAULT   11 dl_main

Note that bit 0 of the target address is set.
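
This is just the ARM EABI convention at work: a function symbol whose value has bit 0 set is a Thumb function, and the real entry point is that value with bit 0 cleared. A tiny illustration (restating the rule, not gdb code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t dl_main_value = 0x172d;       /* symbol value from readelf -s above */
    int is_thumb = dl_main_value & 1;      /* bit 0 set => Thumb; what MSYMBOL_IS_SPECIAL reflects */
    uint32_t entry = dl_main_value & ~1u;  /* 0x172c, the actual code address */

    printf("dl_main: thumb=%d, entry=0x%x\n", is_thumb, (unsigned) entry);
    return 0;
}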

If lookup_minimal_symbol_by_pc fails, then we’re basically out of luck and arm_pc_is_thumb will return 0 (indicating that the breakpoint address is in an area that is executing ARM instructions). But this depends on the .symtab ELF section being present, so there is an obvious issue here as this is stripped from the binary in the build (and shipped in a separate debug object).

I then ran gdb in gdb without the dynamic loader symbols installed and set a breakpoint on default_memory_insert_breakpoint:

$ gdb --args gdb --batch --command=~/test.script (gdb) set debug-file-directory /home/ubuntu/libc6-syms/usr/lib/debug/:/usr/lib/debug/ (gdb) break default_memory_insert_breakpoint(gdbarch*, bp_target_info*) Breakpoint 1 at 0x190c34: file ./gdb/mem-break.c, line 39. (gdb) run Starting program: /usr/bin/gdb --batch --command=\~/test.script Cannot parse expression `.L1207 4@r4'. warning: Probes-based dynamic linker interface failed. Reverting to original interface. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1". Breakpoint 1, default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39 39 ./gdb/mem-break.c: No such file or directory. (gdb) bt #0 default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39 #1 0x004de3aa in bkpt_insert_location (bl=0x922f90) at ./gdb/breakpoint.c:12525 #2 0x004e8426 in insert_bp_location (bl=bl@entry=0x922f90, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec, hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553 #3 0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977 #4 update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177 #5 0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215 #6 0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555 #7 create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x9288c0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562 #8 0x004497bc in svr4_create_probe_breakpoints (objfile=0x933238, probes=0xfffef148, gdbarch=0x9288c0) at ./gdb/solib-svr4.c:2089 #9 svr4_create_solib_event_breakpoints (gdbarch=0x9288c0, address=<optimized out>) at ./gdb/solib-svr4.c:2173 #10 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465 #11 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057 #12 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469 #13 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665 #14 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886 #15 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x874758 "run", from_tty=0) at ./gdb/top.c:630 #16 0x00548760 in command_handler (command=0x874758 "run") at ./gdb/event-top.c:583 #17 0x00606a66 in read_command_file (stream=stream@entry=0x875da8) at ./gdb/top.c:424 #18 0x004684e2 in script_from_file (stream=stream@entry=0x875da8, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592 #19 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x875da8) at ./gdb/cli/cli-cmds.c:568 #20 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604 #21 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=0) at ./gdb/main.c:379 #22 0x00588ea0 in 
captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125 #23 captured_main (data=<optimized out>) at ./gdb/main.c:1147 #24 gdb_main (args=<optimized out>) at ./gdb/main.c:1173 #25 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32 (gdb) p/x bp_tgt->placed_address $1 = 0xf77c9a44 (gdb) p bp_tgt->kind $2 = 4

Sure enough, default_memory_insert_breakpoint is called this time with kind == 4 (ARM_BP_KIND_ARM) which seems to be incorrect. Setting a breakpoint in arm_pc_is_thumb again, we can verify that the reason for this is that the call to lookup_minimal_symbol_by_pc fails:

(gdb) break ./gdb/arm-tdep.c:439
Breakpoint 1 at 0x3ec54: file ./gdb/arm-tdep.c, line 439.
(gdb) run
Starting program: /usr/bin/gdb --command=\~/test.script
Cannot parse expression `.L1207 4@r4'.
warning: Probes-based dynamic linker interface failed.
Reverting to original interface.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "arm-linux-gnueabihf".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".

Breakpoint 1, arm_pc_is_thumb (gdbarch=0x928fb0, memaddr=4152138308) at ./gdb/arm-tdep.c:439
439     ./gdb/arm-tdep.c: No such file or directory.
(gdb) p/x memaddr
$1 = 0xf77c9a44
(gdb) p sym.minsym
$2 = (minimal_symbol *) 0x0

This results in arm_pc_is_thumb returning 0 and arm_breakpoint_kind_from_pc returning ARM_BP_KIND_ARM, which results in arm_sw_breakpoint_from_kind returning the wrong breakpoint instruction sequence.

TL;DR

GDB causes a crash in the dynamic loader (ld.so) on armv7 if ld.so has been stripped of its symbol table, because it is unable to correctly determine the appropriate instruction set when inserting probe event breakpoints.

If you’ve got to the end of this (congratulations), then you’re probably going to be disappointed to hear that I’m not sure what the proper fix is – this isn’t really my area of expertise. For the rustc package, I just added a Build-Depends: libc6-dbg [armhf] as a workaround for now, and that might even be the correct fix. But, it’s certainly nicer to understand why it didn’t work in the first place.

Jonathan Riddell: All New Hitchhiker’s Guide to the Galaxy and Black Sails

Mër, 02/05/2018 - 12:36md

One of the complaints about the new streaming entertainment world is that it removes the collective experience of everyone watching the same programme on telly the night before and then discussing it. At least in the international world I tend to live in, that was never much of an option; instead, discussing the best series from the world of media is now a common topic of conversation when I meet people around the world. So allow me to recommend a couple which seem to have missed many people’s consciousness.

On the original streaming media site, BBC iPlayer Radio, there’s a whole new 6th series of Hitchhiker’s Guide to the Galaxy: 40 years in the making and still full of whimsical, understated comedy about life. Best of all, they’re repeating the 1st and 2nd series and have just started the 3rd.

Back in telly land I was reluctant to pay money for the privilege of spending my life watching telly, but a student discount made Amazon Prime a tempting offer for my girlfriend.  I discovered Black Sails, which is the best telly I’ve ever seen.  A prequel to the Scottish classic Treasure Island, with Captain Flint (who you’ll remember only appears in the original book as a parrot) and John Silver, it impressively mixes in real-life pirates from the 18th-century Caribbean.  The production qualities are superb; filming on water is always expensive (see Waterworld or Titanic or even Lost), and here they had to recreate several near full-size sailing boats.  The plotting is ideal, with allegiances changing every episode or two in a mostly plausible way.  And it successfully ends before running out of energy.  I’m a fan.

Meanwhile on Netflix I wasn’t especially interested in the new Star Trek, but it turns out to include space tardigrades and therefore became much more exciting.

 


Launchpad News: Launchpad news, June 2017 – April 2018

Mar, 01/05/2018 - 8:19md

Once again it’s been a while since we posted a general update, so here’s a changelog-style summary of what we’ve been up to.  As usual, this changelog preserves a reasonable amount of technical detail, but I’ve omitted changes that were purely internal refactoring with no externally-visible effects.

Answers
  • Hide questions on inactive projects from the results of non-pillar-specific searches
Blueprints
  • Optimise the main query on Person:+upcomingwork (#1692120)
  • Apply the spec privacy check on Person:+upcomingwork only to relevant specs (#1696519)
  • Move base clauses for specification searches into a CTE to avoid slow sequential scans
Bugs
  • Switch to HTTPS for CVE references
  • Fix various failures to sync from Red Hat’s Bugzilla instance (#1678486)
Build farm
  • Send the necessary set of archive signing keys to builders (#1626739)
  • Hide the virt/nonvirt queue portlets on BuilderSet:+index if they’d be empty
  • Add a feature flag which can be used to prevent dispatching any build under a given minimum score
  • Write files fetched from builders to a temporary name, and only rename them into place on success
  • Emit the build URL at the start of build logs
Code
  • Fix crash when scanning a Git-based MP when we need to link a new RevisionAuthor to an existing Person (#1693543)
  • Add source ref name to breadcrumbs for Git-based MPs; this gets the ref name into the page title, which makes it easier to find Git-based MPs in browser history
  • Allow registry experts to delete recipes
  • Explicitly mark the local apt archive for recipe builds as trusted (#1701826)
  • Set +code as the default view on the code layer for (Person)DistributionSourcePackage
  • Improve handling of branches with various kinds of partial data
  • Add and export BranchMergeProposal.scheduleDiffUpdates (#483945)
  • Move “Updating repository…” notice above the list of branches so that it’s harder to miss (#1745161)
  • Upgrade to Pygments 2.2.0, including better formatting of *.md files (#1740903)
  • Sort cancelled-before-starting recipe builds to the end of the build history (#746140)
  • Clean up the {Branch,GitRef}:+register-merge UI slightly
  • Optimise merge detection when the branch has no landing candidates
Infrastructure
  • Use correct method separator in Allow headers (#1717682)
  • Optimise lp_sitecustomize so that bin/py starts up more quickly
  • Add a utility to make it easier to run Launchpad code inside lxc exec
  • Convert lp-source-dependencies to git
  • Remove the post-webservice-GET commit
  • Convert build system to virtualenv and pip, unblocking many upgrades of dependencies
  • Use eslint to lint JavaScript files
  • Tidy up various minor problems in the top-level Makefile (#483782)
  • Offering ECDSA or Ed25519 SSH keys to Launchpad SSH servers no longer causes a hang, although it still isn’t possible to use them for authentication (#830679)
  • Reject SSH public keys that Twisted can’t load (#230144)
  • Backport GPGME file descriptor handling improvements to fix timeouts importing GPG keys (#1753019)
  • Improve OOPSes for jobs
  • Switch the site-wide search to Bing Custom Search, since Google Site Search has been discontinued
  • Don’t send email to direct recipients without active accounts
Registry
  • Fix the privacy banner on PersonProduct pages
  • Show GPG fingerprints rather than collidable short key IDs (#1576142)
  • Fix PersonSet.getPrecachedPersonsFromIDs to handle teams with mailing lists
  • Optimise front page, mainly by gathering more statistics periodically rather than on the fly
  • Construct public keyserver links using HTTPS without an explicit port (#1739110)
  • Fall back to emailing the team owner if the team has no admins (#1270141)
Snappy
  • Log some useful information from authorising macaroons while uploading snaps to the store, to make it easier to diagnose problems
  • Extract more useful error messages when snap store operations fail (#1650461, #1687068)
  • Send mail rather than OOPSing if refreshing snap store upload macaroons fails (#1668368)
  • Automatically retry snap store upload attempts that return 502 or 503
  • Initialise git submodules in snap builds (#1694413)
  • Make SnapStoreUploadJob retries go via celery and be much more responsive (#1689282)
  • Run snap builds in LXD containers, allowing them to install snaps as build-dependencies
  • Allow setting Snap.git_path directly on the webservice
  • Batch snap listing views (#1722562)
  • Fix AJAX update of snap builds table to handle all build statuses
  • Set SNAPCRAFT_BUILD_INFO=1 to tell snapcraft to generate a manifest
  • Only emit snap:build:0.1 webhooks from SnapBuild.updateStatus if the status has changed
  • Expose extended error messages (with external link) for snap build jobs (#1729580)
  • Begin work on allowing snap builds to install snapcraft as a snap; this can currently be set up via the API, and work is in progress to add UI and to migrate to this as the default (#1737994)
  • Add an admin option to disable external network access for snap builds
  • Export ISnapSet.findByOwner on the webservice
  • Prefer Snap.store_name over Snap.name for the “name” argument dispatched to snap builds
  • Pass build URL to snapcraft using SNAPCRAFT_IMAGE_INFO
  • Add an option to build source tarballs for snaps (#1763639)
Soyuz (package management)
  • Stop SourcePackagePublishingHistory.getPublishedBinaries materialising rows outside the current batch; this fixes webservice timeouts for sources with large numbers of binaries (#1695113)
  • Implement proxying of PackageUpload binary files via the webapp, since DistroSeries:+queue now assumes that that works (#1697680)
  • Truncate signing key common-names to 64 characters (#1608615)
  • Allow setting a relative build score on live filesystems (#1452543)
  • Add signing support for vmlinux for use on ppc64el Opal (and compatible) firmware
  • Run live filesystem builds in LXD containers, allowing them to install snaps as build-dependencies
  • Accept a “debug” entry in live filesystem build metadata, which enables detailed live-build debugging
  • Accept and ignore options (e.g. [trusted=yes]) in sources.list lines passed via external_dependencies
  • Send proper email notifications about most failures to parse the .changes file (#499438)
  • Ensure that PPA .htpasswd salts are drawn from the correct alphabet (#1722209)
  • Improve DpkgArchitectureCache‘s timeline handling, and speed it up a bit in some cases (#1062638)
  • Support passing a snap channel into a live filesystem build through the environment
  • Add support for passing apt proxies to live-build
  • Allow anonymous launchpad.View on IDistributionSourcePackage
  • Handle queries starting with “ppa:” when searching the PPA vocabulary
  • Make PackageTranslationsUploadJob download librarian files to disk rather than memory
  • Send email notifications when an upload is signed with an expired key
  • Add Release, Release.gpg, and InRelease to by-hash directories
  • After publishing a custom file, mark its target suite as dirty so that it will be published (#1509026)
Translations
  • Fix text_to_html to not parse HTML as a C format string
  • Fall back to the package name from AC_INIT when expanding $(PACKAGE) in translation configuration files if no other definition can be found
Miscellaneous
  • Show a search icon for pickers where possible rather than “Choose…”

Jeremy Bicha: Congratulations Ubuntu and Fedora

Mar, 01/05/2018 - 5:32md

Congratulations to Ubuntu and Fedora on their latest releases.

This Fedora 28 release is special because it is believed to be the first release in their long history to release exactly when it was originally scheduled.

The Ubuntu 18.04 LTS release is the biggest release for the Ubuntu Desktop in 5 years as it returns to a lightly customized GNOME desktop. For reference, the biggest changes from vanilla GNOME are the custom Ambiance theme and the inclusion of the popular AppIndicator and Dock extensions (the Dock extension being a simplified version of the famous Dash to Dock). Maybe someday I could do a post about the smaller changes.

I think one of the more interesting occurrences for fans of Linux desktops is that these releases of two of the biggest Linux distributions occurred within days of each other. I expect this alignment to continue (although maybe not quite as dramatically as this time) since the Fedora and Ubuntu beta releases will happen at similar times and I expect Fedora won’t slip far from its intended release dates again.

Serge Hallyn: Tagged window manager views

Dje, 29/04/2018 - 6:54md

I find myself talking about these pretty frequently, and it seems many people have never actually heard about them, so a blog post seems appropriate.

Window managers traditionally present (for “advanced” users) “virtual” desktops and/or “multiple” desktops. Different window managers will have slightly different implementations and terminology, but typically I think of virtual desktops as being an MxN matrix of screen-sized desktops, and multiple desktops as being some number of disjoint MxN matrices. (In some cases there are only multiple 1×1 desktops) If you’re a MacOS user, I believe you’re limited to a linear array (say, 5 desktops), but even tvtwm back in the early 90s did matrices. In the late 90s Enlightenment offered a cool way of combining virtual and multiple desktops: As usual, you could go left/right/up/down to switch between virtual desktops, but in addition you had a bar across one edge of the screen which you could use to drag the current desktop so as to reveal the underlying desktop. Then you could do it again to see the next underlying one, etc. So you could peek and move windows between the multiple desktops.

Now, if you are using a tiling window manager like dwm, wmii, or awesome, you may think you have the same kinds of virtual desktops. But in fact what you have is a clever ‘tagged view’ implementation. This lets you pretend that you have virtual desktops, but tagged views are far more powerful.

In a tagged view, you define ‘tags’ (like desktop names), and assign one or more tags to each window. Your current screen is a view of one or more tags. In this way you can dynamically switch the set of windows displayed.

For instance, you could assign tag ‘1’ or ‘mail’ to your mail window; ‘2’ or ‘web’ to your browser; ‘3’ or ‘work’ as well as ‘1’ to one terminal, and ‘4’ or ‘notes’ to another terminal. Now if you view tag ‘1’, you will see the mail and first terminal; if you view 1+2, you will see those plus your browser. If you view 2+3, you will see the browser and first terminal but not the mail window.
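
If you want to picture how a tiling window manager implements this, dwm-style managers represent tags as bits in a mask, and a window is shown whenever its tag mask intersects the currently viewed mask. Here is a tiny illustrative sketch of that idea (not code from any particular window manager), using the example tags above:

#include <stdio.h>
#include <stdint.h>

/* Tags as bit positions: 1=mail, 2=web, 3=work, 4=notes. */
enum { TAG_MAIL = 1u << 0, TAG_WEB = 1u << 1, TAG_WORK = 1u << 2, TAG_NOTES = 1u << 3 };

struct window { const char *name; uint32_t tags; };

int main(void)
{
    struct window wins[] = {
        { "mail client",    TAG_MAIL },
        { "browser",        TAG_WEB },
        { "work terminal",  TAG_WORK | TAG_MAIL },  /* tagged both 'work' and '1' */
        { "notes terminal", TAG_NOTES },
    };
    uint32_t view = TAG_WEB | TAG_WORK;  /* "view tags 2+3" */

    for (size_t i = 0; i < sizeof wins / sizeof wins[0]; i++)
        if (wins[i].tags & view)         /* visible if any of its tags is in the view */
            printf("visible: %s\n", wins[i].name);
    return 0;
}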

As you can see, if you don’t know about this, you can continue to use tagged views as though they were simply multiple desktops. But you can’t get this flexibility with regular multiple or virtual desktops.

(This may be a case where a video would be worth more than a bunch of text.)

So in conclusion – when I complain about the primitive window manager on MacOS, I’m not just name-calling. A four-finger gesture to expose the 1xN virtual desktops just isn’t nearly as useful as being able to precisely control which windows I see together on the desktop.

Full Circle Magazine: issue 132

Pre, 27/04/2018 - 9:33md

This month:
* Command & Conquer
* How-To : Python, Freeplane, and Ubuntu Touch
* Graphics : Inkscape
* Everyday Linux
* Researching With Linux
* My Opinion
* My Story
* Book Review: Cracking Codes With Python
* Ubuntu Games: Dwarf Fortress
plus: News, Q&A, and much more.


Lubuntu Blog: Lubuntu 18.04 LTS (Bionic Beaver) Released!

Pre, 27/04/2018 - 8:14pd
Thanks to all the hard work from our contributors, Lubuntu 18.04 LTS has been released! With the codename Bionic Beaver, Lubuntu 18.04 LTS is the 14th release of Lubuntu, with support until April of 2021. What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal […]

Ubuntu Studio: Ubuntu Studio 18.04 Released

Pre, 27/04/2018 - 7:11pd
We are happy to announce the release of our latest version, Ubuntu Studio 18.04 Bionic Beaver! Unlike the other Ubuntu flavors, this release of Ubuntu Studio is not a Long-Term Support (LTS) release. As a regular release, it will be supported for 9 months. Although it is not a Long-Term Support release, it is still […]

Xubuntu: Xubuntu 18.04 released!

Pre, 27/04/2018 - 1:13pd

The Xubuntu team is happy to announce the immediate release of Xubuntu 18.04. Xubuntu 18.04 is a long-term support (LTS) release and will be supported for 3 years, until April 2021.

The final release images are available as torrents immediately from the links below.

  • 64-bit systems
  • 32-bit systems

The images are also available as direct downloads from xubuntu.org/getxubuntu/. As the main server and mirrors might be busy for the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Highlights and Known Issues

The below is just a quick peek at the most important highlights and issues. More updates and new features are listed on the very thorough release notes.

Highlights
  • Some GNOME applications are replaced with corresponding MATE applications for improved consistency with almost identical set of features
  • The Sound Indicator plugin is replaced with the Xfce PulseAudio plugin in the panel, improving control of the volume and of multimedia applications from the panel
  • The new xfce4-notifyd panel plugin is included, allowing you to easily toggle “Do Not Disturb” mode for notifications as well as view missed notifications
  • Significantly improved menu editing with a new MenuLibre version
  • Better support for HiDPI screens, better consistency and other improvements from the Greybird GTK+ theme
Known Issues
  • The “Force UEFI installation” dialog has non-working Go Back/Continue buttons (1724482).
  • The automatically selected keyboard layout does not necessarily match the chosen region (1706859).

As always, check Launchpad for bugs related to your hardware to make sure there aren’t any critical bugs that could potentially make your system unbootable or otherwise unusable.

Corey Bryant: OpenStack Queens for Ubuntu 18.04 LTS

Pre, 27/04/2018 - 12:54pd

Hi All,

It’s release day!

With today’s release of Ubuntu 18.04 LTS (the Bionic Beaver) the Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Queens on Ubuntu 18.04 LTS. This release of Ubuntu is a Long Term Support release that will be supported for 5 years.

Further details for the Ubuntu 18.04 release can be found at: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes.

And further details for the OpenStack Queens release can be found at:  https://www.openstack.org/software/queens

Ubuntu 18.04 LTS

No extra steps are required; just start installing OpenStack!

Ubuntu 16.04 LTS

If you’re interested in OpenStack Queens on Ubuntu 16.04, please refer to this post which coincided with the upstream OpenStack Queens release.

Packages

The 18.04 archive includes updates for:

aodh, barbican, ceilometer, ceph (12.2.4), cinder, congress, designate, designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions, please refer to [0].

Branch Package Builds

If you want to try out the latest updates to stable branches, we are delivering continuously integrated packages on each upstream commit in the following PPA’s:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
sudo add-apt-repository ppa:openstack-ubuntu-testing/pike
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Bear in mind these are built per commit (with checks for new commits every 30 minutes at the moment), so your mileage may vary from time to time.

Reporting bugs

If you run into any issues please report bugs using the ‘ubuntu-bug’ tool:

sudo ubuntu-bug nova-conductor

this will ensure that bugs get logged in the right place in Launchpad.

Thank you to all who contributed to OpenStack Queens and Ubuntu Bionic both upstream and in Debian/Ubuntu packaging!

Regards,
Corey
(on behalf of the Ubuntu OpenStack team)

[0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html

Ubuntu MATE: Ubuntu MATE 18.04 LTS Final Release

Pre, 27/04/2018 - 12:00pd

Charles Babbage wasn't lying when he said "The only thing that would make my Difference Engine any better would be a modern customisable desktop environment that didn't deviate from traditional desktop paradigms (unless I wanted it to.)" In a long lost diary entry Ada Lovelace scribbled "If only my code could be matched to an OS that had a perfect blend of usability and style accompanied by a handpicked selection of quality software packages." ENIAC, moments before being unplugged in 1956, spat out a final message: "Give us a reboot when Ubuntu MATE 18.04 LTS is out will ya?"

Dust off 20,000 vacuum tubes and check those 5,000,000 hand soldered joints because Ubuntu MATE 18.04 LTS is here and it's time to power it up.

MATE Desktop 1.20.1 inclusion alone is enough to make Babbage weep with joy but there is still more. Caja is primed to encrypt your secrets with GnuPG, and with bulk renaming built right in to the file manager you can finally deal with those pesky family reunion photos.

Got a fancy new display and itty bitty pixels? HiDPI support bounds into this LTS, it's so dynamic you won't know what to do with yourself. We have tweaked desktop layouts, improved global menus, refined our Head-Up Display (HUD) and updated Brisk Menu. It's dandy!

We could scream it all here in this blurb, instead we suggest you take a scroll through the notes below and behold the majesty that is Ubuntu MATE 18.04 LTS.

We will scream "Thank You!" however. A "Thank You!" to everyone who contributed code, documentation, artwork, bug reports or translations. A "Thank You!" to the members of our community forum who offer advice and support to those who request it. A "Thank You!" to everyone who has supported the Ubuntu MATE crowd funding that helps reward and incentivise developers who work on MATE Desktop, Ubuntu MATE and associated technologies in their spare time. You are the 20,000 vacuum tubes and the 5,000,000 hand soldered joints that make up Ubuntu MATE. We couldn't be prouder.


No one reads the release notes, isn't that right DasGeek? So when our friend Stuart Langridge was reviewing our draft release notes and commented that they didn't speak to him, we thought "all right, we can fix that". Stuart, since you are such a special snowflake and no one else will read these notes, here they are, bespoke release notes just for you!

What changed since the Ubuntu MATE 16.04 LTS release?

Just about everything! Ubuntu MATE 18.04 is rammed to the rafters with new features and improvements compared to 16.04.

MATE Desktop 1.20.1

The MATE Desktop has transitioned from the GTK 2.24 based MATE 1.12 to the very latest MATE 1.20.1 based on GTK 3.22. This migration has been several years in the making, and most of 2016 and 2017 was spent refining the GTK3 implementation. The move to GTK3 has made it possible to introduce many of the new features you'll read about below.

Support for libinput has been added and is now the default input handler for mouse and touchpad, which has resulted in much improved responsiveness and support for multi-finger touch gestures.

Thanks to our friends at Hypra.fr accessibility support (particularly for visually impaired users) has seen continued development and improvement. MATE Desktop is proud to provide visually impaired users the most accessible open source desktop environment.

HiDPI

High DPI displays have a high resolution relative to their physical size that results in an increased pixel density compared to standard DPI displays. They are mostly found in high-end laptops and monitors. Our friends at elementary OS wrote a great blog post explaining What is HIDPI and why does it matter.

MATE Desktop 1.20 supports HiDPI displays, and if you have one then Ubuntu MATE will automatically enable pixel scaling, presenting you with a super crisp desktop and applications. HiDPI hints for Qt applications are also pushed to the environment to improve cross-toolkit integration. Every aspect of Ubuntu MATE (its themes, its applications, its icons, its toolkit assets) has been updated to take advantage of HiDPI.

Should you have a HiDPI display and want to disable HiDPI scaling you can do so via MATE Tweak (for the desktop) and Login Window (for the greeter), both are available in the Control Centre.

The File Manager (Caja)

We've added some new features to the file manager (Caja).

  • Added Advanced bulk rename - A batch renaming extension.
  • Added Encryption - An extension which allows encryption and decryption of files using GnuPG.
  • Added Hash checking - An extension for computing and validating message digests or checksums.
  • Added Advanced ACL properties - An extension to edit access control lists (ACLs) and extended attributes (xattr).
  • Updated Folder Color - An extension for applying custom colours and emblems to folders and files.
  • Replaced the deprecated caja-gksu with caja-admin which uses PolicyKit to elevate permissions in the file manager for administrative tasks.

gksu is deprecated and being removed from Debian. We are aligning with that objective by replacing all use of gksu with PolicyKit.


Window Manager (Marco)

If your hardware/drivers support DRI3 then the window manager (Marco) compositing is now hardware accelerated. This dramatically improves 3D rendering performance, particularly in games. If your hardware doesn't support DRI3 then Marco will fall back to a software compositor.

Marco now supports drag-to-quadrant window tiling, cursor keys can be used to navigate the Alt+Tab switcher, and keyboard shortcuts to move windows to another monitor have been added.

Desktop layouts

Using MATE Tweak you can try out the various desktop layouts to find one that suits you, and either stick with it or use it as a basis to create your own custom desktop layout.

A new layout has been added to the collection for Ubuntu MATE 18.04. It is called Familiar and is based on the Traditional layout, with the menu-bar (Applications, Places, System) replaced by Brisk Menu. Familiar is now the default layout. Traditional will continue to be shipped, unchanged, and will be available via MATE Tweak for those who prefer it.

Here are some screenshots of the desktop layouts included in Ubuntu MATE to give you a feel for how you can configure your desktop experience.



  • Familiar - the default experience, a familiar two panel layout with a searchable menu
  • Mutiny - application dock, searchable launcher and global menus similar to Unity 7
  • Cupertino - a dock and top panel with searchable launcher and global menus similar to macOS
  • Redmond - single bottom panel with a searchable menu, similar to the taskbar in Windows
  • Pantheon - a dock and top panel with a searchable menu
  • Contemporary - modernised two panel layout featuring a searchable menu with global menus
  • Netbook - a compact, single top panel layout, ideal for small screens
  • Traditional - two panel layout featuring the iconic 'Applications, Places, System' menu

In order to create or improve the desktop layouts described above we've spent the last two years working on a number of projects across the MATE ecosystem that have enabled us to offer 8 different desktop layouts, each providing a different desktop experience. Here's some of the projects we worked on to make it all possible.

Super key

Super key (also known as the Windows key) support is available in the majority of the desktop layouts. This means Super can be used to activate the menus/launchers, and other key-bindings that include the Super key also continue to function correctly.

MATE Dock Applet, used in the Mutiny layout, also supports launching or switching to docked items based on their position in the dock using Super + 1, Super + 2, which will be familiar to Unity 7 users. Super + L is also recognised as a screen lock key-binding along with the usual Ctrl + Alt + L that MATE Desktop users expect.

Global Menu

The Global Menu implementation has switched from TopMenu to Vala Panel Appmenu which is compatible with GTK, Qt, LibreOffice, Firefox/Thunderbird, Google Chrome, Electron and others.

Global Menus are integrated in the Mutiny and Cupertino desktop layouts but can be added to any panel, for those who just prefer to use global menus or those who want to maximise screen space available to their applications.


Indicators

Ubuntu MATE 18.04 now uses Indicators by default in all layouts. If you've used Ubuntu, these will be familiar. Indicators offer better accessibility support and ease of use over notification area applets. The volume in Indicator Sound can now be overdriven, so it is consistent with the MATE sound preferences. Notification area applets are still supported as a fallback.

We've been improving Indicator support from release to release for some time now. In Ubuntu MATE 17.10 many of the panel layouts offered a complete line up of Indicators, all of which are fully compatible with MATE. The default Indicators are:

  • Optimus (only available if you have nvidia prime capable hardware and drivers)
  • Bluetooth
  • Network
  • Power
  • Messages
  • Sound
  • Session
MATE Dock Applet

MATE Dock Applet is used in the Mutiny and Netbook layouts, but anyone can add it to a panel to create custom panel arrangements. MATE Dock Applet has seen many improvements over the last 2 years, here are some of the highlights:

Icon scrolling

Icon scrolling is automatically enabled when the dock applet runs out of space on the panel to expand into. Move the mouse over either the first or last icon in the dock; if scrolling is possible in that direction, the icon will darken and an arrow will be displayed over it. If you hover the mouse pointer over an icon in this state, the dock will scroll in the indicated direction. Icon scrolling is automatically configured and enabled when using the Mutiny desktop layout; when using any other layout, scrolling can be enabled via the MATE Dock Applet preferences.

Icon matching

MATE Dock Applet no longer uses its own method of matching icons to applications and instead uses BAMF. This means the applet is a lot better at matching applications and windows to their dock icons.

Assorted improvements
  • Window lists and action lists now have rounded corners and point to their icon in the dock.
  • The delay before action lists appear when the mouse hovers over a dock icon can now be set in the preferences dialog.
  • Apps can now be pinned to specific workspaces, in other words their app icons only appear in the dock when a particular workspace is active. This allows users to customise the dock for each workspace they use.
  • When unpinning an application a notification is now displayed which allows the operation to be undone.
  • The appearance of progress bars on dock icons has been improved.
  • Popup windows (action lists and window lists) no longer steal focus from other windows.
Brisk Menu

Brisk Menu is an efficient, searchable, menu for the MATE Desktop. We've collaborated with the Solus Project, the maintainers of Brisk Menu. A number of features have been added so that, like Ubuntu MATE itself, Brisk Menu is chameleonic. You'll find Brisk Menu is used in several of the Ubuntu MATE desktop layouts and is presented slightly differently in each.

The Mutiny and Cupertino desktop layouts make use of a new dash-style launcher, which enables a fullscreen searchable application launcher while the other layouts present Brisk Menu as a more traditional menu.


MATE Window Applets

MATE Window Applets make it possible to add window controls (maximise, minimise and close) to a panel. We used Window Applets to enhance the Mutiny and Netbook layouts so that both will now remove window controls from maximised windows and relocate those controls to the panel.


Head-Up Display

A favourite of Unity 7 users is the Head-Up Display (HUD) which provides a way to search for and run menu-bar commands without your fingers ever leaving the keyboard. The HUD can be enabled via MATE Tweak. You activate the HUD by tapping Alt, you then enter a search query to find menu items, highlight the one you want and press enter to trigger it.

If you're trying to find that single filter in Gimp but can't remember which filter category it fits into or if you can't recall if preferences sits under File, Edit or Tools in your favourite browser, you can just search for it rather than hunting through the menus.

The purpose of the HUD is to keep your fingers on the keyboard and improve efficiency when driving the menus for keyboard-centric users. We've locally integrated the HUD for similar reasons; if you're looking at an application, why move the HUD to the top of the screen, away from where your eyes are already focused? Keeping the HUD within the context of the active application eliminates refocusing your attention to a different part of the screen, which is particularly helpful for users with high resolution or multi-display workstations.


The HUD now has a 250ms (default) timeout; holding Alt for longer than that won't trigger the HUD. This is consistent with how the HUD in Unity 7 works. The HUD is also HiDPI aware now.

MATE Tweak

MATE Tweak can now toggle HiDPI mode between auto detection, regular scaling and forced scaling. HiDPI mode changes are dynamically applied and we've added a button to launch the Font preferences so users with HiDPI displays can fine tune their font DPI.

MATE Tweak has a deep understanding of Brisk Menu and Global Menu capabilities and manages them transparently while switching layouts. Switching layouts is far more reliable now too. We've removed the Interface section from MATE Tweak. Sadly, all the features the Interface section tweaked have been dropped from GTK3, making the whole section redundant. When saving a panel layout the Dock status will be saved too.


Ubuntu MATE Welcome

Welcome and Boutique have been given some love. The software listings in the Boutique have been refreshed, with some applications being removed, many updated and some new additions. Welcome now has snappier animations and transitions. Applications selected for installation or removal via the Software Boutique are now added to a queue, so you can select several installs and removals and process them all at once.

Browser Selection

A new Browser Selection screen has been added so you can quickly install your preferred browser.


System telemetry

Ubuntu MATE Welcome can submit anonymised system information, generated during an install or upgrade, that will help the developers better understand what devices Ubuntu MATE is being used on. This data will be transmitted one time only and includes basic system components but nothing that is uniquely identifiable. Here is an example telemetry report from the workstation of the Ubuntu MATE lead developer. We kindly request that if you install Ubuntu MATE you participate in sending us a telemetry report.


General improvements

Minimal Installation

The Minimal Install is a new option presented in the installer that will install just the MATE Desktop, its utilities, its themes and Firefox. All the other applications such as office suite, email client, video player, audio manager, etc. are not installed. If you're interested, here is the complete list of software that is removed from a full Ubuntu MATE 18.04 installation to make the minimal install.


So, who's this aimed at? There are users who like to uninstall the software they do not need or want to build out their own desktop experience. So for those users, a minimal install is a great platform to build on. For those of you interested in creating "kiosk" style devices, such as homebrew Steam machines or Kodi boxes, then a minimal install is another useful starting point.

Documentation

The Ubuntu MATE Guide is a comprehensive introduction to MATE Desktop and Ubuntu MATE including how to use everything we ship by default, along with detailed instruction on how to tailor, tweak and customise Ubuntu MATE to suit your preferences.


Buy the books

Print and ebook versions of the books Ubuntu MATE: Upgrading from Windows or OSX and Using Ubuntu MATE and Its Applications are available from our shop.

Slick Greeter

Ubuntu MATE switched to Slick Greeter during the 17.10 development cycle, which still uses LightDM under the hood but is far more attractive and HiDPI aware.


Slick Greeter Settings

We worked with our friends at Lubuntu and Ubuntu Budgie to land a configuration utility for Slick Greeter just moments before the final freeze window closed for 18.04.


Artwork

Themes

The Ubuntu MATE themes have been uplifted from GTK2 to GTK3 including the addition of a new dark variant of the Ambiant-MATE theme. We've worked tirelessly on all the Ubuntu MATE themes making them fully compatible with GTK 3.22 and ensuring every pixel is placed exactly where it should be. Michael Tunnel from TuxDigital retouched countless art assets for the Ubuntu MATE themes including scaled variants for use on HiDPI displays. The Ubuntu MATE icon theme was given a facelift thanks to our friends at elementary OS and the default mouse pointer cursors use the new upstream MATE theme which is also HiDPI aware. Finally, blink and you'll miss it, the Ubuntu MATE Plymouth theme (boot logo) is now HiDPI aware.

Backgrounds

We are no longer shipping mate-backgrounds by default. They have served us well, but are looking a little stale now. We have created a new selection of high quality wallpapers comprised of some abstract designs and high resolution photos from unsplash.com.

Emoji

We've switched to Noto Sans for users of Japanese, Chinese and Korean fonts and glyphs. MATE Desktop 1.20 supports emoji input, so we've added a colour emoji font too.

You can enter emoji in one of two ways. Type Ctrl + Shift + e and an 'e' prompt will appear; type a familiar emoticon, such as :-), and it will automatically change to a glyph. Alternatively, you can right click in the input area and select Insert Emoji, which will display the emoji picker below.


Major Applications

Accompanying MATE Desktop 1.20.1 and Linux 4.15 are Firefox 59.0.2, VLC 3.0.1, LibreOffice 6.0.3.2 and Thunderbird 52.7.0.


See the Ubuntu 18.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 18.04 LTS

We've redesigned the download page so it's even easier to get started.

Upgrading from Ubuntu MATE 16.04 or 17.10
  • Open the "Software & Updates" from the Control Center.
  • Select the 3rd Tab called "Updates".
  • Set the "Notify me of a new Ubuntu version" dropdown menu to "Long-term support versions".
  • Press Alt+F2 and type in update-manager into the command box.
  • Update Manager should open up and tell you: New distribution release '18.04' is available.
    • If not you can also use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click "Upgrade" and follow the on-screen instructions.
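If you prefer to drive the upgrade from a terminal instead, the standard Ubuntu release upgrader works too. A minimal sketch, assuming your 16.04 or 17.10 install is otherwise healthy:

# Bring the current release fully up to date first
sudo apt update && sudo apt full-upgrade

# Start the upgrade to 18.04 (honours the "Long-term support versions" setting)
sudo do-release-upgrade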
Get the Ubuntu MATE snaps

When the upgrade is complete and you're logged in, open a terminal and execute:

snap install ubuntu-mate-welcome --classic
snap install software-boutique --classic
snap install pulsemixer

The snap packages above are installed when performing a clean install of Ubuntu MATE 18.04, but are not automatically installed when upgrading from an earlier release.
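If you want to double-check that the snaps landed after an upgrade, something like this should do (assuming the package names above):

snap list ubuntu-mate-welcome software-boutique pulsemixer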

Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi around the time 18.04.1 is released, which should be sometime in July. It takes about a month to get the Raspberry Pi images built and tested and we simply didn't have time to do it in time for the April release of 18.04.

Known Issues

Here are the known issues.

Ubuntu MATE
  • Anyone upgrading from Ubuntu MATE 16.04 or 17.10 may need to use MATE Tweak to reset the panel layout to one of the bundled layouts post upgrade.
    • Migrating panel layouts, particularly those without Indicator support, is hit and miss. Mostly miss.
  • Choosing Install Ubuntu MATE from the boot menu on HiDPI displays will not display Indicators in the installer. However, installs will still complete successfully.
    • This issue only affects HiDPI display users and the workaround is to Try Ubuntu MATE without installing and run the installer from the live desktop session.
Ubuntu family issues

This is our known list of bugs that affect all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

Kees Cook: UEFI booting and RAID1

Pre, 20/04/2018 - 2:34pd

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running EFI/BOOT/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.
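For completeness, that manual approach usually looks something like the following sketch; the device names, mount point and loader path here are illustrative assumptions, not a recommendation:

# Clone the primary ESP (mounted at /boot/efi, backed by /dev/sda1)
# onto the second drive's ESP
mount /dev/sdb1 /mnt
rsync -a --delete /boot/efi/ /mnt/
umount /mnt

# Register a separate NVRAM boot entry for the second drive
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "debian (sdb)" --loader '\EFI\debian\shimx64.efi'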

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
    ...
    1, 1.0, 1.1, 1.2 default
        Use the new version-1 format superblock.  This has fewer
        restrictions.  It can easily be moved between hosts with
        different endian-ness, and a recovery operation can be
        checkpointed and restarted.  The different sub-versions store
        the superblock at different locations on the device, either at
        the end (for 1.0), at the start (for 1.1) or 4K from the start
        (for 1.2).  "1" is equivalent to "1.2" (the commonly preferred
        1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).
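One way to notice that kind of divergence before it bites is to checksum the data region of each member while the array is stopped. A rough sketch, assuming /dev/sda1 and /dev/sdb1 are the members and that only the trailing md 1.0 superblock is allowed to differ:

# Stop the array so nothing is writing to the members
mdadm --stop /dev/md100

# Hash everything except the last 1 MiB, which holds the per-device
# md 1.0 superblock and so legitimately differs between members
for dev in /dev/sda1 /dev/sdb1; do
    size=$(( $(blockdev --getsize64 "$dev") - 1024*1024 ))
    printf '%s  ' "$dev"
    head -c "$size" "$dev" | sha256sum
done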

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Add higher-level knowledge of the root-filesystem RAID configuration so that a collection of filesystems is kept manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc.)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the latter option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)
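Assuming the unit above is saved as /etc/systemd/system/efi-resync.service (the name is my own choice, not from the post), enabling it is the usual:

systemctl daemon-reload
systemctl enable efi-resync.service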

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…
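As a rough sketch of that boot-time rebuild, the oneshot unit above could gain an extra step to recreate the loop device before assembly. The fixed loop device name and image path are my assumptions, and this still doesn't answer the "prefer that partition" question:

[Unit]
Description=Resync /boot/efi RAID (loopback-backed copy)
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
# Recreate the loop device backing the pristine ESP image before assembling
ExecStart=/sbin/losetup /dev/loop7 /boot/efi.img
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target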

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Rhonda D'Vine: Diversity Update

Enj, 19/04/2018 - 9:53md

I have to apologise for being silent for so long. Way too many things happened. In fact I already wrote most of this last fall, but then something happened that impacted me too much to finalize this entry. And with that I want to go into a bit of detail about how I write my blog entries:
I start by writing them in English, I like to cross-reference things, and after I'm done I go over it and write it again in German. That process helps me proof-read the English part, but it also means that it takes a fair amount of time. And the longer the entries get, the more energy the translation and proof-reading part takes, too. That's mostly also the reason why I tend to write longer entries when I find the energy and time for it.

Anyway, the first thing that I want to mention here finally happened last June: I officially got my name and gender/sex marker changed in my papers! That was a very happy moment in so many ways. A week later I got my new passport and finally managed to book my flight to Debconf in my name. Yay me, I exist!

Then, Stretch was released. I have to admit I had very little to do, wasn't involved in the release process, neither from the website team nor anywhere else because ...

... because I was packing my stuff that weekend, because on June 21st, a second thing finally happened: I got the keys to my flat in the Que[e]rbau!! Yes, I'm aware that we still need to work on the website. The building company actually did make a big event out of it, called every single person onto stage and handed over the keys. And it made me happy to be able to receive my key in my name and not one I don't relate to since a long while anymore. It did hurt seeing that happening to someone else from our house, even though they knew what the Que[e]rbau is about ... And: I moved right in the same day. Gave up my old flat the following week, even though I didn't have much furniture nor a kitchen but I was waiting way too long to be able to not be there. And just watch that sunset from my balcony. <3

And I mentioned it in the last blog post already, the European Lesbian* Conference organization needed more and more work, too. The program for it started to finalize, but there were still more than enough things to do. I totally fell into this, this was the first time I really felt what intersectionality means and that it's not just a label but an internal part of this conference. The energy going on in the team on that grounds is really outstanding, and I'm totally happy to be part of this effort.

And then came along Debconf17 in Montreal. It was nice to be with a fair amount of people that grew on me like a family over the years. And interestingly I got the notice that there was a Trans March going on, so I joined that. It was a pleasure meeting Sophie LaBelle and Chase Ross there. I wasn't aware that Chase was from Montreal, so that part was a surprise. Sophie I knew, and I brought her back to Vienna in November, right before the Transgender Day of Remembrance. :)

But one of the two moving speeches at the march was from Charlie Rose, titled My Gender Is Black. I managed to get a recording of this and another great speech from another Black Lives Matter activist, and hope I'll be able to put them online at some point. For the time being the link to the text should be able to help.

And then Debconf itself started. And I held the Debian Diversity Round Table. While the title might have been misleading, because this group isn't officially formed yet, it turned out to attract a fair amount of interest. I started off with why I called for it, and that I intentionally chose not to have it video taped so people would be able to speak more freely. After a short introduction round with names, pronouns and other things people wanted to share, we had some interesting discussions on why people think this is a good idea and what direction to move in. A few ideas did spring up, and then ... time ran out. So we scheduled a continuation BoF to further develop the topic. At the end of that we came up with a pretty good consensual view on how to move forward. Unfortunately I haven't managed to follow up on that yet and feel quite bad about it. :/

Because, after returning, getting back into work, and needing a bit more time for EL*C I started to feel serious pain in my back and my leg which seems to be a slipped disc and was on sick leave for about two months. The pain was too much, I even had to stay at the hospital for two weeks because my stomach acted up too.

At the end of October we had a grand opening: We have a community space in our Que[e]rbau in which we built sort of a bar, with cooking facility and hi-fi equipment. And we intentionally opened it up to the public. It's name is Yella Yella! Nachbar_innentreff. We named it after Yella Hertzka who was an important feminist at the start of the 20th century. The park on the other side of the street is called Yella Hertzka park, so the pun in the name with the connection to the arabic proverb Yalla Yalla is intentional.

With the Yella Yella a fair amount of internal discussions emerged, we all only started to live together, so naturally this took a fair amount of energy and discussions. Things take time to get a feeling for all the people. There were several interviews made, and events to get organized to get it running.

And then all of a sudden it was 2018 and I still hadn't published this post. I'm sorry 'bout that, but sometimes there are other things needing time. And here I am. Time moves on even if we don't look at it.

A recent project that I had the honor to be part of is my movement is limitless [trans_non-binary short]. It was interesting to think about whether gender identity affects the way you dance, and to see and hear other people's approaches to it.

At the upcoming Linuxtage Graz there will be a session about Common misconceptions about names and spaces and communities, because they were enforcing a realname policy at a community event. Not only is this a huge issue for trans people, but it also works against privacy researchers or people from the community that no one really knows by the name in their papers. The discussions that happened on twitter or in the background were partly a fair bit disturbing. Let's hope that we'll manage to make a good panel.

Which brings us to a panel for the upcoming Debconf in Taiwan. There is a suggestion to have a Gender Forum at the Openday. I'm still not completely sure what it should cover or what is expected for it and I guess it's still open for suggestions. There will be a plan, let's see to make it diverse and great!

I won't promise to send the next update sooner, but I'll try to get back into it. Right now I'm also working on a (German language) submission for a non-binary YouTube project and it would be great to see that thing lift off. I'll be more verbose on that front.

Thanks for reading so far, and read you soon. :)

/personal | permanent link | Comments: 0 |
