
Planet Debian


The diameter of German+English

Wed, 23/05/2018 - 8:30am

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: removing attributes like [ugs.], [regional], {f}, the to in front of verbs and other embellishments.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest among all shortest paths, i.e. the distance between the two (connected) nodes that are farthest apart. (A rough sketch of these steps follows below.)
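
For illustration, here is a minimal sketch of steps 2–4 in Python. The file name dict.txt, the exact canonicalization, and the double-BFS shortcut (which only gives a lower bound on the diameter of one component, not the exact value computed for the post) are my assumptions, not the original code:

from collections import defaultdict, deque
import re

def canonicalize(word):
    # strip bracketed attributes like [ugs.] or {f} and a leading "to " on verbs
    word = re.sub(r'\[.*?\]|\{.*?\}', '', word).strip()
    return word[3:] if word.startswith('to ') else word

adj = defaultdict(set)                      # undirected, bipartite graph
with open('dict.txt', encoding='utf-8') as f:
    for line in f:
        if line.startswith('#'):
            continue
        cols = line.rstrip('\n').split('\t')
        if len(cols) < 2:
            continue
        de, en = canonicalize(cols[0]), canonicalize(cols[1])
        if de and en:
            adj[('de', de)].add(('en', en))
            adj[('en', en)].add(('de', de))

def bfs_farthest(start):
    # plain BFS, returning the node farthest from start and its distance
    dist = {start: 0}
    queue, far = deque([start]), start
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                queue.append(v)
    return far, dist[far]

# double sweep: farthest node from an arbitrary word, then farthest from that
u, _ = bfs_farthest(('en', 'fresh'))
v, d = bfs_farthest(u)
print(u, v, d)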

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: The English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if one exists. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).
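
Such a two-word path finder would be a small extension of the sketch above; again this is only an illustration, reusing the adj graph built earlier:

from collections import deque

def find_path(src, dst):
    # BFS with parent pointers; returns the chain of words or None
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # no chain of translations connects the two words

print(find_path(('en', 'fresh'), ('de', 'Geklopfe')))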

Or I could throw in additional languages!

What would you do?

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Home Automation: Graphing MQTT sensor data

Tue, 22/05/2018 - 11:28pm

So I’ve set up an MQTT broker and I’m feeding it temperature data. How do I actually make use of this data? Turns out collectd has an MQTT plugin, so I went about setting it up to record temperature over time.

First problem was that although the plugin supports MQTT/TLS it doesn’t support it for subscriptions until 5.8, so I had to backport the fix to the 5.7.1 packages my main collectd host is running.

The other problem is that collectd is picky about the format it accepts for incoming data. The topic name should be of the format <host>/<plugin>-<plugin_instance>/<type>-<type_instance> and the data is <unixtime>:<value>. I modified my MQTT temperature reporter to publish to collectd/mqtt-host/mqtt/temperature-study, changed the publish line to include the timestamp:

publish.single(pub_topic, str(time.time()) + ':' + str(temp), hostname=Broker, port=8883, auth=auth, tls={})

and added a new collectd user to the Mosquitto configuration:

mosquitto_passwd -b /etc/mosquitto/mosquitto.users collectd collectdpass

And granted it read-only access to the collectd/ prefix via /etc/mosquitto/mosquitto.acl:

user collectd
topic read collectd/#

(I also created an mqtt-temp user with write access to that prefix for the Python script to connect to.)
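
Putting the pieces together, a minimal sketch of the publisher side (the helper name and defaults are my own, based on the mqtt-temp script in the post further down, not code from this post):

import time
import paho.mqtt.publish as publish

Broker = 'mqtt-host'
auth = {'username': 'user2', 'password': 'bar'}

def publish_for_collectd(value, host='mqtt-host', plugin='mqtt',
                         type_instance='temperature-study'):
    # topic follows <host>/<plugin>/<type>-<type_instance> under the collectd/
    # prefix; the payload is <unixtime>:<value>, as the collectd mqtt plugin expects
    topic = 'collectd/{}/{}/{}'.format(host, plugin, type_instance)
    payload = '{}:{}'.format(time.time(), value)
    publish.single(topic, payload, hostname=Broker, port=8883, auth=auth, tls={})

publish_for_collectd(17.875)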

Then, on the collectd host, I created /etc/collectd/collectd.conf.d/mqtt.conf containing:

LoadPlugin mqtt
<Plugin "mqtt">
        <Subscribe "ha">
                Host "mqtt-host"
                Port "8883"
                User "collectd"
                Password "collectdpass"
                CACert "/etc/ssl/certs/ca-certificates.crt"
                Topic "collectd/#"
        </Subscribe>
</Plugin>

I had some initial problems when I tried setting CACert to the Let’s Encrypt certificate; it actually wants to point to the “DST Root CA X3” certificate that signs that. Alternatively, using the full set of installed root certificates, as I’ve done, works too. Of course the errors you get back are just of the form:

collectd[8853]: mqtt plugin: mosquitto_loop failed: A TLS error occurred.

which is far from helpful. Once that was sorted collectd started happily receiving data via MQTT and producing graphs for me:

This is a pretty long-winded way of ending up with some temperature graphs - I could have just graphed the temperature sensor using collectd on the Pi to send it to the monitoring host, but it has allowed a simple MQTT broker, publisher + subscriber setup with TLS and authentication to be constructed and confirmed as working.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

rust for cortex-m7 baremetal

Tue, 22/05/2018 - 10:35pm
This is a reminder for myself: if you want to install Rust for a bare-metal Cortex-M7 target, this seems to be a tier 3 platform:

https://forge.rust-lang.org/platform-support.html

Highlighting the relevant part:

Target                   std  rustc  cargo  notes
...
msp430-none-elf          *                  16-bit MSP430 microcontrollers
sparc64-unknown-netbsd   ✓    ✓             NetBSD/sparc64
thumbv6m-none-eabi       *                  Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi      *                  Bare Cortex-M4, M7
thumbv7em-none-eabihf    *                  Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi       *                  Bare Cortex-M3
...
x86_64-unknown-openbsd   ✓    ✓             64-bit OpenBSD
In order to enable the relevant support, use the nightly build and add the relevant target:
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)

If not using nightly, switch to that:

eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Add the needed target:
eddy@feodora:~/usr/src/rust-uc$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
  5.4 MiB /   5.4 MiB (100 %)   5.1 MiB/s ETA:   0 s               
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

installed targets for active toolchain
--------------------------------------

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)

Then compile with --target.
eddyp noreply@blogger.com Rambling around foo

Reproducible Builds: Weekly report #160

Tue, 22/05/2018 - 8:30am

Here’s what happened in the Reproducible Builds effort between Sunday May 13 and Saturday May 19 2018:

Packages reviewed and fixed, and bugs filed

In addition, build failure bugs were reported by Adrian Bunk (2) and Gilles Filippini (1).

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages.

reprotest development

reprotest is our tool to build software and check it for reproducibility.

  • kpcyrd:
  • Chris Lamb:
    • Update references to Alioth now that the repository has migrated to Salsa. (1, 2, 3)
jenkins.debian.net development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Levente Polyak and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Securing the container image supply chain

Thu, 17/05/2018 - 7:00pm

This article is part of a series on KubeCon Europe 2018.

"Security is hard" is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container's contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.

Image provenance

A first talk, by Adrian Mouat, provided a good introduction to the broader question of "establishing image provenance and security in Kubernetes" (video, slides [PDF]). Mouat compared software to food you get from the supermarket: "you can actually tell quite a lot about the product; you can tell the ingredients, where it came from, when it was packaged, how long it's good for". He explained that "all livestock in Europe have an animal passport so we can track its movement throughout Europe and beyond". That "requires a lot of work, and time, and money, but we decided that this was worthwhile doing so that we know [our food is] safe to eat. Can we say the same thing about the software running in our data centers?" This is especially a problem in complex systems like Kubernetes; containers have inherent security and licensing concerns, as we have recently discussed.

You should be able to easily tell what is in a container: what software it runs, where it came from, how it was created, and if it has any known security issues, he said. Mouat also expects those properties to be provable and verifiable with strong cryptographic assertions. Kubernetes can make this difficult. Mouat gave a demonstration of how, by default, the orchestration framework will allow different versions of the same container to run in parallel. In his scenario, this is because the default image pull policy (IfNotPresent) might pull a new version on some nodes and not others. This problem arises because of an inconsistency between the way Docker and Kubernetes treat image tags; the former as mutable and the latter as immutable. Mouat said that "the default semantics for pulling images in Kubernetes are confusing and dangerous." The solution here is to deploy only images with tags that refer to a unique version of a container, for example by embedding a Git hash or unique version number in the image tag. Obviously, changing the policy to AlwaysPullImages will also help in solving the particular issue he demonstrated, but will create more image churn in the cluster.

But that's only a small part of the problem; even if Kubernetes actually runs the correct image, how can you tell what is actually in that image? In theory, this should be easy. Docker seems like the perfect tool to create deterministic images that consist exactly of what you asked for: a clean and controlled, isolated environment. Unfortunately, containers are far from reproducible and the problem begins on the very first line of a Dockerfile. Mouat gave the example of a FROM debian line, which can mean different things at different times. It should normally refer to Debian "stable", but that's actually a moving target; Debian makes new stable releases once in a while, and there are regular security updates. So what first looks like a static target is actually moving. Many Dockerfiles will happily fetch random source code and binaries from the network. Mouat encouraged people to at least checksum the downloaded content to prevent basic attacks and problems.

Unfortunately, all this still doesn't get us reproducible builds since container images include file timestamps, build identifiers, and image creation time that will vary between builds, making container images hard to verify through bit-wise comparison or checksums. One solution there is to use alternative build tools like Bazel that allow you to build reproducible images. Mouat also added that there is "tension between reproducibility and keeping stuff up to date" because using hashes in manifests will make updates harder to deploy. By using FROM debian, you automatically get updates when you rebuild that container. Using FROM debian:stretch-20180426 will get you a more reproducible container, but you'll need to change your manifest regularly to follow security updates. Once we know what is in our container, there is at least a standard in the form of the OCI specification that allows attaching annotations to document the contents of containers.

Another problem is making sure containers are up to date, a "weirdly hard" question to answer according to Mouat: "why can't I ask my registry [if] there is new version of [a] tag, but as far as I know, there's no way you can do that." Mouat literally hand-waved at a slide showing various projects designed to scan container images for known vulnerabilities, introducing Aqua, Clair, NeuVector, and Twistlock. Mouat said we need a more "holistic" solution than the current whack-a-mole approach. His company is working on such a product called Trow, but not much information about it was available at the time of writing.

The long tail of the supply chain

Verifying container images is exactly the kind of problem Notary is designed to solve. Notary is a server "that allows anyone to have trust over arbitrary collections of data". In practice, that can be used by the Docker daemon as an additional check before fetching images from the registry. This allows operators to approve images with cryptographic signatures before they get deployed in the cluster.

Notary implements The Update Framework (TUF), a specification covering the nitty-gritty details of signatures, key rotation, and delegation. It keeps signed hashes of container images that can be used for verification; it can be deployed by enabling Docker's "content trust" in any Docker daemon, or by configuring a custom admission controller with a web hook in Kubernetes. In another talk (slides [PDF], video) Liam White and Michael Hough covered the basics of Notary's design and how it interacts with Docker. They also introduced Portieris as an admission controller hook that can implement a policy like "allow any image from the LWN Docker registry as long as it's signed by your favorite editor". Policies can be scoped by namespace as well, which can be useful in multi-tenant clusters. The downside of Portieris is that it supports only IBM Cloud Notary servers because the images need to be explicitly mapped between the Notary server and the registry. The IBM team knows only about how to map its own images but the speakers said they were open to contributions there.

A limitation of Notary is that it looks only at the last step of the build chain; in itself, it provides no guarantees on where the image comes from, how the image was built, or what it's made of. In yet another talk (slides [PDF], video), Wendy Dembowski and Lukas Puehringer introduced a possible solution to that problem: two projects that work hand-in-hand to provide end-to-end verification of the complete container supply chain. Puehringer first introduced the in-toto project as a tool to authenticate the integrity of individual build steps: code signing, continuous integration (CI), and deployment. It provides a specification for "open and extensible" metadata that certifies how each step was performed and the resulting artifacts. This could be, at the source step, as simple as a Git commit hash or, at the CI step, a build log and artifact checksums. All steps are "chained" as well, so that you can track which commit triggered the deployment of a specific image. The metadata is cryptographically signed by role keys to provide strong attestations as to the provenance and integrity of each step. The in-toto project is supervised by Justin Cappos, who also works on TUF, so it shares some of its security properties and integrates well with the framework. Each step in the build chain has its own public/private key pair, with support for role delegation and rotation.

In-toto is a generic framework allowing a complete supply chain verification by providing "attestations" that a given artifact was created by the right person using the right source. But it does not necessarily provide the hooks to do those checks in Kubernetes itself. This is where Grafeas comes in, by providing a global API to read and store metadata. That can be package versions, vulnerabilities, license or vulnerability scans, builds, images, deployments, and attestations such as those provided by in-toto. All of those can then be used by the Kubernetes admission controller to establish a policy that regulates image deployments. Dembowski referred to this tutorial by Kelsey Hightower as an example configuration to integrate Grafeas in your cluster. According to Puehringer: "It seems natural to marry the two projects together because Grafeas provides a very well-defined API where you can push metadata into, or query from, and is well integrated in the cloud ecosystem, and in-toto provides all the steps in the chain."

Dembowski said that Grafeas is already in use at Google and it has been found useful to keep track of metadata about containers. Grafeas can keep track of what each container is running, who built it, when (sometimes vulnerable) code was deployed, and make sure developers do not ship containers built on untrusted development machines. This can be useful when a new vulnerability comes out and administrators scramble to figure out if or where affected code is deployed.

Puehringer explained that in-toto's reference implementation is complete and he is working with various Linux distributions to get them to use link metadata to have their package managers perform similar verification.

Conclusion

The question of container trust hardly seems resolved at all; the available solutions are complex and would be difficult to deploy for Kubernetes rookies like me. However, it seems that Kubernetes could make small changes to improve security and auditability, the first of which is probably setting the image pull policy to a more reasonable default. In his talk, Mouat also said it should be easier to make Kubernetes fetch images only from a trusted registry instead of allowing any arbitrary registry by default.

Beyond that, cluster operators wishing to have better control over their deployments should start looking into setting up Notary with an admission controller, maybe Portieris if they can figure out how to make it play with their own Notary servers. Considering the apparent complexity of Grafeas and in-toto, I would assume that those would probably be reserved for larger "enterprise" deployments, but who knows; Kubernetes may be complex enough as it is that people won't mind adding a service or two in there to improve its security. Keep in mind that complexity is an enemy of security, so operators should be careful when deploying solutions unless they have a good grasp of the trade-offs involved.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Home Automation: Raspberry Pi as MQTT temperature sensor

Wed, 16/05/2018 - 10:16pm

After setting up an MQTT broker I needed some data to feed it. It made sense to start basic and gradually build up bits and pieces that would form a bigger home automation setup. As it happened I have an old Raspberry Pi B (original rev 1 [2 if you look at /proc/cpuinfo] with 256MB RAM) and some DS18B20 1-Wire temperature sensors lying around, so I decided to make a heavyweight temperature sensor (long term I’m hoping to do something with some ESP8266s).

There are plenty of guides out there about hooking up the DS18B20 to the Pi; Adafruit has a reasonable one. The short version is that GPIO4 can be easily configured to be a 1-Wire bus and you hook the DS18B20 up with a 4k7Ω resistor across the data + 3v3 power pins. An initial check can be performed by enabling the DT overlay on the fly:

sudo dtoverlay w1-gpio

Detection of 1-Wire devices is automatic so you should see an entry in dmesg looking like:

w1_master_driver w1_bus_master1: Attaching one wire slave 28.012345678abcd crc ef

You can then do

$ cat /sys/bus/w1/devices/28-*/w1_slave
1e 01 4b 46 7f ff 0c 10 18 : crc=18 YES
1e 01 4b 46 7f ff 0c 10 18 t=17875

Which shows a current temperature of 17.875°C in my study. Once that’s working (and you haven’t swapped GND and DATA like I did on the first go) you can make the Pi boot up with 1-Wire enabled by adding a dtoverlay=w1-gpio line to /boot/config.txt. The next step is to get that fed into the MQTT broker. A simple Python client seemed like the right approach. Debian has paho-mqtt but sadly not in a stable release. Thankfully the python3-paho-mqtt 1.3.1-1 package in testing installed just fine on the Raspbian stretch image my Pi is running. I dropped the following in /usr/local/sbin/mqtt-temp:

#!/usr/bin/python3

import glob
import time
import paho.mqtt.publish as publish

Broker = 'mqtt-host'
auth = {
    'username': 'user2',
    'password': 'bar',
}
pub_topic = 'test/temperature'

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28-*')[0]
device_file = device_folder + '/w1_slave'

def read_temp():
    valid = False
    temp = 0
    with open(device_file, 'r') as f:
        for line in f:
            if line.strip()[-3:] == 'YES':
                valid = True
            temp_pos = line.find(' t=')
            if temp_pos != -1:
                temp = float(line[temp_pos + 3:]) / 1000.0
    if valid:
        return temp
    else:
        return None

while True:
    temp = read_temp()
    if temp is not None:
        publish.single(pub_topic, str(temp), hostname=Broker, port=8883,
                       auth=auth, tls={})
    time.sleep(60)

And finished it off with a systemd unit file - I know a lot of people complain about systemd, but it really does make it easy to just spin up a minimal service as a unique non-privileged user. The following went in /etc/systemd/system/mqtt-temp.service:

[Unit]
Description=MQTT Temperature sensor
After=network.target

[Service]
# Hack because Python can't cope with a DynamicUser with no HOME
Environment="HOME=/"
ExecStart=/usr/local/sbin/mqtt-temp
DynamicUser=yes
MemoryDenyWriteExecute=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Start it up and enable for subsequent reboots:

systemctl start mqtt-temp
systemctl enable mqtt-temp

And then watch on my Debian test box as before:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
test/temperature 17.875
test/temperature 17.937

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

PDP-8/e Replicated — Clocks And Logic

Wed, 09/05/2018 - 2:58pm

This is, at long last, part 3 of the overview of my PDP-8/e replica project and offers some details of the low-level implementation.

I have mentioned that I build my PDP-8/e replica from the original schematics. The great thing about the PDP-8/e is that it is still built in discrete logic rather than around a microprocessor, meaning that schematics of the actual CPU logic are available instead of just programmer’s documentation. After all, with so many chips on multiple boards something is bound to break down sooner or later and technicians need schematics to diagnose and fix that1. In addition, there’s a maintenance manual that very helpfully describes the workings of every little part of the CPU with excerpts of the full schematics, but it has some inaccuracies and occasionally outright errors in the excerpts so the schematics are still indispensable.

Originally I wanted to design my own logic and use the schematics as just another reference. Since the front panel is a major part of the project and I want it to visually behave as closely as possible to the real thing, I would have to duplicate the cycles exactly and generally work very close to the original design. I decided that at that point I might as well just reimplement the schematics.

However, some things can not be reimplemented exactly in the FPGA, other things are a bad fit and can be improved significantly with slight changes.

Clocks

Back in the day, the rule for digital circuits was multi-phase clocks of varying complexity, and the PDP-8/e is no exception in that regard. A cycle has four timing states of different lengths that each end on a timing pulse.

timing diagram from the "pdp8/e & pdp8/m small computer handbook 1972"

As can be seen, the timing states are active when they are at low voltage while the timing pulses are active high. There are plenty of quirks like this which I describe below in the Logic section.

In the PDP-8 the timing generator was implemented as a circular chain of shift registers with parallel outputs. At power on, these registers are reset to all 1 except for 0 in the first two bits. The shift operation is driven by a 20 MHz clock2 and the two zeros then circulate while logic combinations of the parallel outputs generate the timing signals (and also core memory timing, not shown in the diagram above).

What happens with these signals is that the timing states, together with control signals decoded from instructions, select data outputs and paths, while the associated timing pulse combined with timing state and instruction signals triggers D type flip-flops to save these results and present them on their outputs until they are next triggered with different input data.

Relevant for D type flip-flops is the rising edge of their clock input. The length of the pulse does not matter as long as it is not shorter than a required minimum. For example the accumulator register needs to be loaded at TP3 during major state E for a few different instructions. Thus the AC LOAD signal is generated as TP3 and E and (TAD instruction or AND instruction or …) and that signal is used as the clock input for all twelve flip-flops that make up the accumulator register.

However, having flip-flops clocked off timing pulses that are combined with different amounts of logic creates differences between sample times, which in turn makes it hard to push that kind of design to high cycle frequencies. Basically all modern digital circuits are synchronous. There, all flip-flops are clocked directly off the same global clock and get triggered at the same time. Since of course not all flip-flops should get new values at every cycle, they have an additional enable input so that the rising clock edge will only register new data when enable is also true3.

Naturally, FPGAs are tailored to this design paradigm. They have (a limited number of) dedicated clock distribution networks set apart from the regular signal routing resources, to provide low skew clock signals across the device. Groups of logic elements get only a very limited set of clock inputs for all their flip-flops. While it is certainly possible to run the same scheme as the original in an FPGA, it would be an inefficient use of resources and very likely make automated timing analysis difficult by requiring lots of manual specification of clock relationships between registers.

So while I do use 20 MHz as the base clock in my timing generator and generate the same signals in my design, I also provide this 20 MHz as the common clock to all logic. Instead of registers triggering on the timing pulse rising edges they get the timing pulses as enables. One difference resulting from that is registers aren’t triggered by the rising edge of a pulse anymore but will trigger on every clock cycle where the pulse is active. The original pulses are two clock cycles long and extend into the following time state so the correct data they picked up on the first cycle would be overwritten by wrong data on the second. I simply shortened the timing pulses to one clock cycle to adapt to this.

timing signals from simulation showing long and fast cycle

To reiterate, this is all in the interest of matching the original timing exactly. Analysis by the synthesis tool shows that I could easily push the base clock well over twice the current speed, and that’s already with it assuming that everything has to settle within one clock cycle as I’ve not specified multicycle paths. Meaning I could shorten the timing states to a single cycle4 for an overall more than 10× acceleration on the lowest speed grade of the low-end FPGA I’m using.

Logic

To save on logic, many parts with open collector outputs were used in the PDP-8. Instead of driving high or low voltage to represent zeros and ones, an open collector only drives either low voltage or leaves the line alone. Many outputs can then be simply connected together as they can’t drive conflicting voltages. Return to high voltage in the absence of outputs driving low is accomplished by a resistor to positive supply somewhere on the signal line.

The effect is that the connection of outputs itself forms a logic combination in that the signal is high when none of the gates drive low and it’s low when any number of gates drive low. Combining that with active low signalling, where a low voltage represents active or 1, the result is a logical OR combination of all outputs (called wired OR since no logic gates are involved).

The designers of the PDP-8/e made extensive use of that. The majority of signals are active low, marked with L after their name in the schematics. Some signals don’t have multiple sources and can be active high where it’s more convenient, those are marked with H. And then there are many signals that carry no indication at all and some that miss the indication in the maintenance manual just to make a reimplementer’s life more interesting.

excerpt from the PDP-8/e CPU schematic, generation of the accumulator load (AC LOAD L) signal can be seen on the right

As an example in this schematic, let’s look at AC LOAD L which triggers loading the accumulator register from the major registers bus. It’s a wired OR of two NAND outputs, both in the chip labeled E15, and with pull-up resistor R12 to +5 V. One NAND combines BUS STROBE and C2, the other TP3 and an OR in chip E7 of a bunch of instruction related signals. For comparison, here’s how I implemented it in VHDL:

ac_load <= (bus_strobe and not c2) or
           (TP3 and E and ir_TAD) or
           (TP3 and E and ir_AND) or
           (TP3 and E and ir_DCA) or
           (TP3 and F and ir_OPR);

FPGAs don’t have internal open collector logic5 and any output of a logic gate must be the only one driving a particular line. As a result, all the wired OR must be implemented with explicit ORs. Without the need to have active low logic I consistently use active high everywhere, meaning that the logic in my implementation is mostly inverted compared to the real thing. The deviation of ANDing TP3 with every signal instead of just once with the result of the OR is again due to consistency; I use the "<timing> and <major state> and <instruction signal>" pattern a lot.

One difficulty with wired OR is that it is never quite obvious from a given section of the schematics what all inputs to a given gate are. You may have a signal X being generated on one page of the schematic and a line with signal X as an input to a gate on another page, but that doesn’t mean there isn’t something on yet another page also driving it, or that it isn’t also present on a peripheral port6.

Some of the original logic is needed only for electrical reasons, such as buffers which are logic gates that do not combine multiple inputs but simply repeat their inputs (maybe inverted). Logic gate outputs can only drive so many inputs, so if one signal is widely used it needs buffers. Inverters are commonly used for that in the PDP-8. BUS STROBE above is one example, it is the inverted BUS STROBE L found on the backplane. Another is BTP3 (B for buffered) which is TP3 twice inverted.

Finally, some additional complexity is owed to the fact that the 8/e is made of discrete logic chips and that these have multiple gates in a package, for example the 7401 with four 2-input NAND gates with open collector outputs per chip. In designing the 8/e, logic was sometimes not implemented straightforwardly but as more complicated equivalents if it means unused gates in existing chips could be used rather than adding more chips.

Summary

I started out saying that I am building an exact PDP-8/e replica from the schematics. As I have detailed, that doesn’t involve just taking every gate and its connections from the schematic and writing it down in VHDL. I am changing the things that can not be directly implemented in an FPGA (like wired OR) and leaving out things that are not needed in this environment (such as buffers). Nevertheless, the underlying logic stays the same and as a result my implementation has the exact same timing and behaviour even in corner cases.

Ultimately all this only applies to the CPU and closely associated units (arithmetic and address extension). Moving out to peripheral hardware, the interface to the CPU may be the only part that could be implemented from original schematics. After all, where the magnetic tape drive interface in the original was controlling the actual tape hardware the equivalent in the replica project would be accessing emulated tape storage.

This finally concludes the overview of my project. Its development hasn’t advanced as much as I expected around this time last year since I ended up putting the project aside for a long while. After returning to it, running the MAINDEC test programs revealed a bunch of stuff I forgot to implement or implemented wrong, which I had to fix. The optional Extended Arithmetic Element isn’t implemented yet; the Memory Extension & Time Share is now complete apart from some test failures I still need to debug. It is now reaching a state where I consider the design getting ready to be published.

  1. There are also test programs that exercise and check every single logic gate to help pinpoint a problem. Naturally they are also extremely helpful with verifying that a replica is in fact working exactly like the real thing. [return]
  2. Thus 50 ns, one cycle of the 20 MHz clock, is the granularity at which these signals are created. The timing pulses, for example, are 100 ns long. [return]
  3. This is simply implemented by presenting them their own output by a feedback path when enable is false so that they reload their existing value on the clock edge. [return]
  4. In fact I would be limited by the speed of the external SRAM I use and its 6 bit wide data connection, requiring two cycles for a 12 bit access. [return]
  5. The IO pins that interface to the outside world generally have the capability to switch disconnection at run time, allowing open collector and similar signaling. [return]
  6. Besides the memory and expansion bus, the 8/e CPU also has a special interface to attach the Extended Arithmetic Element. [return]
Andreas Bombe https://activelow.net/tags/pdo/ pdo on Active Low

Introducing Autodeb

Wed, 09/05/2018 - 6:00am

Autodeb is a new service that will attempt to automatically update Debian packages to newer upstream versions or to backport them. Resulting packages will be distributed in apt repositories.

I am happy to announce that I will be developing this new service as part of Google Summer of Code 2018 with Lucas Nussbaum as a mentor.

The program is currently nearing the end of the “Community Bonding” period, where mentors and their mentees discuss the details of the projects and try to set objectives.

Main goal

Automatically create and distribute package updates/backports

This is the main objective of Autodeb. These unofficial packages can be an alternative for Debian users that want newer versions of software faster than it is available in Debian.

The results of this experiment will also be interesting when looking at backports. If successful, it could be that a large number of packages can be backported automatically, bringing even more options for stable (or even oldstable) users.

We also hope that the information that is produced by the system can be used by Debian Developers. For example, Debian Developers may be more tempted to support backports of their packages if they know that they already build and that autopkgtests pass.

Other goals

Autodeb will be composed of infrastructure that is capable of building and testing a large number of packages. We intend to build it with two secondary goals in mind:

Test packages that were built by developers before they upload them to the archive

We intend to add a self-service interface so that our testing infrastructure can be used for purposes other than automatically updating packages. This can empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. For more details on this, see my previous testeduploads proposal.

Archive rebuilds / modifying the build and test environment

We would like to allow for building packages with a custom environment. For example, with a new version of GCC or with a set of packages. Ideally, this would also be a self-service interface where Developers can set up their environment and then upload packages for testing. Instead of individual package uploads, the input of packages to build could be a whole apt repository from which all source packages would be downloaded and rebuilt, with filters to select the packages to include or exclude.

What’s next

The next phase of Google Summer of Code is the coding period; it begins on May 14 and ends on August 6. However, there are a number of areas where work has already begun:

  1. Main repository: code for the master and the worker components of the service, written in golang.
  2. Debian packaging: Debian packaging for autodeb. Contains scripts that will publish packages to pages.debian.net.
  3. Ansible scripts: this repository contains ansible scripts to provision the infrastructure at auto.debian.net.
Status update at DebConf

I have submitted a proposal for a talk on Autodeb at DebConf. By that time, the project should have evolved from idea to prototype and it will be interesting to discuss the things that we have learned:

  • How many packages can we successfully build?
  • How many of these packages fail tests?

If all goes well, it will also be an opportunity to officially present our self-service interface to the public so that the community can start using it to test packages.

In the meantime, feel free to get in touch with us by email, on OFTC at #autodeb, or via issues on salsa.

Alexandre Viau https://alexandreviau.net/blog/ Alexandre Viau’s blog

Lessons from OpenStack Telemetry: Incubation

Thu, 12/04/2018 - 2:50pm

It was mostly around that time in 2012 that I and a couple of fellow open-source enthusiasts started working on Ceilometer, the first piece of software from the OpenStack Telemetry project. Six years have passed since then. I've been thinking about this blog post for several months (even years, maybe), but lacked the time and the hindsight needed to lay out my thoughts properly. In a series of posts, I would like to share my observations about the Ceilometer development history.

To understand the full picture here, I think it is fair to start with a small retrospective on the project. I'll try to keep it short, and it will be unmistakably biased, even if I'll do my best to stay objective – bear with me.

Incubation

Early 2012, I remember discussing with the first Ceilometer developers the right strategy to solve the problem we were trying to address. The company I worked for wanted to run a public cloud, and billing the resources usage was at the heart of the strategy. The fact that no components in OpenStack were exposing any consumption API was a problem.

We debated about how to implement those metering features in the cloud platform. There were two natural solutions: either achieving some resource accounting report in each OpenStack project or building new software on the side, covering for the lack of those functionalities.

At that time there were fewer than a dozen OpenStack projects. Still, the burden of patching every project seemed like an infinite task. Having code reviewed and merged in the most significant projects took several weeks, which, considering our timeline, was a show-stopper. We wanted to go fast.

Pragmatism won, and we started implementing Ceilometer using the features each OpenStack project was offering to help us: very little.

Our first and obvious candidate for usage retrieval was Nova, where Ceilometer aimed to retrieve statistics about virtual machine instance utilization. Nova offered no API to retrieve those data – and still doesn't. Since it was out of the question to wait several months to have such an API exposed, we took the shortcut of polling libvirt, Xen or VMware directly from Ceilometer.

That's precisely how temporary hacks become historical design. Implementing this design broke the basis of the abstraction layer that Nova aims to offer.

As time passed, several leads were followed to mitigate those trade-offs in better ways. But on each development cycle, getting anything merged in OpenStack became harder and harder. It went from patches taking a long time to review, to having a long list of requirements to merge anything. Soon, you'd have to create a blueprint to track your work, write a full specification linked to that blueprint, with that specification being reviewed itself by a bunch of the so-called core developers. The specification had to be a thorough document covering every aspect of the work, from the problem that was trying to be solved, to the technical details of the implementation. Once the specification was approved, which could take an entire cycle (6 months), you'd have to make sure that the Nova team would make your blueprint a priority. To make sure it was, you would have to fly a few thousand kilometers from home to an OpenStack Summit, and orally argue with developers in a room filled with hundreds of other folks about the urgency of your feature compared to other blueprints.

An OpenStack design session in Hong-Kong, 2013

Even if you passed all of those ordeals, the code you'd send could be rejected, and you'd get back to updating your specification to shed light on some particular points that confused people. Back to square one.

Nobody wanted to play that game. Not in the Telemetry team at least.

So Ceilometer continued to grow, surfing the OpenStack hype curve. More developers were joining the project every cycle – each one with its list of ideas, features or requirements cooked by its in-house product manager.

But many features did not belong in Ceilometer. They should have been in different projects. Ceilometer was the first OpenStack project to pass through the OpenStack Technical Committee incubation process that existed before the rules were relaxed.

This incubation process was uncertain, long, and painful. We had to justify the existence of the project, and many technical choices that have been made. Where we were expecting the committee to challenge us on fundamental decisions, such as breaking abstraction layers, it was mostly nit-picking about Web frameworks or database storage.

Consequences

The rigidity of the process discouraged anyone from starting a new project for anything related to telemetry. Therefore, everyone went ahead and started dumping their ideas in Ceilometer itself. With more than ten companies interested, the frictions were high, and the project was at some point pulled apart in all directions. This phenomenon was happening to every OpenStack project anyway.

On the one hand, many contributions brought marvelous pieces of technology to Ceilometer. We implemented several features you still don't find in any metering system. Dynamically sharded, automatic horizontally scalable polling? Ceilometer has had that for years, whereas you can't have it in, e.g., Prometheus.

On the other hand, there were tons of crappy features. Half-baked code merged because somebody needed to ship something. As the project grew further, some of us developers started to feel that this was getting out of control and could be disastrous. The technical debt was growing as fast as the project was.

Several technical choices made were definitely bad. The architecture was a mess; the messaging bus was easily overloaded, the storage engine was non-performant, etc. People would come to me (as I was the Project Team Leader at that time) and ask why the REST API would need 20 minutes to reply to an autoscaling request. The willingness to solve everything for everyone was killing Ceilometer. It's around that time that I decided to step out of my role of PTL and started working on Gnocchi to, at least, solve one of our biggest challenges: efficient data storage.

Ceilometer was also suffering from the poor quality of many OpenStack projects. As Ceilometer retrieves data from a dozen other projects, it has to use their interfaces for data retrieval (API calls, notifications) – or sometimes palliate their lack of any interface. Users were complaining about Ceilometer malfunctioning while the root of the problem was actually on the other side, in the polled project. The polling agent would try to retrieve the list of virtual machines running on Nova, but just listing and retrieving this information required several HTTP requests to Nova. And those basic retrieval requests would overload the Nova API. The API does not offer any genuine interface from which the data could be retrieved in a small number of calls. And it had terrible performance.
From the point of view of the users, the load was generated by Ceilometer. Therefore, Ceilometer was the problem. We had to imagine new ways of circumventing tons of limitations from our siblings. That was exhausting.

At its peak, during the Juno and Kilo releases (early 2015), the code size of Ceilometer reached 54k lines of code, and the number of committers reached 100 individuals (20 regulars). We had close to zero happy users, operators were hating us, and everybody was wondering what the hell was going on in those developers' minds.

Nonetheless, despite the impediments, most of us had a great time working on Ceilometer. Nothing's ever perfect. I've learned tons of things during that period, which were actually mostly non-technical. Community management, social interactions, human behavior and politics were at the heart of the adventure, offering a great opportunity for self-improvement.

In the next blog post, I will cover what happened in the years that followed that booming period, up until today. Stay tuned!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Bursary applications for DebConf18 are closing in 48 hours!

Thu, 12/04/2018 - 12:30pm

If you intend to apply for a DebConf18 bursary and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf18 will be accepted until April 13th at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the wiki.

See you in Hsinchu!

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Streaming the Norwegian ultimate championships

Thu, 12/04/2018 - 1:36am

As the Norwegian indoor frisbee season is coming to a close, the Norwegian ultimate nationals are coming up, too. Much like in Trøndisk 2017, we'll be doing the stream this year, replacing a single-camera Windows/XSplit setup with a multi-camera free software stack based on Nageru.

The basic idea is the same as in Trøndisk; two cameras (one wide and one zoomed) for the main action and two static ones above the goal zones. (The hall has more amenities for TV productions than the one in Trøndisk, so a basic setup is somewhat simpler.) But there are so many tweaks:

  • We've swapped out some of the cameras for more suitable ones (the DSLRs didn't do too well under the flicker of the fluorescent tubes, for instance, and newer GoPros have rectilinear modes). And there's a camera on the commentators now, with side-by-side view as needed.

  • There are tally lights on the two human-operated cameras (new Nageru feature).

  • We're doing CEF directly in Nageru (new Nageru feature) instead of through CasparCG, to finally get those 60 fps buttery smooth transitions (and less CPU usage!).

  • HLS now comes out directly of Cubemap (new Cubemap feature) instead of being generated by a shell script using FFmpeg.

  • Speaking of CPU usage, we now have six cores instead of four, for more x264 oomph (we wanted to do 1080p60 instead of 720p60, but alas, even x264 at nearly superfast can't keep up when there's too much motion).

  • And of course, a ton of minor bugfixes and improvements based on our experience with Trøndisk—nothing helps as much as battle-testing.

For extra bonus, we'll be testing camera-over-IP from Android for interviews directly on the field, which will be a fun challenge for the wireless network. Nageru does have support for taking in IP streams through FFmpeg (incidentally, a feature originally added for the now-obsolete CasparCG integration), but I'm not sure if the audio support is mature enough to run in production yet—most likely, we'll do the reception with a laptop and use that as a regular HDMI input. But we'll see; thankfully, it's a non-essential feature this time, so we can afford to have it break. :-)

Streaming starts Saturday morning CEST (UTC+2), will progress until late afternoon, and then restart on Sunday with the playoffs (the final starts at 14:05). There will be commentary in a mix of Norwegian and English depending on the mood of the commentators, so head over to www.plastkast.no if you want to watch :-) Exact schedule on the page.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Debian LTS work, March 2018

Wed, 11/04/2018 - 10:41pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from February. I worked 15 hours and will again carry over 2 hours to April.

I made another two releases on the Linux 3.2 longterm stable branch (3.2.100 and 3.2.101), the latter including mitigations for Spectre on x86. I rebased the Debian package onto 3.2.101 but didn't upload an update to Debian this month. We will need to add gcc-4.9 to wheezy before we can enable all the mitigations for Spectre variant 2.

Ben Hutchings https://www.decadent.org.uk/ben/blog Better living through software

Debian SecureBoot Sprint 2018

Wed, 11/04/2018 - 5:01pm

Monday morning I gave back the keys to Office Factory Fulda, who sponsored the location for the SecureBoot Sprint from Thursday, 4th April to Sunday, 8th April. Apparently we left a pretty positive impression (we managed to clean up), so we are welcome again for future sprints.

The goal of this sprint was enabling SecureBoot in/for Debian, so that users who have SecureBoot enabled machines do not need to turn that off to be able to run Debian. That requires us to handle signing a certain set of packages in a defined way, handling it as automated as possible while ensuring that stuff is done in a safe/secure way.

Now add details like secure handling of keys, only signing pre-approved sets (to make abusing it harder), revocations, key rollovers, combine it all with the infrastructure and situation we have in Debian (say dak, buildd, a security archive with somewhat different rules of visibility, reproducibility, a huge set of architectures only some of which do SecureBoot, proper audit logging of signatures) and you end up with 7 people from different teams taking the whole first day just discussing and hashing out a specification. Plus some joining in virtually.

I’m not going into actual details of all that, as a sprint report will follow soon.

Friday to Sunday was used for actual implementation of the agreed solution. The actual dak changes turned out to not be too large, and thankfully Ansgar was on them, so I could take time to push the FTPTeams move to the new Salsa service forward. I still have a few of our less-important repositories to move, but that’s a simple process I will be doing during this week; the most important step was coming up with a sane way of using Salsa.

That does not mean the actual web interface, but getting code changes from there to the various Debian hosts we run our services on. In the past, we pushed the hosts directly, so all code changes appearing on them meant that someone who was in the right unix group on that machine made them appear.1 “Verified by ssh login” basically.

With Salsa, we now add a service that has a different set of administrators added on top. And a big piece of software too, with a huge possibility of bugs, worst case allowing random users access to our repositories. Which is a way larger problem area than “git push via ssh” as in the past, and as such more likely to be bad. If we blindly pull from a repository on such shared space, the confirmation “a FTPMaster said this code is good” is gone.

So it needs a way of adding that confirmation back, while still being able to use all the nice features that Salsa offers. Within Debian, what’s better than using already established ways of trusting something, gnupg created signatures?!

So how to go forward? I have been lucky, I did not need to entirely invent this on my own: Enrico had similar concerns for the New-Maintainer web pages. He set up CI to test his stuff and, if successful, installs the tested stuff on the NM machine, provided that the commit is signed by a key from a defined set.

Unfortunately for me, he deals with a Django app that listens somewhere and can be pushed to. No such thing for me: I have neither Django nor a service listening that I can tell about changes to fetch.

We also have to take care when a database schema upgrade needs to be done: there is no automatic deployment on database-using FTPMaster hosts for that, a human needs to trigger it.

So the solution I developed for us, which is in use on all hosts that we maintain code on, is implemented in our standard framework for regular jobs, cronscript.2

It turns out to live in multiple files (as usual with cronscript): the actual code is in deploy.functions and deploy.variables, and the order in which to call things is defined in deploy.tasks.

The cronscript wrapper around it takes care of setting up the environment and keeping logs, and we now run the deploy every few minutes, securely getting our code deployed.
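The verification idea itself is easy to sketch in shell. The following is not the actual deploy.functions code, just a minimal illustration of the principle with hypothetical paths and keyring locations: fetch the new commits, verify the signature on the tip commit against a keyring that contains only the approved keys, and only fast-forward the checkout if that verification succeeds.

# Hypothetical sketch, not the real cronscript code
cd /srv/ftp-master.debian.org/git/dak            # hypothetical checkout location
git fetch origin
# Only keys imported into this keyring count as "an FTPMaster said this is good"
export GNUPGHOME=/srv/ftp-master.debian.org/deploy-keyring    # hypothetical keyring
if git verify-commit origin/master >/dev/null 2>&1; then
    git merge --ff-only origin/master
else
    echo "refusing to deploy: tip commit not signed by an approved key" >&2
    exit 1
fi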

  1. Or someone abused root rights, but if you do not trust root, you have lost anyway, and there is no reason to think that any DSA member would do this. 

  2. A framework for FTPMaster scripts that ensures the same basic setup everywhere and makes it easy to call functions and stuff, with or without error checking, in background or foreground. Also easy to restart in the middle of a script run after breakage, as it keeps track of where it was. 

Joerg Jaspert https://blog.ganneff.de/ Ganneff's Little Blog

Preventing resume immediately after suspend on Dell Latitude 5580 (Debian testing)

Wed, 11/04/2018 - 1:14 PM

I’ve installed Debian buster (testing at the time of writing) on a new Dell Latitude 5580 laptop, and one annoyance I’ve found is that the laptop would almost always resume as soon as it was suspended.

AFAIU, the culprit seems to be the network card (Ethernet controller: Intel Corporation Ethernet Connection (4) I219-LM), which was configured with Wake-on-LAN (WoL) set to “magic packet” mode (ethtool enp0s31f6 | grep Wake-on would return ‘g’). One hint is that grep enabled /proc/acpi/wakeup returns GLAN.

That can be changed for the rest of the session with a command like ethtool -s enp0s31f6 wol d.

But among the many hits in so many tutorials and forum posts, I had a hard time figuring out whether there was a preferred way to make this persistent.

My best hit so far is to add a file named /etc/systemd/network/50-eth0.link containing:

[Match]
Driver=e1000e

[Link]
WakeOnLan=off

The driver can be found by checking udev settings as reported by udevadm info -a /sys/class/net/enp0s31f6
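To double-check that the setting actually sticks, the same ethtool query from above can be used again after a reboot (assuming the interface keeps the name enp0s31f6):

sudo ethtool enp0s31f6 | grep Wake-on
# the "Wake-on:" line should now show "d" instead of "g"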

There are other ways to do that with systemd, but so far it seems to be working for me. Hth,

Olivier Berger https://www-public.tem-tsp.eu/~berger_o/weblog debian-en – WebLog Pro Olivier Berger

Bread and data

Wed, 11/04/2018 - 11:01 AM

For the past two weeks I've mostly been baking bread. I'm not sure what made me decide to make some the first time, but it actually turned out pretty good, so I've been doing it every day or two ever since.

This is the first time I've made bread in the past 20 years or so - I recall in the past I got frustrated that it never rose, or didn't turn out well. I can't see that I'm doing anything differently, so I'll just write it off as younger-Steve being daft!

No doubt I'll get bored of the delicious bread in the future, but for the moment I've got a good routine going - juggling going to the shops, child-care, and making bread.

Bread I've made includes the following:

Beyond that I've spent a little while writing a simple utility to embed resources in golang projects, after discovering the tool I'd previously been using, go-bindata, had been abandoned.

In short you feed it a directory of files and it will generate a file static.go with contents like this:

files[ "data/index.html" ] = "<html>.... files[ "data/robots.txt" ] = "User-Agent: * ..."

It's a bit more complex than that, but not much. As expected getting the embedded data at runtime is trivial, and it allows you to distribute a single binary even if you want/need some configuration files, templates, or media to run.

For example, in the project I discussed in my previous post there is an HTTP server which serves a user interface based upon Bootstrap. I want the HTML files which make up that user interface to be embedded in the binary, rather than distributed separately.

Anyway, it's not unique, it was fun to write, and I've switched to using it now:

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

DRM, DRM, oh how I hate DRM...

Wed, 11/04/2018 - 6:43 AM

I love flexibility. I love when the rules of engagement are not set in stone and allow us to lead a full, happy, simple life. (Apologies to Felipe and Marianne for using their very nice sculpture for this rant. At least I am not desperately carrying a brick! ☺)

I have been very, very happy after I switched to a Thinkpad X230. This is the first computer I have with an option for a cellular modem, so after thinking about it a bit, I got myself one:

After waiting for a couple of weeks, it arrived in an unexciting little envelope straight from Hong Kong. If you look closely, you can even appreciate there's a line (just below the smaller barcode) that reads "Lenovo". I soon found out how to open this laptop (kudos to Lenovo for a very sensible and easy opening process, great documentation... So far, it's the "openest" computer I have had!) and installed my new card!

The process was decently easy, and after patting myself on the back, I eagerly turned on my computer... only to find the BIOS halting with the following message:

1802: Unauthorized network card is plugged in - Power off and remove the miniPCI network card (1199/6813). System is halted

So... I put everything back to its original state. Stupid DRM in what I felt was the openest laptop I have ever had. Gah.

Anyway... As you can see, I have a brand new cellular modem. I am willing to give it to the first person that offers me a nice beer in exchange, here in Mexico or wherever you happen to cross my path (just tell me so I bring the little bugger along!)

Of course, I even tried to get one of the nice volunteers to install Libreboot on my computer now that I was at LibrePlanet, which would have solved the issue. But they informed me that Libreboot only supports the (quite a bit older) X200 machines, not the X230.

gwolf http://gwolf.org Gunnar Wolf

My Free Software Activities in March 2018

Mon, 09/04/2018 - 11:58 PM

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
Debian Java
  • I spent most of my free time on Java packages because… OpenJDK 9 is now the default Java runtime environment in Debian! As of today I count 319 RC bugs (including bugs currently filed with severity normal that would qualify as serious), of which 227 are already resolved. That means one third of the Java team’s packages have to be adjusted for the new OpenJDK version. Java 9 comes with a new module system called Jigsaw. Undoubtedly it brings a lot of interesting new ideas, but it is also a major paradigm shift. For us mere packagers it means more work than any other version upgrade in the past. Let’s say we are a handful of regular contributors (I’m being generous), and we spend most of our time stabilizing the Java ecosystem in Debian to the point that we can build all of our packages again. Repeat for every new Debian release. Unfortunately, not much time is actually spent on packaging new and cool applications or libraries unless they are strictly required to fix a specific Java 9 issue. It just doesn’t feel right at the moment. Most upstreams are rather indifferent or relaxed when it comes to porting their applications to Java 9 because they can still use Java 8, so why can’t we? They don’t have to provide security support for five years and can make the switch to Java 9 much later. They can also cherry-pick certain versions of libraries, whereas we have to ensure that everything works with one specific version of a library. But that’s not all: Java 9 will not be shipped with Buster, and we even aim for OpenJDK 11! Releases of OpenJDK will be more frequent from now on (expect a new release every six months), and certain versions, like OpenJDK 11, will receive extended security support. One thing we can look forward to: apparently more commercial features of Oracle JDK will be merged into OpenJDK, and it appears the long-term goal is to make Oracle JDK and OpenJDK builds completely interchangeable. So maybe one day there will be only one free software JDK for everything and everyone? I hope so.
  • I worked on the following packages to address Java 9 or other bugs: activemq, snakeyaml, libjchart2d-java, jackson-dataformat-yaml, jboss-threads, jboss-logmanager, jboss-logging-tools, qdox2, wildfly-common, activemq-activeio, jackson-datatype-joda, antlr, axis, libitext5-java, libitext1-java, libitext-java, jedit, conversant-disruptor, beansbinding, cglib, undertow, entagged, jackson-databind, libslf4j-java, proguard, libhtmlparser-java, libjackson-json-java and sweethome3d (patch by Emmanuel Bourg)
  • New upstream versions: jboss-threads, okio, libokhttp-java, snakeyaml, robocode.
  • I NMUed jtb and applied a patch from Tiago Stürmer Daitx.
Debian LTS

This was my twenty-fifth month as a paid contributor and I have been paid to work 23.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 19.03.2018 until 25.03.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in imagemagick, libvirt, freeplane, exempi, calibre, gpac, ipython, binutils, libraw, memcached, mosquitto, sdl-image1.2, slurm-llnl, graphicsmagick, libslf4j-java, radare2, sam2p, net-snmp, apache2, ldap-account-manager, librelp, ruby-rack-protection, libvncserver, zsh and xerces-c.
  • DLA-1310-1. Issued a security update for exempi fixing 6 CVE.
  • DLA-1315-1. Issued a security update for libvirt fixing 2 CVE.
  • DLA-1316-1. Issued a security update for freeplane fixing 1 CVE.
  • DLA-1322-1. Issued a security update for graphicsmagick fixing 6 CVE.
  • DLA-1325-1. Issued a security update for drupal7 fixing 1 CVE.
  • DLA-1326-1. Issued a security update for php5 fixing 1 CVE.
  • DLA-1328-1. Issued a security update for xerces-c fixing 1 CVE.
  • DLA-1335-1. Issued a security update for zsh fixing 2 CVE.
  • DLA-1340-1. Issued a security update for sam2p fixing 5 CVE. I also prepared a security update for Jessie. (#895144)
  • DLA-1341-1. Issued a security update for sdl-image1.2 fixing 6 CVE.
Misc
  • I triaged all open bugs in imlib2 and forwarded the issues upstream. The current developer of imlib2 was very responsive and helpful. Thanks to Kim Woelders several longstanding bugs could be fixed.
  • There was also a new upstream release for xarchiver. Check it out!

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Migrating PET features to distro-tracker

Mon, 09/04/2018 - 3:30 PM

I joined the Debian Perl Team some time ago, and PET has helped me a lot to find work to do in the team context; it has also helped the whole team in our workflow. For those who do not know what PET is: it is “a collection of scripts that gather information about your (or your group’s) packages. It allows you to see in a bird’s eye view the health of hundreds of packages, instantly realizing where work is needed.” PET became an important project, since about 20 Debian teams were using it, including the Perl and Ruby teams in which I am more active.

In Cape Town, during DebConf16, I had a conversation with Raphaël Hertzog about the possibility of migrating PET features to distro-tracker. He is one of the distro-tracker maintainers, and we found some similarities between the two tools. However, after that I did not have enough time to push it forward. Then, after the migration from Alioth to Salsa, PET became almost unusable because a lot of it was built around Alioth. This gave me the motivation to get the migration idea off the drawing board and bring the PET features into distro-tracker's team visualization.

In the meantime, the Debian Outreach team published a GSoC call for mentors for this year. I was a Debian GSoC student in 2014 and 2015, and it was a great opportunity for me to join the community. With that in mind, and wishing to give this opportunity to others, I decided to become a mentor this year and proposed a project to implement the PET features in distro-tracker, called Improving distro-tracker to better support Debian Teams. We are now in the student selection phase and I have received great proposals. I am looking forward to the start of the program, to finally having the PET features available on tracker.debian.org, and of course to bringing new blood to the Debian Project, since that is the idea behind these outreach programs.

Lucas Kanashiro http://blog.kanashiro.xyz/ Lucas Kanashiro’s blog

New projects on Hosted Weblate

Mon, 09/04/2018 - 12:00 PM

Hosted Weblate also provides free hosting for free software projects. The hosting request queue had grown too long and waited for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate; recurring donations in particular are welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Michal Čihař https://blog.cihar.com/archives/debian/ Michal Čihař's Weblog, posts tagged by Debian

Securing WordPress with AppArmor

Sat, 31/03/2018 - 12:24 PM

WordPress is a very popular CMS. According to one report, 30% of websites use WordPress, which is an impressive feat.

Despite this popularity, WordPress is built upon PHP, which is often lacking in the security department. Add to this that the user running the webserver often has a fair bit of access, and that there is no distinction between the webserver code and the WordPress code, and you are setting yourself up for trouble.

So, let’s introduce something that can not only tell the difference between Apache and the WordPress running under it, but also limit what WordPress can access.

As the AppArmor wiki says “AppArmor is Mandatory Access Control (MAC) like security system for Linux. AppArmor confines individual programs to a set of files, capabilities, network access and rlimits…”.  AppArmor also has this concept of hats, so your webserver code (e.g. apache) can be one hat with one policy but the WordPress PHP code has another hat and therefore another policy. For some reason, AppArmor calls a policy a profile, so wherever you see profile translate that to policy.

The idea here is to limit what WordPress can access down to the files and directories it needs, and nothing more. What follows is how I have set up my system, but you may need to tweak it, especially for some plugins.

Change your hat

By default, Apache will run in its own AppArmor profile, called something like the “/usr/sbin/apache2” profile. As the authors of this profile do not know what you will run on the webserver, it is very permissive, and with the standard AppArmor setup it is also what the WordPress code will run under.

First, you need to install and enable the mod_apparmor Apache module. This module allows you to change which profile is used, depending on which directory or URL is being requested. The link for mod_apparmor describes how to do this.
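On Debian, installing and enabling the module boils down to something like the following (package and module names as I understand them; adjust if your release differs):

sudo apt install libapache2-mod-apparmor apparmor apparmor-utils
sudo a2enmod apparmor
sudo systemctl restart apache2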

Once you have the module enabled, you need to tell Apache for which directories you want the hat (or profile) changed, and the name of the new hat. I put the following into /etc/apache2/conf-available/wordpress and then ran “a2enconf wordpress”:

<Directory "/usr/share/wordpress"> Require all granted <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> <IfModule mod_apparmor.c> AAHatName wordpress </IfModule> </Directory> Alias /wp-content /var/lib/wordpress/wp-content/ <Directory /var/lib/wordpress > Require all granted <IfModule mod_apparmor.c> AAHatName wordpress </IfModule> </Directory>

Most of this configuration is pretty standard WordPress setup for Apache. The important differences are the AAHatName lines.

What we have done here is ensure that if Apache serves files from /usr/share/wordpress (where the WordPress code lives) or /var/lib/wordpress/wp-content (where things like plugins, themes and uploaded images live), it uses the wordpress sub-profile.

Defining the profile

Now that Apache switches hats for our WordPress directories, we need to create the corresponding profile. This will tell AppArmor what files and directories WordPress is allowed to access and how they are accessed. The obvious ones are the directories where the code and content live, but you will also need to include the log file locations.

This definition needs to sit “inside” the Apache profile proper. In Debian and most other systems, it is just a matter of creating a file in the /etc/apparmor.d/apache.d/ directory.

^wordpress {
  include <abstractions/apache2-common>
  include <abstractions/base>
  include <abstractions/nameservice>
  include <abstractions/php5>

  /var/log/apache2/*.log w,

  /etc/wordpress/config-*.php r,
  /usr/share/wordpress/** r,
  /usr/share/wordpress/.maintenance w,

  # Change "/var/lib/wordpress/wp-content" to whatever you set
  # WP_CONTENT_DIR in the
  # /etc/wordpress/config-*.php file
  /var/lib/wordpress/wp-content r,
  /var/lib/wordpress/wp-content/** r,
  /var/lib/wordpress/wp-content/uploads/** rw,
  /var/lib/wordpress/wp-content/upgrade/** rw,

  # Uncomment to permit plugins Install/Update via web
  /var/lib/wordpress/wp-content/plugins/** rw,
  # Uncomment to permit themes Install/Update via web
  #/var/lib/wordpress/wp-content/themes/** rw,

  # This is what PHP sys_get_temp_dir() returns
  /tmp/* rw,
}
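After creating that file, the parent Apache profile needs to be re-parsed so the new hat is picked up, and Apache reloaded. Assuming the stock profile file name (adjust the path if your Apache profile file is called something else), that is roughly:

sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.apache2
sudo systemctl reload apache2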

What we have here is a policy that basically says you can read the WordPress code and read the WordPress content. The plugins and themes sub-directories get their own lines because you can selectively permit write access to them if you want to update plugins and themes using the web GUI.

The /etc file glob is where the Debian package stores its configuration file. The surprise for me was the maintenance dot-file which is created when WordPress is updating some component. Without this write permission, it is unable to update plugins or do many other things.

Audit Log

So how do you know it’s working? The simplest way is to apply the policy and then see what appears in your auditd log (mine is at /var/log/audit/audit.log).
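A convenient way to watch for problems while clicking around the site is to follow that log and filter for AppArmor events, for example:

sudo tail -f /var/log/audit/audit.log | grep apparmor
# or, after the fact, look only at the denials:
sudo grep 'apparmor="DENIED"' /var/log/audit/audit.log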

Two main things can go wrong. The first is that the wrong profile gets used. I had this problem when I forgot to add the WordPress content directory.

Wrong Profile

type=AVC msg=audit(1522235033.386:43401): apparmor="ALLOWED" operation="open" profile="/usr/sbin/apache2//null-dropbear.xyz" name="/var/lib/wordpress/wp-content/plugins/akismet/akismet.php" pid=5036 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=33

So, what is AppArmor trying to say here? First, we have the wrong profile! It’s not wordpress, but “/usr/sbin/apache2//null-dropbear.xyz”, which basically says there was no specific sub-profile for this website, so the apache2 profile was used.

The apache2 profile is in complain mode, not enforce mode, which is why it says apparmor="ALLOWED" yet has denied_mask="r".

Adding that second <Directory> clause to use the wordpress AAHatName fixed this.

Profile missing entries

The second type of problem is that Apache switches to the correct profile, but you missed a line in the profile. Initially I didn’t know WordPress created a file in the top level of its code directory when undergoing maintenance. The log file showed:

type=AVC msg=audit(1522318023.409:51143): apparmor="DENIED" operation="mknod" profile="/usr/sbin/apache2//wordpress" name="/usr/share/wordpress/.maintenance" pid=16165 comm="apache2" requested_mask="c" denied_mask="c" fsuid=33 ouid=33

We have the correct profile here (wordpress is always a sub-profile of apache2). But we are getting a DENIED message because the profile initially didn’t permit the file /usr/share/wordpress/.maintenance to be created.

Adding that file to the profile and reloading the profile and Apache fixed this.

Additional Tweaks

The given profile will probably work for most WordPress installations. Make sure you change the code and content directories to wherever yours live. Also, this profile will not let you auto-update the WordPress code. For a Debian package this is a good thing: the Apache process does not write the new files, dpkg does, and it runs as root. If you are happy for the webserver to update PHP code that runs under it, you can change the permission to read/write for /usr/share/wordpress.

I imagine some plugins out there will need additional directories. I don’t use many, and none of the ones I use do anything odd, but there are plenty of odd plugins out there. Check your audit logs for any DENIED lines.

Craig http://dropbear.xyz Small Dropbear
