

DebConf18 thanks its sponsors!

Planet Debian - Thu, 02/08/2018 - 12:00pm

DebConf18 is taking place in Hsinchu, Taiwan, from July 29th to August 5th, 2018. It is the first Debian Annual Conference in Asia, with over 300 attendees and major advances for Debian and for Free Software in general.

Thirty-two companies have committed to sponsor DebConf18! With a warm "thank you", we'd like to introduce them to you.

Our Platinum sponsor is Hewlett Packard Enterprise (HPE). HPE is an industry-leading technology company providing a comprehensive portfolio of products such as integrated systems, servers, storage, networking and software. The company offers consulting, operational support, financial services, and complete solutions for many different industries: mobile and IoT, data & analytics, and the manufacturing and public sectors, among others.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

We have four Gold sponsors:

  • Google, a technology company specializing in Internet-related services such as online advertising and search,
  • Infomaniak, Switzerland's largest web-hosting company, also offering live-streaming and video on demand services,
  • Collabora, which offers a comprehensive range of services to help its clients to navigate the ever-evolving world of Open Source, and
  • Microsoft, the American multinational technology company developing, licensing and selling computer software, consumer electronics, personal computers and related services.

As Silver sponsors we have:

  • credativ, a service-oriented company focusing on open-source software and also a Debian development partner,
  • Gandi, a French company providing domain name registration, web hosting, and related services,
  • Skymizer, a Taiwanese company focused on compiler and virtual machine technology,
  • Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software,
  • Brandorr Group, a company that develops, deploys and manages new or existing infrastructure in the cloud for customers of all sizes,
  • 3CX, a software-based, open standards IP PBX that offers complete unified communications,
  • Free Software Initiative Japan, a non-profit organization dedicated to supporting Free Software growth and development,
  • Texas Instruments, the global semiconductor company,
  • the Bern University of Applied Sciences, with over 6,800 students enrolled, located in the Swiss capital,
  • ARM, a multinational semiconductor and software design company, designers of the ARM processors,
  • Ubuntu, the Operating System delivered by Canonical,
  • Cumulus Networks, a company building web-scale networks using innovative, open networking technology,
  • Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare, and
  • Hudson-Trading, a company researching and developing automated trading algorithms using advanced mathematical techniques.

ISG.EE, Univention, Private Internet Access, Logilab, Dropbox and IBM are our Bronze sponsors.

And finally, SLAT (Software Liberty Association of Taiwan), The Linux Foundation, deepin, Altus Metrum, Evolix, BerryNet and Purism are our supporter sponsors.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, and to help and learn from each other, at DebConf18.

Laura Arjona Reina Bits from Debian

I have no friends or colleagues

Planet Debian - Thu, 02/08/2018 - 4:20am

Although it’s never fun to have the most important professional association in your field tell you that “you have no friends or colleagues,” being able to make one’s very first submission to screenshots of despair softens the blow a little.

Benjamin Mako Hill copyrighteous

Irony is the hygiene of the mind

Planet Debian - Wed, 01/08/2018 - 11:20am
While Elizabeth Bibesco might well be right about the mind, software cleanliness requires a different approach.

Previously I have written about code smells which give a programmer hints where to clean up source code. A different technique, which has recently become readily available, is using tool-chain based instrumentation to perform run time analysis.

At a recent NetSurf developer weekend Michael Drake mentioned a talk he had seen at the GUADEC conference which referenced the use of sanitizers for improving the security and correctness of programs.

Sanitizers differ from other code quality tools, such as compiler warnings and static analysis, in that they detect issues when the program is executed rather than by examining the source code. There are currently two commonly used instrumentation types:
  • address sanitizer — this instrumentation detects several common memory errors, such as "use after free".
  • undefined behaviour sanitizer — this instruments computations where the language standard does not clearly specify the behaviour, for example left shifts of negative values (ISO 9899:2011 6.5.7, bit-wise shift operators).
As these are runtime checks it is necessary to actually execute the instrumented code. Fortunately most of the NetSurf components have good unit test coverage so Daniel Silverstone used this to add a build target which runs the tests with the sanitizer options.

The previous investigation of this technology had been unproductive because of the immaturity of support in our CI infrastructure. This time the tool chain could be updated to be sufficiently robust to implement the technique.

Jobs were then added to the CI system to build this new target for each component in a similar way to how the existing coverage reports are generated. This resulted in failed jobs for almost every component which we proceeded to correct.

An example of how most issues were addressed is provided by Daniel fixing the bitmap library. Most of the fixes ensured correct type promotion in bit manipulation, however the address sanitizer did find a real out of bounds access when a malformed BMP header is processed. This is despite this library being run with a fuzzer and electric fence for many thousands of CPU hours previously.

Although we did find a small number of real issues, the majority of the fixes were to tests which failed to correctly clean up the resources they used. This seems to parallel what I observed with other run time testing, like AFL and Valgrind, in that the test environment itself often has the largest impact on detected issues to begin with.

In conclusion, it appears that an instrumented build combined with our existing unit tests gives us another tool to help improve our code quality. Given the very low amount of engineering time the NetSurf project has available, automated checks like these are a good way to help us avoid introducing issues. Vincent Sanders Vincents Random Waffle

Sound of Cicada.

Planet Debian - Wed, 01/08/2018 - 1:40am
Sound of Cicada. I hear DebConf is happening in Taipei, and I send my wishes to those who are there. I haven't done much Debian work recently, and I hope I can do something in the future.

Junichi Uekawa Dancer's daily hackings

Sharing images with friends and family using RSS and EXIF/XMP metadata

Planet Debian - Tue, 31/07/2018 - 11:30pm

For a while now, I have looked for a sensible way to share images with my family using a self-hosted solution, as it is unacceptable to place images from my personal life under the control of strangers working for data hoarders like Google or Dropbox. The last few days I have drafted an approach that might work out, and I would like to share it with you. The idea is to publish images on a server under my control, and to point some Internet-connected display units at the published images using a free and open standard. As my primary language is not limited to ASCII, I need to store metadata using UTF-8.

Many years ago, I hoped to find a digital photo frame capable of reading an RSS feed with image references (aka using the <enclosure> RSS tag), but was unable to find a current supplier of such frames. In the end I gave up that approach.

Some months ago, I discovered that XScreensaver is able to read images from an RSS feed, and used it to set up a screen saver on my home info screen, showing images from the Daily images feed from NASA. This proved to work well. More recently I discovered that Kodi (both using OpenELEC and LibreELEC) provides the Feedreader screen saver, capable of reading an RSS feed with images and news. For fun, I used it this summer to test Kodi on my parents' TV by hooking up a Raspberry Pi unit with LibreELEC, and wanted to provide them with a screen saver showing selected pictures of mine.

Armed with motivation and a test photo frame, I set out to generate an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created /var/www/html/privatepictures/, and wrote a small Perl script to extract title and description metadata from the photo files and generate the RSS file. I ended up using Perl instead of Python, as the libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP tags I ended up using, while python3-exif did not. The relevant EXIF tags only support ASCII, so I had to find better alternatives; XMP seems to have the support I need.

I am a bit unsure which EXIF/XMP tags to use, as I would like to use tags that can be easily added/updated using normal free software photo managing software. I ended up using the tags set using this exiftool command, as these tags can also be set using digiKam:

exiftool -headline='The RSS image title' \
         -description='The RSS image description.' \
         -subject+=for-family photo.jpeg

I initially tried the "-title" and "keyword" tags, but they were invisible in digiKam, so I changed to "-headline" and "-subject". I use the keyword/subject 'for-family' to flag that the photo should be shared with my family. Images with this keyword set are located and copied into my Freedombox for the RSS generating script to find.
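
For reference, the feed such a script produces is plain RSS 2.0 with one <enclosure> element per image, the headline and description tags above mapping onto <title> and <description>. This is an illustrative sketch of the structure, not the actual generated feed; the example.org URL and lengths are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Family pictures</title>
    <link>https://example.org/privatepictures/</link>
    <description>Pictures tagged for-family</description>
    <item>
      <title>The RSS image title</title>
      <description>The RSS image description.</description>
      <enclosure url="https://example.org/privatepictures/photo.jpeg"
                 type="image/jpeg" length="123456"/>
    </item>
  </channel>
</rss>
```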

Are there better ways to do this? Get in touch if you have better suggestions.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen Petter Reinholdtsen - Entries tagged english

My Work on Debian LTS (July 2018)

Planet Debian - Tue, 31/07/2018 - 3:51pm

This month, after a longer pause, I have started working again for the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.

This is my list of work done in July 2018:

  • Triage CVE issues of ~27 packages during my front desk week.
  • Upload gosa 2.7.4+reloaded2-13+deb9u1 (DLA-1436-1) to jessie-security.
  • Upload network-manager-vpnc (DLA-1454-1) to jessie-security.
  • At the end of the month, I started looking at one of two open issues in phpldapadmin. I have sent more details on this to the Debian LTS mailing list [1].



sunweaver sunweaver's blog

Free software activities in July 2018

Planet Debian - Tue, 31/07/2018 - 2:02pm

Here is my monthly update covering what I have been doing in the free software world during July 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

  • Performed a Non-Maintainer Upload of the GNU mtools package in order to address two reproducibility-related bugs (#900409 & #900410) that are blocking the inclusion of my previous merge request to the Debian Installer to make the installation images (ISO, hd-media, netboot, etc.) bit-for-bit reproducible.
  • Kept up to date. [...]
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • ogdi-dfsg: Please make the build (mostly) reproducible. (#903442)
    • schroot: Please make the documentation build reproducibly. (#902804)
  • I also submitted a patch to fix a specific reproducibility issue in v4l2loopback.
  • Worked on publishing our weekly reports. (#166, #167, #168, #169 & #170)
  • I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:
    • Support .deb archives that contain an uncompressed data tarball. (#903401)
    • Wrap jsondiff calls with a try-except to prevent errors becoming fatal. (#903447, #903449)
    • Clear the progress bar after completion. (#901758)
    • Support .deb archives that contain an uncompressed control tarball. (#903391)
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 11.75 hours on its sister Extended LTS project:

  • "Frontdesk" duties, triaging CVEs, responding to user questions/queries, etc.
  • Hopefully final updates to various scripts — both local and shared — to accommodate and support the introduction of the new "Extended LTS" initiative.
  • Issued DLA 1417-1 for ca-certificates, updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by systems.
  • Issued DLA 1419-1 for ruby-sprockets to fix a path traversal issue exploitable via file:// URIs.
  • Issued DLA 1420-1 for the Cinnamon Desktop Environment where a symlink attack could permit an attacker to overwrite an arbitrary file on the filesystem.
  • Issued DLA 1427-1 for znc to address a path traversal vulnerability via ../ filenames in "skin" names as well as to fix an issue where insufficient validation could allow writing of arbitrary values to the znc.conf config file.
  • Issued DLA 1443-1 for evolution-data-server to fix an issue where rejected requests to upgrade to a secure connection did not result in the termination of the connection.
  • Issued DLA 1448-1 for policykit-1, uploading Abhijith PA's fix for a denial of service vulnerability.
  • Issued ELA-13-1 for ca-certificates, also updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by wheezy systems.

Finally, I also sponsored elpy (1.22.0-1) & wolfssl (3.15.3+dfsg-1) and I orphaned dbus-cpp (#904426) and process-cpp (#904425) as they were no longer required as build-dependencies of Anbox.

Debian bugs filed
  • cod-tools: Missing build-depends. (#903689)
  • network-manager-openvpn: "Cannot specify device when activating VPN" error when connecting. (#903109)
  • ukwm: override_dh_auto_test doesn't respect nocheck build profile. (#904889)
  • ITP: gpg-encrypted-root — Encrypt root volumes with an OpenPGP smartcard. (#903163)
  • gnumeric: ssconvert segmentation faults. (#903194)
FTP Team

As a Debian FTP assistant I ACCEPTed 213 packages: ahven, apache-mode-el, ats2-lang, bar-cursor-el, bidiui, boxquote-el, capstone, cargo, clevis, cockpit, crispy-doom, cyvcf2, debian-gis, devscripts-el, elementary-xfce, emacs-pod-mode, emacs-session, eproject-el, feedreader, firmware-nonfree, fwupd, fwupdate, gmbal, gmbal-commons, gmbal-pfl, gnome-subtitles, gnuastro, golang-github-avast-retry-go, golang-github-gdamore-encoding, golang-github-git-lfs-gitobj, golang-github-lucasb-eyer-go-colorful, golang-github-smira-go-aws-auth, golang-github-ulule-limiter, golang-github-zyedidia-clipboard, graphviz-dot-mode, grub2, haskell-iwlib, haskell-lzma, hyperscan, initsplit-el, intel-ipsec-mb, intel-mkl, ivulncheck, jaxws-api, jitterentropy-rngd, jp, json-c, julia, kitty, leatherman, leela-zero, lektor, libanyevent-fork-perl, libattribute-storage-perl, libbio-tools-run-alignment-clustalw-perl, libbio-tools-run-alignment-tcoffee-perl, libcircle-be-perl, libconvert-color-xterm-perl, libconvert-scalar-perl, libfile-copy-recursive-reduced-perl, libfortran-format-perl, libhtml-escape-perl, libio-fdpass-perl, libjide-oss-java, libmems, libmodule-build-pluggable-perl, libmodule-build-pluggable-ppport-perl, libnet-async-irc-perl, libnet-async-tangence-perl, libnet-cidr-set-perl, libperl-critic-policy-variables-prohibitlooponhash-perl, libppix-quotelike-perl, libpqxx, libproc-fastspawn-perl, libredis-fast-perl, libspatialaudio, libstring-tagged-perl, libtickit-async-perl, libtickit-perl, libtickit-widget-scroller-perl, libtickit-widget-tabbed-perl, libtickit-widgets-perl, libu2f-host, libuuid-urandom-perl, libvirt-dbus, libxsmm, lief, lightbeam, limesuite, linux, log4shib, mailscripts, mimepull, monero, mutter, node-unicode-data, octavia, octavia-dashboard, openstack-cluster-installer, osmo-iuh, osmo-mgw, osmo-msc, pg-qualstats, pg-stat-kcache, pgzero, php-composer-xdebug-handler, plasma-browser-integration, powerline-gitstatus, ppx-tools-versioned, pyside2, 
python-certbot-dns-gehirn, python-certbot-dns-linode, python-certbot-dns-sakuracloud, python-cheroot, python-django-dbconn-retry, python-fido2, python-ilorest, python-ipfix, python-lupa, python-morph, python-pygtrie, python-stem, pywws, r-cran-callr, r-cran-extradistr, r-cran-pkgbuild, r-cran-pkgload, r-cran-processx, rawtran, ros-ros-comm, ruby-bindex, ruby-marcel, rust-ar, rust-arrayvec, rust-atty, rust-bitflags, rust-bytecount, rust-byteorder, rust-chrono, rust-cloudabi, rust-crossbeam-utils, rust-csv, rust-csv-core, rust-ctrlc, rust-dns-parser, rust-dtoa, rust-either, rust-encoding-rs, rust-filetime, rust-fnv, rust-fuchsia-zircon, rust-futures, rust-getopts, rust-glob, rust-globset, rust-hex, rust-httparse, rust-humantime, rust-idna, rust-indexmap, rust-is-match, rust-itoa, rust-language-tags, rust-lazy-static, rust-libc, rust-memoffset, rust-nodrop, rust-num-integer, rust-num-traits, rust-openssl-sys, rust-os-pipe, rust-rand, rust-rand-core, rust-redox-termios, rust-regex, rust-regex-syntax, rust-remove-dir-all, rust-same-file, rust-scoped-tls, rust-semver-parser, rust-serde, rust-sha1, rust-sha2-asm, rust-shared-child, rust-shlex, rust-string-cache-shared, rust-strsim, rust-tar, rust-tempfile, rust-termion, rust-time, rust-try-lock, rust-ucd-util, rust-unicode-bidi, rust-url, rust-vec-map, rust-void, rust-walkdir, rust-winapi, rust-winapi-i686-pc-windows-gnu, rust-winapi-x86-64-pc-windows-gnu, rustc, simavr, tabbar-el, tarlz, ukui-media, ukui-menus, ukui-power-manager, ukui-window-switch, ukwm, vanguards, weevely & xml-security-c.

I also filed wishlist-level bugs against the following packages with potential licensing improvements:

  • pgzero: Please inline/summarise web-based licensing discussion in debian/copyright. (#904674)
  • plasma-browser-integration: "This_file_is_part_of_KDE" in debian/copyright? (#903713)
  • rawtran: Please split out debian/copyright. (#904589)
  • tabbar-el: Please inline web-based comments in debian/copyright. (#904782)
  • feedreader: Please use wildcards in debian/copyright. (#904631)

Lastly, I filed 10 RC bugs against packages that had potentially-incomplete debian/copyright files against: ahven, ats2-lang, fwupd, ivulncheck, libmems, libredis-fast-perl, libtickit-widget-tabbed-perl, lief, rust-humantime & rust-try-lock.

Chris Lamb lamby: Items or syndication on Planet Debian.

Porting Coreboot to the 51NB X210

Planet Debian - Tue, 31/07/2018 - 7:28am
The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up to date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copying them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage so we tried that. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.
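
The fix amounted to listing both SPD EEPROM addresses for the FSP to probe. Structurally it looked something like this sketch; the array name and comment are placeholders, not the actual Coreboot/FSP identifiers:

```c
/* SMBus addresses of the DIMM SPD EEPROMs handed to the FSP for
   memory training.  0x50 is the socket the Librem 13 port already
   listed; 0x52 showed up in i2cdetect output and matches the second
   socket address used by several other boards in the tree.
   The identifier below is illustrative, not real Coreboot code. */
static const unsigned char dimm_spd_addresses[] = {
    0x50, /* first DIMM socket */
    0x52, /* second DIMM socket, missing from the single-slot port */
};
```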

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and if we duplicated that it would probably work. Unfortunately, figuring out what that code was was difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass io ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port io. x86 systems have two separate io buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.
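
The LDN protocol is simple enough to sketch. The index register 0x07 selecting the LDN and register 0x30 activating it are the conventional SuperIO registers; the unlock bytes and the LDN number below are placeholders, since the real values came from the vendor firmware disassembly:

```c
/* Sketch of enabling a SuperIO logical device (LDN) through the
   index/data port pair at 0x4e/0x4f.  On real hardware each entry
   would be replayed with outb(value, port); here we just record the
   sequence.  Unlock bytes and LDN are illustrative placeholders. */
#include <stddef.h>

struct io_write {
    unsigned short port;
    unsigned char value;
};

/* Fill seq with the writes needed to activate one LDN; returns the
   number of entries used (caller must provide room for 6). */
size_t superio_enable_ldn(unsigned char ldn, struct io_write *seq)
{
    size_t n = 0;
    seq[n++] = (struct io_write){0x4e, 0x87}; /* unlock (vendor-specific) */
    seq[n++] = (struct io_write){0x4e, 0x87};
    seq[n++] = (struct io_write){0x4e, 0x07}; /* index: LDN select register */
    seq[n++] = (struct io_write){0x4f, ldn};  /* data: which logical device */
    seq[n++] = (struct io_write){0x4e, 0x30}; /* index: activate register */
    seq[n++] = (struct io_write){0x4f, 0x01}; /* data: enable the device */
    return n;
}
```

Repeating that sequence once per logical device ("Working EC", "Working keyboard", and so on) is all the magic the vendor firmware was doing.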

Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest. (Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control.)

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time

Matthew Garrett Matthew Garrett

Free software log (June 2018)

Planet Debian - Tue, 31/07/2018 - 6:06am

Well, this is embarrassingly late, but not a full month late. That's what counts, right?

It's quite late partly because I haven't had the right combination of time and energy to do much free software work since the beginning of June. I did get a couple of releases out then, though. wallet 1.4 incorporated better Active Directory support and fixed a bunch of build system and configuration issues. And rra-c-util 7.2 includes a bunch of fixes to M4 macros and cleans up some test issues.

The July report may go missing for roughly the same reason. I have done some outside-of-work programming, but it's gone almost entirely into learning Rust and playing around with implementing various algorithms to understand them better. Rather fun, but not something that's good enough to be worth releasing. It's reinventing wheels intentionally to understand underlying concepts better.

I do have a new incremental release of DocKnot almost ready to go out (incorporating changes I needed for the last wallet release), but I'm not sure if that will make it into July.

Russ Allbery Eagle's Path

(Badly) cloning a TEMPer USB

Planet Debian - Mar, 31/07/2018 - 2:31pd

Having set up a central MQTT broker I’ve wanted to feed it extra data. The study temperature was a start, but not the most useful piece of data when working towards controlling the central heating. As it happens I have a machine in the living room hooked up to the TV, so I thought about buying something like a TEMPer USB so I could sample the room temperature and add it as a data source. And then I realised that I still had a bunch of Digispark clones and some Maxim DS18B20 1-Wire temperature sensors, and that I should build something instead.

I decided to try and emulate the TEMPer device rather than doing something unique. V-USB was pressed into service and some furious Googling took place to try and find out the details of how the TEMPer appears to the host in order to craft the appropriate USB/HID descriptors to present - actually finding some lsusb output was the hardest part. Looking at the code of various tools designed to talk to the device provided details of the different init commands that needed to be recognised and a basic skeleton framework (reporting a constant 15°C temperature) was crafted. Once that was working with the existing client code knocking up some 1-Wire code to query the DS18B20 wasn’t too much effort (I seem to keep implementing this code on various devices).

At this point things became less reliable. The V-USB code is an evil (and very clever) set of interrupt driven GPIO bit banging routines, working around the fact that the ATTiny doesn’t have a USB port. 1-Wire is a timed protocol, so the simple implementation involves a bunch of delays. To add to this the temper-python library decides to do a USB device reset if it sees a timeout. And does a double read to work around some behaviour of the real hardware. Doing a 1-Wire transaction directly in response to these requests causes lots of problems, so I implemented a timer to do a 1-Wire temperature check once every 10 seconds, and then the request from the host just returns the last value read. This is a lot more reliable, but still sees a few resets a day. It would be nice to fix this, but for the moment it’s good enough for my needs - I’m reading temperature once a minute to report back to the MQTT server, but it offends me to see the USB resets in the kernel log.

Additionally I had some problems with accuracy. Firstly it seems the batch of DS18B20s I have can vary by 1-2°C, so I ended up adjusting for this in the code that runs on the host. Secondly I mounted the DS18B20 on the Digispark board, as in the picture. The USB cable ensures it’s far enough away from the host (rather than sitting plugged directly into the back of the machine and measuring the PSU fan output temperature), but the LED on the board turned out to be close enough that it affected the reading. I have no need for it so I just ended up removing it.

The code is available locally and on GitHub in case it's of use or interest to anyone else.

(I’m currently at DebConf18 but I’ll wait until it’s over before I write it up, and I’ve been meaning to blog about this for a while anyway.)

Jonathan McDowell Noodles' Emptiness

Weblate 3.1

Planet Debian - Pre, 27/07/2018 - 3:30md

Weblate 3.1 has been released today. It mostly contains bug fixes, but there are a few new features as well, for example support for Amazon Translate.

Full list of changes:

  • Upgrades from versions older than 3.0.1 are not supported.
  • Allow overriding default commit messages from settings.
  • Improve webhooks compatibility with self-hosted environments.
  • Added support for Amazon Translate.
  • Compatibility with Django 2.1.
  • Django system checks are now used to diagnose problems with installation.
  • Removed support for the soon-to-be-shut-down Libravatar service.
  • New add-on to mark unchanged translations as needing edit.
  • Added support for jumping to a specific location while translating.
  • Downloaded translations can now be customized.
  • Improved calculation of string similarity in translation memory matches.
  • Added support for signing Git commits with GnuPG.


Weblate 3.1.1 was released as well, fixing a test suite failure on some setups:

  • Fixed a test suite failure on some setups.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either by commenting or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Michal Čihař Michal Čihař's Weblog, posts tagged by Debian

Debcamp activities 2018

Planet Debian - Pre, 27/07/2018 - 3:19md
Emacs 2018-07-23
  • NMUed cdargs
  • NMUed silversearcher-ag-el
  • Uploaded the partially unbundled emacs-goodies-el to Debian unstable
  • packaged and uploaded graphviz-dot-mode
  • packaged and uploaded boxquote-el
  • uploaded apache-mode-el
  • Closed bugs in graphviz-dot-mode that were fixed by the new version.
  • filed lintian bug about empty source package fields
  • packaged and uploaded emacs-session
  • worked on sponsoring tabbar-el
  • uploaded dh-make-elpa
Notmuch 2018-07-2[23]

Wrote a patch series to fix a bug noticed by seanw while he was working on a package inspired by the policy workflow.

  • Finished reviewing a patch series from dkg about protected headers.
  • Helped seanw find the right config option for his bug report

  • Reviewed change proposal from aminb, suggested some issues to watch out for.

  • Add test for threading issue.
Nullmailer 2018-07-25
  • uploaded nullmailer backport
  • add "envelopefilter" feature to remotes in nullmailer-ssh
Perl 2018-07-23 2018-07-24
  • Forwarded #704527 to
  • Uploaded libemail-abstract-perl to fix Vcs-* urls
  • Updated debhelper compat and Standards-Version for libemail-thread-perl
  • Uploaded libemail-thread-perl
  • fixed RC bug #904727 (blocking for perl transition)
Policy and procedures 2018-07-22
  • seconded #459427
  • seconded #813471
  • seconded #628515
  • read and discussed draft of salvaging policy with Tobi
  • Discussed policy bug about short form License and License-Grant
  • worked with Tobi on salvaging proposal
David Bremner blog/tags/planet

My PhD topic

Planet Debian - Pre, 27/07/2018 - 12:16md

I'm long overdue writing about what I'm doing for my PhD, so here goes. To stop this getting too long I haven't defined a lot of concepts so it might not make sense to folks without a Computer Science background. I'm happy to answer any questions in the comments.

I'm investigating whether there are advantages to building a distributed stream processing system using pure functional programming; specifically, whether the reasoning abilities one has about purely functional systems allow us to build efficient stream processing systems.

We have a proof-of-concept of a stream processing system built using Haskell called STRIoT (Stream Processing for IoT). Via STRIoT, a user can define a graph of stream processing operations from a set of 8 purely functional operators. The chosen operators have well-understood semantics, so we can apply strong reasoning to the user-defined stream graph. STRIoT supports partitioning a stream graph into separate sub-graphs which are distributed to separate nodes, interconnected via the Internet. The examples provided with STRIoT use Docker and Docker Compose for the distribution.

The area I am currently focussing on is whether and how STRIoT could rewrite the stream processing graph, preserving its functional behaviour but improving its performance against one or more non-functional requirements: for example, making it perform faster, take up less memory, or satisfy a more complex requirement such as maximising battery life for a battery-operated component.

Pure FP gives us the ability to safely rewrite chunks of programs by applying equational reasoning. For example, we can always replace the left-hand side of this equation by the right-hand side, which is functionally equivalent, but more efficient in both time and space terms:

map f . map g = map (f . g)
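In Python terms (just a sketch of the equation above; STRIoT's actual rewrites operate on Haskell stream graphs), fusing the two maps into one traversal gives the same result:

```python
def compose(f, g):
    """Function composition: (f . g)(x) == f(g(x))."""
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

xs = range(10)
two_passes = list(map(increment, map(double, xs)))    # map f . map g
one_pass = list(map(compose(increment, double), xs))  # map (f . g)
assert two_passes == one_pass
print(one_pass[:5])
```

The fused form traverses the input once instead of twice and never materialises the intermediate list, which is the time and space saving the equation captures.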

However, we need to reason about potentially conflicting requirements. We might sometimes increase network latency or overall processing time in order to reduce the power usage of nodes, such as smart watches or battery-operated sensors deployed in difficult-to-reach locations. This has implications on the design of the Optimizer, which I am exploring.

jmtd Jonathan Dowland's Weblog

Report from DebCamp18

Planet Debian - Pre, 27/07/2018 - 12:14md

This was a nice DebCamp! Here is what I've been up to.

AppArmor Packaging and distro integration Policy Misc
  • Tried to give Thunderbird a custom reportbug script that includes the status of the AppArmor profile in bug reports, in order to ease the Thunderbird maintainers' task when triaging newly reported bugs. Sadly, computing this status requires root credentials so this won't work. Instead, explained in README.apparmor how to get this information, so that the Thunderbird maintainers can point users there when they have a doubt.
Perl team
  • Triaged and investigated a few packages that don't build reproducibly.
  • Identified a few new candidates for removal from sid.
  • Removing packages that depend on obsolete libraries from the GNOME 2 area:
    • updated status of this process that I've started at DebCamp17 last year ⇒ filed a bunch of removal bugs;
    • filed RC bugs to prevent a number of other packages from being shipped in Buster.
Misc intrigeri intrigeri's blog

DebConf18 invites you to Debian Open Day at National Chiao Tung University, Microelectronics and Information Research Center (NCTU MIRC), in Hsinchu

Planet Debian - Pre, 27/07/2018 - 12:00md

DebConf, the annual conference for Debian contributors and users interested in improving the Debian operating system, will be held in National Chiao Tung University, Microelectronics and Information Research Center (NCTU MIRC) in Hsinchu, Taiwan, from July 29th to August 5th, 2018. The conference is preceded by DebCamp, July 21th to July 27th, and the DebConf18 Open Day on July 28th.

Debian is an operating system consisting entirely of free and open source software, and is known for its adherence to the Unix and Free Software philosophies and for its extensiveness. Thousands of volunteers from all over the world work together to create and maintain Debian software, and more than 400 are expected to attend DebConf18 to meet in person and work together more closely.

The conference features presentations and workshops, and video streams are made available in real-time and archived.

The DebConf18 Open Day, Saturday, July 28, is open to the public with events of interest to a wide audience.

The detailed schedule of the Open Day's events includes, among others:

  • Questions and Answers Session with Minister Audrey Tang,
  • Debian Meets Smart City Applications with SZ Lin
  • a Debian Packaging Workshop,
  • panel discussion: Story of Debian contributors around the world,
  • sessions in English or Chinese about different aspects of the Debian project and community, and other free software projects like LibreOffice, Clonezilla and DRBL, LXDE/LXQt desktops, EzGo...

Everyone is welcome to attend, attendance is free, and it is a great opportunity for interested users to meet the Debian community.

The full schedule for the Open Day's events and the rest of the conference, as well as the video streaming, is available at the DebConf18 website.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the numerous sponsors for their commitment to DebConf18, particularly its Platinum Sponsor Hewlett Packard Enterprise, the Bureau of Foreign Trade, Ministry of Economic Affairs via the MEET TAIWAN program, and its venue sponsors, the National Chiao Tung University 國立交通大學 and the National Center for High-performance Computing 國家高速網路與計算中心.

For media contacts, please contact DebConf organization: 林上智 (SZ Lin), Cell: 0911-162297

Laura Arjona Reina, Héctor Orón Martínez Bits from Debian

Project cleanup

Planet Debian - Pre, 27/07/2018 - 11:45pd

For the past couple of days I've gone back over my golang projects, and updated each of them to have zero golint/govet warnings.

Nothing significant has changed, but it's nice to be cleaner.

I did publish a new project, which is a webmail client implemented in golang. Using it you can view the contents of a remote IMAP server in your browser:

  • View folders.
  • View messages.
  • View attachments
  • etc.

The (huge) omission is the ability to reply to messages, compose new mails, or forward/delete messages. Still, as a "read-only webmail" it does the job.

Not a bad hack, but I do have the problem that my own mailserver presents ~/Maildir over IMAP and I have ~1000 folders. Retrieving that list of folders is near-instant - but retrieving that list of folders and the unread-mail count of each folder takes over a minute.

For the moment I've just not handled folders-with-new-mail specially, but it is a glaring usability hole. There are solutions, the two most obvious:

  • Use an AJAX call to get/update the unread-counts in the background.
    • Causes regressions as soon as you navigate to a new page though.
  • Have some kind of open proxy-process to maintain state and avoid accessing IMAP directly.
    • That complicates the design, but would allow "instant" fetches of "stuff".

Anyway check it out if you like. Bug reports welcome.

Steve Kemp Steve Kemp's Blog

Sixth GSoC Report

Planet Debian - Pre, 27/07/2018 - 7:28pd

After finishing the evaluations of the SSO solutions, formorer asked me to look into integrating one of them into the existing Debian SSO infrastructure. Nacho is a Django application that basically provides a way of creating and managing client certificates. It does not do authentication itself, but uses Django's REMOTE_USER authentication source. I tested integration with lemonldap-ng, and after some trouble setting up the clone on my infrastructure (thanks to Enrico for pointing me in the right direction) the authentication using Apache's authnz module worked. To integrate lemonldap-ng I only had to add a ProxyPass and a ProxyPassReverse directive in the Apache config. I tested the setup using GitLab and it worked.

I’ve also added some additional features to nacho: on the one hand, I’ve added a management command that removes stale temporary accounts that have never been activated. The idea is to run that command at regular intervals via cron (or systemd timers). To implement that feature, I basically followed the howto for writing custom django-admin commands from the Django manual. Based on that knowledge I then implemented two other commands that provide backup and restore functionality. The backup command prints the contents of the LDAP database on stdout in LDIF format. The restore command expects LDIF on stdin and writes those values to the LDAP database. I also did some cleanup in the codebase and documented the test cases.

The third big project I looked into was implementing OAuth2 authentication for one of the existing websites that use the Debian SSO. I chose the new maintainer interface for that, because it is based on Django. I spent a lot of time looking for existing Django modules that implement OAuth2 authentication and tested some of them. There is for example django-allauth, which provides authentication against a lot of authentication providers. I did manage to create an additional authentication provider for Keycloak, but it seemed a bit overengineered to use such a big application for only one provider. So I sat down and wrote a small Django app that does OAuth2 authentication. As soon as that worked with a clean Django installation, it took just some small adjustments to use it for the new maintainer interface. You can find the branch on Salsa.
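The first leg of the OAuth2 authorization-code flow that such an app implements can be sketched as building the redirect to the provider's authorization endpoint (the endpoint URL, client ID and redirect URI below are hypothetical placeholders, not the real Keycloak configuration):

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(endpoint, client_id, redirect_uri, scope="openid"):
    """Build the URL the browser is redirected to; state protects against CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return endpoint + "?" + urlencode(params), state

url, state = build_authorize_url(
    "https://sso.example.org/auth",            # hypothetical authorization endpoint
    "demo-client",
    "https://app.example.org/oauth/callback",  # hypothetical callback
)
print(url)
```

The provider later redirects back to the callback with a `code` parameter, which the app exchanges server-side for an access token; checking that the returned `state` matches the stored one is what guards the login against CSRF.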

bisco Gsoc18 on

Local qemu/kvm virtual machines, 2018

Planet Debian - Pre, 27/07/2018 - 7:00pd

For work I run a personal and a work VM on my laptop. When I was at VMware I dogfooded internal builds of Workstation, which worked well, but it was always a challenge to keep its guest additions consistently building against the latest kernels. About five and a half years ago, the only practical alternative was VirtualBox. IIRC SPICE maybe didn't even exist or was very early, and while VNC is OK for fiddling with something, it is completely impractical for primary daily use.

VirtualBox is fine, but there is the promised land of all the great features of qemu/kvm and many recent improvements in 3D integration always calling. I'm trying all this on my Fedora 28 host, with a Fedora 28 guest (which has been in-place upgraded since Fedora 19), so everything is pretty recent. Periodically I try this conversion again, but, spoiler alert, have not yet managed to get things quite right.

As I happened to close an IRC window, somehow my client seemed to crash X11. How odd ... so I thought, everything has just disappeared anyway; I might as well try switching again.

Image conversion has become much easier. My primary VM has a number of snapshots, so I used the VirtualBox GUI to clone the VM and followed the prompts to create the clone with squashed snapshots. Then simply convert the VDI to a RAW image with

$ qemu-img convert -p -f vdi -O raw image.vdi image.raw

Note if you forget the progress meter, send the pid a SIGUSR1 to get it to spit out its progress.

virt-manager has come a long way too. Creating a new VM was trivial. I wanted to make sure I was using all the latest SPICE gl etc., stuff. Here I hit some problems with what seemed to be permission denials on drm devices before even getting the machine started. Something suggested using libvirt in session mode, with the qemu:///session URL -- which seemed more like what I want anyway (a VM for only my user). I tried that, put the converted raw image in my home directory and the VM would boot. Yay!

It was a bit much to expect it to work straight away; while GRUB did start, it couldn't find the root disks. In hindsight, you should probably generate a non-host specific initramfs before converting the disk, so that it has a larger selection of drivers to find the boot devices (especially the modern virtio drivers). On Fedora that would be something like

sudo dracut --no-hostonly --regenerate-all -f

As it turned out, I "simply" attached a live-cd and booted into that, then chrooted into my old VM and regenerated the initramfs for the latest kernel manually. After this the system could find the LVM volumes in the image and would boot.

After a fiddly start, I was hopeful. The guest kernel dmesg DRM sections showed everything was looking good for 3D support, and glxinfo showed all the virtio-gpu stuff looking correct. However, I could not get what I hoped was trivial automatic window resizing happening no matter what. After a bunch of searching, ensuring my agents were running correctly, etc., it turns out that this has to be implemented by the window manager now, and it is not supported by my preferred XFCE. Note that you can do this manually with xrandr --output Virtual-1 --auto to get it to resize, but that's rather annoying.

I thought that it is 2018 and I could live with GNOME, so I installed that. Then I tried to ping something, and got another SELinux denial (on the host) from qemu-system-x86 creating an icmp_socket. I am guessing this has to do with the interaction between libvirt session mode and the usermode networking device (bug filed). I figured I'd limp along without ICMP and look into the details later...

Finally when I moved the window to my portrait-mode external monitor, the SPICE window expanded but the internal VM resolution would not expand to the full height. It looked like it was taking the height from the portrait-orientation width.

Unfortunately, forced swapping of environments and still having two/three non-trivial bugs to investigate exceeded my practical time to fiddle around with all this. I'll stick with VirtualBox for a little longer; 2020 might be the year!

Ian Wienand Technovelty

Starting your first Python project

Planet Debian - Enj, 26/07/2018 - 6:25md

There's a gap between learning the syntax of the Python programming language and being able to build a project from scratch. When you finish reading your first tutorial or book about Python, you're good to go for writing a Fibonacci sequence calculator, but that doesn't help you start your actual project.

There are a few questions that pop up in your mind, and that's normal. Let's take a stab at those!

Which Python version should I use?

It's not a secret that Python has several versions that are supported at the same time. Each minor version of the interpreter gets bugfix support for 18 months and security support for 5 years. For example, Python 3.7, released on 27th June 2018, will be supported until Python 3.8 is released, around October 2019 (15 months later). Around December 2019, the last bugfix release of Python 3.7 will occur, and everyone is expected to switch to Python 3.8.

Current Python 3.7/3.8 release schedule

That's important to be aware of as the version of the interpreter will be entirely part of your software lifecycle.

On top of that, we should take into consideration the Python 2 versus Python 3 question. That still might be an open question for people working with (very) old platforms.

In the end, the question of which version of Python one should use is well worth asking.

Here are some short answers:

  • Versions 2.6 and older are really obsolete by now, so you don't have to worry about supporting them at all. If you intend to support these older versions anyway, be warned that you'll have an even harder time ensuring that your program supports Python 3.x as well. You might still run into Python 2.6 on some older systems; if that's the case, sorry for you!
  • Version 2.7 is and will remain the last version of Python 2.x. I don't think there is a system where Python 3 is not available one way or the other nowadays. So unless you're doing archeology once again, forget it. Python 2.7 will not be supported after 2020, so the last thing you want to do is build new software on top of it.
  • Version 3.7 is the most recent version of the Python 3 branch as of this writing, and that's the one you should target. Most recent operating systems ship at least 3.6, so in the case where you'd target those, you can make sure your application also works with 3.7.
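Whatever version you settle on, a program can refuse to start on an interpreter older than the one it targets; a minimal sketch:

```python
import sys

MINIMUM = (3, 6)  # the oldest interpreter this program claims to support

if sys.version_info < MINIMUM:
    raise SystemExit("Python %d.%d or newer is required" % MINIMUM)

print("running on Python %d.%d.%d" % sys.version_info[:3])
```

Failing fast with a clear message beats a cryptic SyntaxError halfway through startup on an interpreter you never intended to support.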
Project Layout

Starting a new project is always a puzzle. You never know how to organize your files. However, once you have a proper understanding of the best practice out there, it's pretty simple.

First, your project structure should be fairly basic. Use packages and hierarchy wisely: a deep hierarchy can be a nightmare to navigate, while a flat hierarchy tends to become bloated.

Then, avoid making a few common mistakes. Don't leave unit tests outside the package directory. These tests should be included in a sub-package of your software so that:

  • They don't get automatically installed as a top-level tests module by setuptools (or some other packaging library) by accident.
  • They can be installed and eventually used by other packages to build their unit tests.

The following diagram illustrates what a standard file hierarchy should look like:

A Python project's files and directories hierarchy is the standard name for the Python installation script, along with its companion setup.cfg, which should contain the installation script configuration. When run, installs your package using the Python distribution utilities.

You can also provide valuable information to users in README.rst (or README.txt, or whatever filename suits your fancy). Finally, the docs directory should contain the package's documentation in reStructuredText format, that will be consumed by Sphinx.

Packages often have to provide extra data, such as images, shell scripts, and so forth. Unfortunately, there's no universally accepted standard for where these files should be stored. Just put them wherever makes the most sense for your project: depending on their functions, for example, Web application templates could go in a templates directory in your package root directory.

The following top-level directories also frequently appear:

  • etc for sample configuration files.
  • tools for shell scripts or related tools.
  • bin for binary scripts you've written that will be installed by

There's another design issue that I often encounter. When creating files or modules, some developers create them based on the type of code they will store, for example one module holding all the exceptions and another holding all the utility functions. This is a terrible approach. It doesn't help any developer when navigating the code. The code organization doesn't benefit from this, and it forces readers to jump between files for no good reason. There are a few exceptions, such as libraries in some instances, because they do expose a complete API for consumers. However, other than that, think twice before doing that in your application.

Organize your code based on features, not based on types.

Creating a module directory with just an file in it is also a bad idea. For example, don't create a directory named hooks with a single file named hooks/ in it, where a simple module would have been enough instead. If you create a directory, it should contain several other Python files that belong to the category the directory represents.

Also be very careful about the code that you put in files: it is going to be called and executed the first time that any of the modules contained in the directory is loaded. This can have unwanted side effects. Those files should be empty most of the time, unless you know what you're doing.

Version Numbering

Software version needs to be stamped to know which one is more recent than another. As every piece of code evolves, it's a requirement for every project to be able to organize its timeline.

There are an infinite number of ways to organize your version numbers, but PEP 440 introduces a version format that every Python package, and ideally every application, should follow. This way, programs and packages will be able to quickly and reliably identify which versions of your package they require.

PEP 440 defines the following format for version numbering:

N[.N]+[{a|b|c|rc}N][.postN][.devN]

This allows for standard numbering like 1.2 or 1.2.3.

However, please do note that:

  • 1.2 is equivalent to 1.2.0; 1.3.4 is equivalent to, and so forth.
  • Versions matching N[.N]+ are considered final releases.
  • Date-based versions such as 2013.06.22 are considered invalid. Automated tools designed to detect PEP 440-format version numbers will (or should) raise an error if they detect a version number greater than or equal to 1980.

Final components can also use the following format:

  • N[.N]+aN (e.g. 1.2a1) denotes an alpha release, a version that might be unstable and missing features.
  • N[.N]+bN (e.g. 2.3.1b2) denotes a beta release, a version that might be feature-complete but still buggy.
  • N[.N]+cN or N[.N]+rcN (e.g. 0.4rc1) denotes a (release) candidate, a version that might be released as the final product unless significant bugs emerge. While the rc and c suffixes have the same meaning, if both are used, rc releases are considered to be newer than c releases.

These suffixes can also be used:

  • .postN (e.g. 1.4.post2) indicates a post-release. These are typically used to address minor errors in the publication process (e.g. mistakes in release notes). You shouldn't use .postN when releasing a bugfix version; instead, you should increment the minor version number.
  • .devN (e.g. 2.3.4.dev3) indicates a developmental release. This suffix is discouraged because it is harder for humans to parse. It indicates a prerelease of the version that it qualifies: e.g. 2.3.4.dev3 indicates the third developmental version of the 2.3.4 release, before any alpha, beta, candidate or final release.

This scheme should be sufficient for most common use cases.

You might have heard of Semantic Versioning, which provides its own guidelines for version numbering. This specification partially overlaps with PEP 440, but unfortunately, they're not entirely compatible. For example, Semantic Versioning's recommendation for prerelease versioning uses a scheme such as 1.0.0-alpha+001 that is not compliant with PEP 440.

Many DVCS platforms, such as Git and Mercurial, can generate version numbers using an identifying hash (for Git, refer to git describe). Unfortunately, this system isn't compatible with the scheme defined by PEP 440: for one thing, identifying hashes aren't orderable.

Those are only some of the first questions you might have. If there are any others you would like me to answer, feel free to write a comment below. Same goes if you have any other pieces of advice you'd like to share!

Julien Danjou Julien Danjou


Planet Debian - Enj, 26/07/2018 - 2:36md

This work has been brought to you by the wonderful DebCamp.

I needed to reproduce a build issue on an i386 architecture, so I started going through the instructions for finding a porterbox and setting up a chroot.

And then I though, this is long and boring. A program could do that.

So I created a program to do that:

$ debug-on-porterbox --help
usage: debug-on-porterbox [-h] [--verbose] [--debug] [--cleanup] [--git]
                          [--reuse] [--dist DIST] [--host HOST]
                          arch [package]

set up a build environment to debug a package on a porterbox

positional arguments:
  arch           architecture name
  package        package name

optional arguments:
  -h, --help     show this help message and exit
  --verbose, -v  verbose output
  --debug        debug output
  --cleanup      cleanup a previous build, removing porterbox data and git
                 remotes
  --git          setup a git clone of the current branch
  --reuse        reuse an existing session
  --dist DIST    distribution (default: sid)
  --host HOST    hostname to use (autodetected by default)

On a source directory, you can run debug-on-porterbox i386 and it will:

  • find out the program name from debian/control (but if you provide it explicitly, you do not need to be in the source directory)
  • look up Debian's LDAP to find a porterbox for that architecture
  • log into the machine via ssh
  • create a work directory
  • create the chroot, update it, install build dependencies
  • get the source with apt-get source
  • alternatively, if using --git and running inside a git repo, create a git repo on the porterbox, push the local git branch to it, and add a remote to push/pull to/from it

The only thing left for you to do is to log into the machine debug-on-porterbox tells you, run the command it gives you to enter the chroot, and debug away.

At the end you can clean everything up, including the remote chroot and the git remote in the local repo, with: debug-on-porterbox [--git] --cleanup i386

The code is on Salsa: have fun!

Enrico Zini Enrico Zini: pdo


Subscribe to AlbLinux agreguesi