
Feed aggregator

Recently I'm not writing any code.

Planet Debian - Pre, 08/06/2018 - 10:58md
Recently I'm not writing any code.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201806.html.en Dancer's daily hackings

Elsevier CiteScore™ missing the top conference in data mining

Planet Debian - Pre, 08/06/2018 - 4:01md

Elsevier Scopus is crap.

It’s really time to abandon Elsevier. German universities canceled their subscriptions. Sweden has apparently now begun to do so, too. Because Elsevier (and to a lesser extent, other publishers) overcharge universities badly.

Meanwhile, Elsevier still struggles to pretend it offers additional value. For example with the ‘‘horribly incomplete’’ Scopus database. For computer science, Scopus etc. are outright useless.

Elsevier just advertised (spammed) their “CiteScore™ metrics”. “Establishing a new standard for measuring serial citation impact”. Not.

“Powered by Scopus, CiteScore metrics are a comprehensive, current, transparent and” … horribly incomplete for computer science.

An excerpt from Elsevier CiteScore™:

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Scopus coverage years: from 2002 to 2003, from 2005 to 2015 (coverage discontinued in Scopus)

ACM SIGKDD is the top conference for data mining (there are others like NIPS with more focus on machine learning - I’m referring to the KDD subdomain).

But for Elsevier, it does not seem to be important.

Forget Elsevier. Also forget Thomson Reuters’ ISI Web of Science. It’s just the same publisher-oriented crap.

Communications of the ACM: Research Evaluation For Computer Science

Niklaus Wirth, Turing Award winner, appears for minor papers from indexed publications, not his seminal 1970 Pascal report. Knuth’s milestone book series, with an astounding 15,000 citations in Google Scholar, does not figure. Neither do Knuth’s three articles most frequently cited according to Google.

Yes, if you ask Elsevier or Thomson Reuters, Donald Knuth’s “The Art of Computer Programming” does not matter. Because it is not published by Elsevier.

They also ignore the fact that open access is quickly gaining importance. Many very influential papers such as “word2vec” were first published on the open-access preprint server arXiv. Some were never even published anywhere else.

According to Google Scholar, the top venue for artificial intelligence is arXiv cs.LG, and stat.ML is ranked 5. And the top venue for computational linguistics is arXiv cs.CL. In databases and information systems the top venue WWW publishes via ACM, but using open-access links from their web page. The second, VLDB, operates their own server to publish PVLDB as open-access. And number three is arXiv cs.SI, number five is arXiv cs.DB.

Time to move to open-access, and away from overpriced publishers. If you want your paper to be read and cited, publish open-access and not with expensive walled gardens like Elsevier.

Erich Schubert https://www.vitavonni.de/blog/ Techblogging

My Debian Activities in May 2018

Planet Debian - Pre, 08/06/2018 - 12:51pd

FTP master

This month I accepted 304 packages and rejected 20 uploads. The overall number of packages that got accepted this month was 420.

Debian LTS

This was my forty-seventh month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 24.25h. During that time I did LTS uploads of:

    [DLA 1387-1] cups security update for one CVE
    [DLA 1388-1] wireshark security update for 9 CVEs

I continued to work on the bunch of wireshark CVEs and sorted out all that did not affect Jessie or Stretch. At the end I sent my debdiff with patches for 20 Jessie CVEs and 38 CVEs for Stretch to Moritz so that he could compare them with his own work. Unfortunately he didn’t use all of them.

The CVEs for krb5 were marked as no-dsa by the security team, so there was no upload for Wheezy. Building the package for cups was a bit annoying as the test suite didn’t want to run in the beginning.

I also tested the apache2 package from Roberto twice and let the package do a second round before the final upload.

Last but not least I did a week of frontdesk duties and prepared my new working environment for Jessie LTS and Wheezy ELTS.

Other stuff

During May I did uploads of …

  • libmatthew-java to fix an FTBFS with Java 9 due to a disappearing javah. In the end it resulted in a new upstream version.

I also prepared the next libosmocore transition by uploading several osmocom packages to experimental. This has to continue in June.

Further, I sponsored some glewlwyd packages for Nicolas Mora. He is well on his way to becoming a Debian Maintainer.

Last but not least I uploaded the new package libterm-readline-ttytter-perl, which is needed to bring readline functionality to oysttyer, a command-line Twitter client.

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

The Psion Gemini

Planet Debian - Enj, 07/06/2018 - 3:04md

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship, which is not bad for an Indiegogo project! Out of the box, I flashed it using the (at the time) non-approved Linux flashing tool, and failed to back up the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey). After a few more hours / days of playing around I got the IMEI number back into the Gemini and put the stock Android image back on. At this point I didn't have working Bluetooth or wifi, which was a bit of a pain too; it turns out the MAC addresses for those are also stored in the NVRAM (doh!). That's now mostly working through a bit of collaboration with another Gemini owner: my Gemini currently uses the MAC addresses from his device... which I'll need to fix in the next month or so, else we'll probably have a MAC address collision.

Overall, it's not a bad machine. The keyboard isn't quite as good as I was hoping for, and the phone functionality is not bad once you're on a call, but not great until you're on a call; I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full-time phone. It is, however, really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you: the keyboard is better than using the on-screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes, and I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

Brett Parker iDunno@sommitrealweird.co.uk The World of SommitRealWeird.

Serge Hallyn: TPM 2.0 in qemu

Planet Ubuntu - Dje, 03/06/2018 - 5:41pd

If you want to test software which exploits TPM 2.0 functionality inside the qemu-kvm emulator, this can be challenging because the software stack is still quite new. Here is how I did it.

First, you need a new enough qemu. The version in Ubuntu xenial does not suffice. The 2.11 version in Ubuntu bionic does. I believe the 2.10 version in artful is also too old, but I might be mis-remembering; I haven’t tested that lately.

The two pieces of software I needed were libtpms and swtpm. For libtpms I used the tpm2-preview.rev146.v2 branch, and for swtpm I used the tpm2-preview.v2 branch.

apt -y install libtool autoconf tpm-tools expect socat libssl-dev

git clone https://github.com/stefanberger/libtpms
(cd libtpms && git checkout tpm2-preview.rev146.v2 && ./bootstrap.sh && \
 ./configure --prefix=/usr --with-openssl --with-tpm2 && make && make install)

git clone https://github.com/stefanberger/swtpm
(cd swtpm && git checkout tpm2-preview.v2 && ./bootstrap.sh && \
 ./configure --prefix=/usr --with-openssl --with-tpm2 && make && make install)

For each qemu instance, I create a tpm device. The relevant part of the script I used looks like this:

#!/bin/bash
i=0
while [ -d /tmp/mytpm$i ]; do
    let i=i+1
done
tpm=/tmp/mytpm$i
mkdir $tpm
echo "Starting $tpm"
sudo swtpm socket --tpmstate dir=$tpm --tpm2 \
    --ctrl type=unixio,path=$tpm/swtpm-sock &
sleep 2

# this should be changed to a netstat query
next_vnc() {
    vncport=0
    port=5900
    while nc -z 127.0.0.1 $port; do
        port=$((port + 1))
        vncport=$((vncport + 1))
    done
    echo $vncport
}

nextvnc=$(next_vnc)
sudo kvm -drive file=${disk},format=raw,if=virtio,cache=none \
    -chardev socket,id=chrtpm,path=$tpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -vnc :$nextvnc -m 2048

Gustavo Silva: Installing Ubuntu 18.04 With NVidia High End Graphics Card

Planet Ubuntu - Dje, 03/06/2018 - 2:00pd

Last week, I was presented with a message saying my Ubuntu partition was already full. I dug into what was going on and found out Google Chrome was flooding /var/log/syslog. I had 40GB of logs with a weird error message:

Jun 1 12:22:35 machina update-notifier.desktop[5460]: [15076:15076:0100/000000.062848:ERROR:zygote_linux.cc(247)] Error reading message from browser: Socket operation on non-socket (88)

I’m pretty certain it was related to Chrome because the only time the system stabilized enough was when I killed its process. I had two extensions, an ad blocker and Todoist, so I don’t think it was related to any suspicious extension.

Anyway, the sprint had ended the previous Friday, so I thought it might be a good time to format the computer. Maybe try a new Linux distro, other than Ubuntu. And this is the story of how I spent close to a day figuring out which Linux distributions would work on my hardware.

Here’s the thing: in April, I tried installing Ubuntu and failed miserably. The installer would freeze while loading. It was not a faulty image. The same happened again yesterday. Then I decided to give Manjaro Linux a try. It wouldn’t work properly: sometimes the boot would freeze, other times it would actually work. “That’s it, I’m moving to Fedora.” The installer worked, but no start-up would complete. So I investigated and wrote nomodeset in the grub options (when you enter the grub menu, just press e to get access to the boot options). It worked, the system was booting, but at 800x600. And there was no way I could easily install nouveau (the open source drivers) or the proprietary drivers.

It was then time to give Ubuntu another go. And I didn’t want to give up on 18.04. It is LTS after all and it has to work. My graphics card has always brought me issues, especially being on the mid-to-high-end side of things (GTX 1060).

The first step was to add the nomodeset command to the grub options again - hey, the installer launched instantly at 800x600 resolution. It’s enough to get it done.

Then I had a major issue: Ubuntu did not work after the first login. The screen turned black and nothing happened. Fine!!! On the next boot, before logging in via GDM, I pressed ALT + F2 to enter TTY mode and then ran the set of commands to enable the graphics card to work properly and set other important flags to allow the system to reboot, shut down and not crash when opening the settings screen after booting up (these three errors have always been fixed after adding these options).

I needed to add acpi=force to the grub options again (run sudo vi /etc/default/grub and edit the line with GRUB_CMDLINE_LINUX), and add my mouse configuration manually - it’s a Mad Catz R.A.T. 3.

At that point, I decided to try removing the nomodeset option from GRUB_CMDLINE_LINUX, and the resolution went back to normal. The next step was to install NVidia’s graphics drivers, using the proprietary drivers tab (in GNOME, you can open that panel by running software-properties-gtk --open-tab=4).

And that’s it. It just worked. Everything was back to normal. Potentially the same kind of solution would have worked in Fedora, but since I’ve been using Ubuntu for the past 3 years, getting help has become much easier, as has debugging error logs.

Since version 17.04, and since I got this computer, installing Linux has always been difficult. I hope this story is useful for someone with a high-end laptop who has encountered issues with NVidia graphics card drivers. Hopefully these commands will at least help you get one step further towards a working system.

Thanks for reading,

gsilvapt

Launchpad News: Launchpad news, May 2018

Planet Ubuntu - Sht, 02/06/2018 - 9:44md

Here’s a brief changelog for this month.

Build farm
  • Send fast_cleanup: True to virtualised builds, since they can safely skip the final cleanup steps
Code
  • Add spam controls to code review comments (#1680746)
  • Only consider the most recent nine successful builds when estimating recipe build durations (#1770121)
  • Make updated code import emails more informative
Infrastructure
  • Upgrade to Twisted 17.9.0
  • Get the test suite passing on Ubuntu 18.04 LTS
  • Allow admins to configure users such that unsigned email from them will be rejected, as a spam defence (#1714967)
Snappy
  • Prune old snap files that have been uploaded to the store; this cleaned up about 5TB of librarian space
  • Make the snap store client cope with a few more edge cases (#1766911)
  • Allow branches in snap store channel names (#1754405)
Soyuz (package management)
  • Add DistroArchSeries.setChrootFromBuild, allowing setting a chroot from a file produced by a live filesystem build
  • Disambiguate URLs to source package files in the face of filename clashes in imported archives
  • Optimise SourcePackagePublishingHistory:+listing-archive-extra (#1769979)
Miscellaneous
  • Disable purchasing of new commercial subscriptions; existing customers have been contacted, and people with questions about this can contact Canonical support
  • Various minor revisions to the terms of service from Canonical’s legal department, and a clearer data privacy policy

Costales: Podcast Ubuntu y otras hierbas S02E07: iPads and Chromebooks in schools and Android applications on Ubuntu Phone

Planet Ubuntu - Sht, 02/06/2018 - 3:29md
On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales chat about the following topics:

  • The adoption of iPads and Chromebooks in education.
  • What being able to run Android applications on Ubuntu Phone will mean.

Episode 7 of the second season
The podcast is available to listen to on:

Andres Rodriguez: MAAS 2.4.0 (final) released!

Planet Ubuntu - Mër, 30/05/2018 - 8:05md
Hello MAASters! I’m happy to announce that MAAS 2.4.0 (final) is now available! This new MAAS release introduces a set of exciting features and improvements that improve performance, stability and usability of MAAS. MAAS 2.4.0 will be immediately available in the PPA, but it is in the process of being SRU’d into Ubuntu Bionic.

PPA’s Availability

MAAS 2.4.0 is currently available for Ubuntu Bionic in ppa:maas/stable for the coming week.

sudo add-apt-repository ppa:maas/stable
sudo apt-get update
sudo apt-get install maas
What’s new?

Most notable MAAS 2.4.0 changes include:
  • Performance improvements across the backend & UI.
  • KVM pod support for storage pools (over API).
  • DNS UI to manage resource records.
  • Audit Logging
  • Machine locking
  • Expanded commissioning script support for firmware upgrades & HBA changes.
  • NTP services now provided with Chrony.
For the full list of features & changes, please refer to the release notes: https://docs.maas.io/2.4/en/release-notes

Matthew Helmke: Ubuntu Unleashed 2019 and other books presale discount

Planet Ubuntu - Enj, 24/05/2018 - 6:59pd

Starting Thursday, May 24th, the about-to-be-released 2019 edition of my book, Ubuntu Unleashed, will be listed in InformIT’s Summer Coming Soon sale, which runs through May 29th. The discount is 40% off print and 45% off eBooks; no discount code is required. Here’s the link: InformIT Summer Sale.

Xubuntu: New Wiki pages for Testers

Planet Ubuntu - Mër, 23/05/2018 - 6:49md

During the last few weeks of the 18.04 (Bionic Beaver) cycle, we had 2 people drop by in our development channel trying to respond to the call for testers from the Development and QA Teams.

It quickly became apparent to me that I was having to repeat myself in order to make it “basic” enough for someone who had never tested for us to understand what I was trying to put across.

After pointing to the various resources we have, and those other flavours use, it transpired that they both would have preferred something a bit easier to start with.

So I asked them to write it for us all.

Rather than belabour my point here, I’ve asked both of them to write a few words about what they needed and what they have achieved for everyone.

Before they get that chance – I would just like to thank them both for the hours of work they have put in drafting, tweaking and getting the pages into a position where we can tell you all of their existence.

You can see the fruits of their labour at our updated web page for Testers and the new pages we have at the New Tester wiki.

Kev
On behalf of the Xubuntu Development and QA Teams.

“I see the whole idea of OS software and communities helping themselves as a breath of fresh air in an ever more profit obsessed world (yes, I am a cynical old git).

I really wanted to help, but just didn’t think that I had any of the skills required, and the guides always seemed to assume a level of knowledge that I just didn’t have.

So, when I was asked to help write a ‘New Testers’ guide for my beloved Xubuntu I absolutely jumped at the chance, knowing that my ignorance was my greatest asset.

I hope what resulted from our work will help those like me (people who can easily learn but need to be told pretty much everything from the bottom up) to start testing and enjoy the warm, satisfied glow of contributing to their community.
Most of all, I really enjoyed collaborating with some very nice people indeed.”
Leigh Sutherland

“I marvel at how we live in an age in which we can collaborate and share with people all over the world – as such I really like the ideas of free and open source. A long time happy Xubuntu user, I felt the time to be involved, to go from user-only to contributor was long overdue – Xubuntu is a community effort after all. So, when the call for testing came last March, I dove in. At first testing seemed daunting, complicated and very technical. But, with leaps and bounds, and the endless patience and kindness of the Xubuntu-bunch over at Xubuntu-development, I got going. I felt I was at last “paying back”. When flocculant asked if I would help him and Leigh to write some pages to make the information about testing more accessible for users like me, with limited technical skills and knowledge, I really liked the idea. And that started a collaboration I really enjoyed.

It’s my hope that with these pages we’ve been able to get across the information needed by someone like I was when I started (a technical newbie, a noob) to simply get set up to get testing.

It’s also my hope people like you will tell us where and how these pages can be improved, with the aim of making the first forays into testing as gentle and easy as possible. Because without testing we as a community cannot make Xubuntu as good as we’d want it to be.”
Willem Hobers

The diameter of German+English

Planet Debian - Mër, 23/05/2018 - 8:30pd

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: removing attributes like [ugs.], [regional], {f}, the “to” in front of verbs, and other embellishments.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest shortest path between two nodes, i.e. the longest path between two nodes that you cannot connect with a shorter path. (A rough sketch of such a search follows below.)
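A minimal sketch of such a diameter search, assuming the dictionary has already been cleaned into a two-column, tab-separated file (the file name and the cleanup details are placeholders, not the exact code I ran):

#!/usr/bin/env python3
# Sketch: build the bipartite German/English graph from a cleaned,
# tab-separated dictionary dump and find a longest shortest path
# (the diameter) by running a BFS from every node.
from collections import defaultdict, deque

def load_graph(path="dictcc-cleaned.tsv"):  # placeholder file name
    graph = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if line.startswith("#") or len(parts) < 2:
                continue
            de, en = parts[0].strip(), parts[1].strip()
            if de and en:
                # tag each side so German and English nodes stay distinct
                graph[("de", de)].add(("en", en))
                graph[("en", en)].add(("de", de))
    return dict(graph)

def bfs_farthest(graph, start):
    # return the node farthest from start (within its component) and its distance
    dist = {start: 0}
    queue = deque([start])
    farthest, far_dist = start, 0
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                if dist[nxt] > far_dist:
                    farthest, far_dist = nxt, dist[nxt]
                queue.append(nxt)
    return farthest, far_dist

def diameter(graph):
    # exact diameter: one BFS per node, so this is the slow part
    best = (None, None, 0)
    for node in graph:
        end, d = bfs_farthest(graph, node)
        if d > best[2]:
            best = (node, end, d)
    return best

if __name__ == "__main__":
    g = load_graph()
    start, end, d = diameter(g)
    print(start, "and", end, "are", d, "translations apart")

In practice you would want to restrict the search (for example to the largest connected component) or use heuristics such as a double-sweep BFS lower bound, rather than running the full quadratic scan.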

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: the English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).
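A BFS shortest-path helper over the same graph would be enough for that; here is a sketch reusing load_graph and deque from the code above (the example word pair is hypothetical):

def find_path(graph, src, dst):
    # BFS shortest path between two tagged words, or None if they are not connected
    if src not in graph or dst not in graph:
        return None
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return list(reversed(path))
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# e.g. find_path(g, ("de", "frech"), ("en", "knocking"))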

Or I could throw in additional languages!

What would you do?

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Home Automation: Graphing MQTT sensor data

Planet Debian - Mar, 22/05/2018 - 11:28md

So I’ve setup a MQTT broker and I’m feeding it temperature data. How do I actually make use of this data? Turns out collectd has an MQTT plugin, so I went about setting it up to record temperature over time.

First problem was that although the plugin supports MQTT/TLS it doesn’t support it for subscriptions until 5.8, so I had to backport the fix to the 5.7.1 packages my main collectd host is running.

The other problem is that collectd is picky about the format it accepts for incoming data. The topic name should be of the format <host>/<plugin>-<plugin_instance>/<type>-<type_instance> and the data is <unixtime>:<value>. I modified my MQTT temperature reporter to publish to collectd/mqtt-host/mqtt/temperature-study, changed the publish line to include the timestamp:

publish.single(pub_topic, str(time.time()) + ':' + str(temp), hostname=Broker, port=8883, auth=auth, tls={})

and added a new collectd user to the Mosquitto configuration:

mosquitto_passwd -b /etc/mosquitto/mosquitto.users collectd collectdpass

And granted it read-only access to the collectd/ prefix via /etc/mosquitto/mosquitto.acl:

user collectd topic read collectd/#

(I also created an mqtt-temp user with write access to that prefix for the Python script to connect to.)
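To make the publisher side concrete, here is a minimal standalone sketch of such a temperature reporter (the broker name, the credentials and the read_temperature() helper are placeholders for your own setup; the topic and payload follow the collectd format described above):

#!/usr/bin/env python3
# Minimal MQTT publisher sketch for collectd's MQTT plugin.
# Topic format: <prefix>/<host>/<plugin>-<plugin_instance>/<type>-<type_instance>
# Payload format: <unixtime>:<value>
import time
import paho.mqtt.publish as publish

BROKER = "mqtt-host"                # placeholder broker hostname
AUTH = {"username": "mqtt-temp",    # user with write access to collectd/#
        "password": "changeme"}     # placeholder password

def read_temperature():
    # placeholder for the real sensor read
    return 21.5

pub_topic = "collectd/mqtt-host/mqtt/temperature-study"
payload = str(time.time()) + ':' + str(read_temperature())
publish.single(pub_topic, payload, hostname=BROKER, port=8883,
               auth=AUTH, tls={})

Run from cron or a loop on the Pi, collectd should then turn each message into a data point for the temperature-study instance.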

Then, on the collectd host, I created /etc/collectd/collectd.conf.d/mqtt.conf containing:

LoadPlugin mqtt
<Plugin "mqtt">
  <Subscribe "ha">
    Host "mqtt-host"
    Port "8883"
    User "collectd"
    Password "collectdpass"
    CACert "/etc/ssl/certs/ca-certificates.crt"
    Topic "collectd/#"
  </Subscribe>
</Plugin>

I had some initial problems when I tried setting CACert to the Let’s Encrypt certificate; it actually wants to point to the “DST Root CA X3” certificate that signs that. Or using the full set of installed root certificates as I’ve done works too. Of course the errors you get back are just of the form:

collectd[8853]: mqtt plugin: mosquitto_loop failed: A TLS error occurred.

which is far from helpful. Once that was sorted, collectd started happily receiving data via MQTT and producing graphs for me.

This is a pretty long-winded way of ending up with some temperature graphs - I could have just graphed the temperature sensor using collectd on the Pi to send it to the monitoring host, but it has allowed a simple MQTT broker, publisher + subscriber setup with TLS and authentication to be constructed and confirmed as working.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

rust for cortex-m7 baremetal

Planet Debian - Mar, 22/05/2018 - 10:35md
This is a reminder for myself: if you want to install Rust for a bare-metal Cortex-M7 target, note that this seems to be a tier 3 platform:

https://forge.rust-lang.org/platform-support.html

Highlighting the relevant part:

Target                   std  rustc  cargo  notes
...
msp430-none-elf          *                  16-bit MSP430 microcontrollers
sparc64-unknown-netbsd   ✓    ✓             NetBSD/sparc64
thumbv6m-none-eabi       *                  Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi      *                  Bare Cortex-M4, M7
thumbv7em-none-eabihf    *                  Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi       *                  Bare Cortex-M3
...
x86_64-unknown-openbsd   ✓    ✓             64-bit OpenBSD
In order to enable the relevant support, use the nightly build and add the relevant target:
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)

If not using nightly, switch to that:

eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Add the needed target:
eddy@feodora:~/usr/src/rust-uc$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
  5.4 MiB /   5.4 MiB (100 %)   5.1 MiB/s ETA:   0 s               
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

installed targets for active toolchain
--------------------------------------

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)

Then compile with --target.
eddyp noreply@blogger.com Rambling around foo

Reproducible Builds: Weekly report #160

Planet Debian - Mar, 22/05/2018 - 8:30pd

Here’s what happened in the Reproducible Builds effort between Sunday May 13 and Saturday May 19 2018:

Packages reviewed and fixed, and bugs filed

In addition, build failure bugs were reported by Adrian Bunk (2) and Gilles Filippini (1).

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages.

reprotest development

reprotest is our tool to build software and check it for reproducibility.

  • kpcyrd:
  • Chris Lamb:
    • Update references to Alioth now that the repository has migrated to Salsa. (1, 2, 3)
jenkins.debian.net development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Levente Polyak and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

Kubuntu General News: Plasma 5.12.5 bugfix update for Kubuntu 18.04 LTS – Testing help required

Planet Ubuntu - Hën, 21/05/2018 - 5:36md

Are you using Kubuntu 18.04, our current LTS release?

We currently have the Plasma 5.12.5 LTS bugfix release available in our Updates PPA, but we would like to provide the important fixes and translations in this release to all users via updates in the main Ubuntu archive. This would also mean these updates would be provided by default with the 18.04.1 point release ISO expected in late July.

The Stable Release Update tracking bug can be found here: https://bugs.launchpad.net/ubuntu/+source/plasma-desktop/+bug/1768245

A launchpad.net account is required to post testing feedback as bug comments.

The Plasma 5.12.5 changelog can be found at: https://www.kde.org/announcements/plasma-5.12.4-5.12.5-changelog.php

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.12.4?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “weather plasmoid fetches BBC weather data.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Details on how to enable the propose repository can be found at: https://wiki.ubuntu.com/Testing/EnableProposed.

Unfortunately that page illustrates Xenial and Ubuntu Unity rather than Bionic in Kubuntu. Using Discover or Muon, use Settings > More, enter your password, and ensure that Pre-release updates (bionic-proposed) is ticked in the Updates tab.

Or from the commandline, you can modify the software sources manually by adding the following line to /etc/apt/sources.list:

deb http://archive.ubuntu.com/ubuntu/ bionic-proposed restricted main multiverse universe

It is not advisable to upgrade all available packages from proposed, as many will be unrelated to this testing and may NOT have been sufficiently verified as safe updates. So the safest, though slightly more involved, method is to use Muon (or even synaptic!) to select each upgradeable package with a version containing 5.12.5-0ubuntu0.1 (5.12.5.1-0ubuntu0.1 for plasma-discover, due to an additional update).

Please report your findings on the bug report. If you need some guidance on how to structure your report, please see https://wiki.ubuntu.com/QATeam/PerformingSRUVerification. Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important bug-fix release out the door to all of our users.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

Sina Mashek: Check if external ip has changed

Planet Ubuntu - Pre, 18/05/2018 - 11:00md

Sometimes we are on connections that have a dynamic ip. This will add your current external ip to ~/.external-ip.

Each time the script is run, it will query an OpenDNS resolver with dig to grab your external IP. If it is different from what is in ~/.external-ip, it will echo the new IP. Otherwise it will return nothing.

#!/bin/sh
# Check external IP for change
# Ideal for use in a cron job
#
# Usage: sh check-ext-ip.sh
#
# Returns: Nothing if the IP is the same, or the new IP address
#          First run always returns the current address
#
# Requires dig:
#   Debian/Ubuntu: apt install dnsutils
#   Solus: eopkg install bind-utils
#   CentOS/Fedora: yum install bind-utils
#
# by Sina Mashek <sina@sinacutie.stream>
# Released under CC0 or Public Domain, whichever is supported

# Where we will store the external IP
EXT_IP="$HOME/.external-ip"

# Check if dig is installed
if [ "$(command -v dig)" = "" ]; then
    echo "This script requires 'dig' to run"

    # Load distribution release information
    . /etc/os-release

    # Check for supported release; set proper package manager and package name
    if [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then
        MGR="apt"
        PKG="dnsutils"
    elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then
        MGR="yum"
        PKG="bind-utils"
    elif [ "$ID" = "solus" ]; then
        MGR="eopkg"
        PKG="bind-utils"
    else
        echo "Please consult your package manager for the correct package"
        exit 1
    fi

    # Will run if one of the above supported distributions was found
    echo "Installing $PKG ..."
    sudo "$MGR" install "$PKG"
fi

# We check our external IP directly from a DNS request
GET_IP="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# Check if ~/.external-ip exists
if [ -f "$EXT_IP" ]; then
    # If the external IP is the same as the current IP, do nothing
    if [ "$(cat "$EXT_IP")" = "$GET_IP" ]; then
        exit 0
    fi
fi

# If it doesn't exist or is not the same, echo and save the current IP
echo "$GET_IP"
echo "$GET_IP" > "$EXT_IP"

Securing the container image supply chain

Planet Debian - Enj, 17/05/2018 - 7:00md

This article is part of a series on KubeCon Europe 2018.

KubeCon EU "Security is hard" is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container's contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.

Image provenance

A first talk, by Adrian Mouat, provided a good introduction to the broader question of "establishing image provenance and security in Kubernetes" (video, slides [PDF]). Mouat compared software to food you get from the supermarket: "you can actually tell quite a lot about the product; you can tell the ingredients, where it came from, when it was packaged, how long it's good for". He explained that "all livestock in Europe have an animal passport so we can track its movement throughout Europe and beyond". That "requires a lot of work, and time, and money, but we decided that this is was worthwhile doing so that we know [our food is] safe to eat. Can we say the same thing about the software running in our data centers?" This is especially a problem in complex systems like Kubernetes; containers have inherent security and licensing concerns, as we have recently discussed.

You should be able to easily tell what is in a container: what software it runs, where it came from, how it was created, and if it has any known security issues, he said. Mouat also expects those properties to be provable and verifiable with strong cryptographic assertions. Kubernetes can make this difficult. Mouat gave a demonstration of how, by default, the orchestration framework will allow different versions of the same container to run in parallel. In his scenario, this is because the default image pull policy (ifNotPresent) might pull a new version on some nodes and not others. This problem arises because of an inconsistency between the way Docker and Kubernetes treat image tags; the former as mutable and the latter as immutable. Mouat said that "the default semantics for pulling images in Kubernetes are confusing and dangerous." The solution here is to deploy only images with tags that refer to a unique version of a container, for example by embedding a Git hash or unique version number in the image tag. Obviously, changing the policy to AlwaysPullImages will also help in solving the particular issue he demonstrated, but will create more image churn in the cluster.

But that's only a small part of the problem; even if Kubernetes actually runs the correct image, how can you tell what is actually in that image? In theory, this should be easy. Docker seems like the perfect tool to create deterministic images that consist exactly of what you asked for: a clean and controlled, isolated environment. Unfortunately, containers are far from reproducible and the problem begins on the very first line of a Dockerfile. Mouat gave the example of a FROM debian line, which can mean different things at different times. It should normally refer to Debian "stable", but that's actually a moving target; Debian makes new stable releases once in a while, and there are regular security updates. So what first looks like a static target is actually moving. Many Dockerfiles will happily fetch random source code and binaries from the network. Mouat encouraged people to at least checksum the downloaded content to prevent basic attacks and problems.

Unfortunately, all this still doesn't get us reproducible builds since container images include file timestamps, build identifiers, and image creation time that will vary between builds, making container images hard to verify through bit-wise comparison or checksums. One solution there is to use alternative build tools like Bazel that allow you to build reproducible images. Mouat also added that there is "tension between reproducibility and keeping stuff up to date" because using hashes in manifests will make updates harder to deploy. By using FROM debian, you automatically get updates when you rebuild that container. Using FROM debian:stretch-20180426 will get you a more reproducible container, but you'll need to change your manifest regularly to follow security updates. Once we know what is in our container, there is at least a standard in the form of the OCI specification that allows attaching annotations to document the contents of containers.

Another problem is making sure containers are up to date, a "weirdly hard" question to answer according to Mouat: "why can't I ask my registry [if] there is new version of [a] tag, but as far as I know, there's no way you can do that." Mouat literally hand-waved at a slide showing various projects designed to scan container images for known vulnerabilities, introducing Aqua, Clair, NeuVector, and Twistlock. Mouat said we need a more "holistic" solution than the current whack-a-mole approach. His company is working on such a product called Trow, but not much information about it was available at the time of writing.

The long tail of the supply chain

Verifying container images is exactly the kind of problem Notary is designed to solve. Notary is a server "that allows anyone to have trust over arbitrary collections of data". In practice, that can be used by the Docker daemon as an additional check before fetching images from the registry. This allows operators to approve images with cryptographic signatures before they get deployed in the cluster.

Notary implements The Update Framework (TUF), a specification covering the nitty-gritty details of signatures, key rotation, and delegation. It keeps signed hashes of container images that can be used for verification; it can be deployed by enabling Docker's "Content Trust" in any Docker daemon, or by configuring a custom admission controller with a web hook in Kubernetes. In another talk (slides [PDF], video) Liam White and Michael Hough covered the basics of Notary's design and how it interacts with Docker. They also introduced Portieris as an admission controller hook that can implement a policy like "allow any image from the LWN Docker registry as long as it's signed by your favorite editor". Policies can be scoped by namespace as well, which can be useful in multi-tenant clusters. The downside of Portieris is that it supports only IBM Cloud Notary servers because the images need to be explicitly mapped between the Notary server and the registry. The IBM team knows only about how to map its own images, but the speakers said they were open to contributions there.

A limitation of Notary is that it looks only at the last step of the build chain; in itself, it provides no guarantees on where the image comes from, how the image was built, or what it's made of. In yet another talk (slides [PDF] video), Wendy Dembowski and Lukas Puehringer introduced a possible solution to that problem: two projects that work hand-in-hand to provide end-to-end verification of the complete container supply chain. Puehringer first introduced the in-toto project as a tool to authenticate the integrity of individual build steps: code signing, continuous integration (CI), and deployment. It provides a specification for "open and extensible" metadata that certifies how each step was performed and the resulting artifacts. This could be, at the source step, as simple as a Git commit hash or, at the CI step, a build log and artifact checksums. All steps are "chained" as well, so that you can track which commit triggered the deployment of a specific image. The metadata is cryptographically signed by role keys to provide strong attestations as to the provenance and integrity of each step. The in-toto project is supervised by Justin Cappos, who also works on TUF, so it shares some of its security properties and integrates well with the framework. Each step in the build chain has its own public/private key pair, with support for role delegation and rotation.

In-toto is a generic framework allowing a complete supply chain verification by providing "attestations" that a given artifact was created by the right person using the right source. But it does not necessarily provide the hooks to do those checks in Kubernetes itself. This is where Grafeas comes in, by providing a global API to read and store metadata. That can be package versions, vulnerabilities, license or vulnerability scans, builds, images, deployments, and attestations such as those provided by in-toto. All of those can then be used by the Kubernetes admission controller to establish a policy that regulates image deployments. Dembowski referred to this tutorial by Kelsey Hightower as an example configuration to integrate Grafeas in your cluster. According to Puehringer: "It seems natural to marry the two projects together because Grafeas provides a very well-defined API where you can push metadata into, or query from, and is well integrated in the cloud ecosystem, and in-toto provides all the steps in the chain."

Dembowski said that Grafeas is already in use at Google and it has been found useful to keep track of metadata about containers. Grafeas can keep track of what each container is running, who built it, when (sometimes vulnerable) code was deployed, and make sure developers do not ship containers built on untrusted development machines. This can be useful when a new vulnerability comes out and administrators scramble to figure out if or where affected code is deployed.

Puehringer explained that in-toto's reference implementation is complete and he is working with various Linux distributions to get them to use link metadata to have their package managers perform similar verification.

Conclusion

The question of container trust hardly seems resolved at all; the available solutions are complex and would be difficult to deploy for Kubernetes rookies like me. However, it seems that Kubernetes could make small improvements to improve security and auditability, the first of which is probably setting the image pull policy to a more reasonable default. In his talk, Mouat also said it should be easier to make Kubernetes fetch images only from a trusted registry instead of allowing any arbitrary registry by default.

Beyond that, cluster operators wishing to have better control over their deployments should start looking into setting up Notary with an admission controller, maybe Portieris if they can figure out how to make it play with their own Notary servers. Considering the apparent complexity of Grafeas and in-toto, I would assume that those would probably be reserved only to larger "enterprise" deployments but who knows; Kubernetes may be complex enough as it is that people won't mind adding a service or two in there to improve its security. Keep in mind that complexity is an enemy of security, so operators should be careful when deploying solutions unless they have a good grasp of the trade-offs involved.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Mathieu Trudel: Building a local testing lab with Ubuntu, MAAS and netplan

Planet Ubuntu - Enj, 17/05/2018 - 12:47pd
Overview

I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible, and keeping costs to a minimum. For one thing, systems needed to be mobile if possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

One of the core aspects of this lab is its use of the network. I have prior experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that) and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall and have an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even when not supporting VLANs (which means VLAN 1 is set to be the native VLAN and to be untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room). This means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything comes from cheap eBay listings or salvaged, reused equipment.

The Cisco hardware was specifically selected because I had prior experience with them, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware could do -- VLANs aren't absolutely required, but given many network ports on a switch, it tends to avoid requiring multiple switches instead.

My main DNS / DHCP / boot server is a raspberry pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks: using home.example.com or staging.example.com, or even maas.example.com as a domain name following the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

Setting up the network hardware

I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need access to the internet go through the MAAS internal proxy.

For simplicity, the ASA is always .1 in any subnet, the switch is .2 when it is required (and was made accessible over serial cable from the MAAS server). The raspberrypi is always .5, and the MAAS server is always .25. DHCP ranges were designed to reserve anything .25 and below for static assignments on the staging and sandbox networks, and since I use a /23 subnet for home, half is for static assignments, and the other half is for DHCP there.

MAAS server hardware setup

Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
            optional: true
    bridges:
        maasbr0:
            addresses: [ 10.3.99.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan15 ]
        staging:
            addresses: [ 10.3.98.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan10 ]
    vlans:
        vlan15:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 15
            link: enp0s25
        vlan10:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 10
            link: enp0s25
    version: 2

Both VLANs are behind bridges so as to allow setting up virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):

<network>
  <name>maasbr0</name>
  <forward mode="bridge"/>
  <bridge name="maasbr0"/>
</network>
Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set:
libvirtd_opts="-l"
in /etc/default/libvirtd, and then restart the libvirtd service.

dnsmasq server

The raspberrypi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is in multiple files, but here are some of the important parts:

dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
dhcp-hostsdir=/depot/dnsmasq/reservations
dhcp-authoritative
dhcp-fqdn
# copied from maas, specify boot files per-arch.
dhcp-boot=tag:x86_64-efi,bootx64.efi
dhcp-boot=tag:i386-pc,pxelinux
dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
# pass search domains everywhere, it's easier to type short names
dhcp-option=119,home.example.com,staging.example.com,maas.example.com
domain=example.com
no-hosts
addn-hosts=/depot/dnsmasq/dns/
domain-needed
expand-hosts
no-resolv
# home network
domain=home.example.com,10.3.0.0/23
auth-zone=home.example.com,10.3.0.0/23
dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
# specify the default gw / next router
dhcp-option=tag:home,3,10.3.0.1
# define the tftp server
dhcp-option=tag:home,66,10.3.0.5
# staging is configured as above, but on 10.3.98.0/24.
# maas.example.com: "isolated" maas network.
# send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
server=/maas.example.com/10.3.99.25
# very basic tftp config
enable-tftp
tftp-root=/depot/tftp
tftp-no-fail
# set some "upstream" nameservers for general name resolution.
server=8.8.8.8
server=8.8.4.4

DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:
de:ad:be:ef:ca:fe,10.3.0.21
I put one per file, with meaningful filenames. This helps with debugging and making changes when network cards are changed, etc. The names used for the files do not match DNS names, but instead are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.
Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:
10.3.0.21 izanagi
Again, this is used so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.

Installing MAAS

At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.
The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.
Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.
This is necessary if you do not want MAAS to fully manage all of the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network since I would keep the server offline if I do not need it (and the home network needs to keep working if I'm away).
Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that enabled by default, with the archive pinned to avoid accidentally installing these packages. Given that I also do netplan development and might try things that would break the network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:

late_commands:
[...]
  pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
  pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
  pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
apt:
  sources:
    proposed.list:
      source: deb $MIRROR {{release}}-proposed main universe
write_files:
  userconfig:
    path: /etc/cloud/cloud.cfg.d/99-users.cfg
    content: |
      system_info:
        default_user:
          lock_passwd: False
          plain_text_passwd: [REDACTED]
      users:
        - default
        - name: mtrudel
          groups: sudo
          gecos: Matt
          shell: /bin/bash
          lock-passwd: False
          passwd: [REDACTED]

The pinning_ entries are simply added to the end of the "late_commands" section.
For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:
maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"
The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.

Setting up the client

Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying the ubuntu user (or another user you already configured):
ssh ubuntu@vocal-toad.maas.example.com
There is an inconvenience with using MAAS to control virtual machines like this: they are easy to reinstall, so their host key hashes will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here, I'm sharing the relevant parts of the configuration file I use:
CanonicalDomains home.example.com
CanonicalizeHostname yes
CanonicalizeFallbackLocal no
HashKnownHosts no
UseRoaming no
# canonicalize* options seem to break github for some reason
# I haven't spent much time looking into it, so let's make sure it will go through the
# DNS resolution logic in SSH correctly.
Host github.com
  Hostname github.com.
Host *.maas
  Hostname %h.example.com
Host *.staging
  Hostname %h.example.com
Host *.maas.example.com
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host *.staging.example.com
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
Host *.lxd
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p
As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.
The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.
Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:
vocal-toad.maas

And the system will be reachable, and you will not have any warning about known host hashes (but do note that this is specific to a sandbox environment; you definitely want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think).
It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:
#!/bin/sh
ssh $@.maas
I saved this as "sandbox" in ~/bin and made it executable.
And with this, the lab is ready.
Usage

To connect to a deployed system, one can now do the following:

$ sandbox vocal-toad
Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
[...]
ubuntu@vocal-toad:~$
ubuntu@vocal-toad:~$ id mtrudel
uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)
Mobility

One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").
All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form. The smaller switch is rated for 0.5amp, and two laptops do not pull very much power.
Next steps

One of the issues that remains with this setup is that it is limited to either starting MAAS images or starting images that are custom built and hooked up to the raspberry pi, which leads to a high effort to integrate new images:
  • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
  • Automate customizing installed packages based on tags applied to the machines.
    • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
    • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
      • This could be done using chef or puppet, but will require setting up the infrastructure for it.
    • Integrate automatic installation of snaps.
  • Load new images into the raspberry pi automatically for netboot / preseeded installs
    • I have scripts for this, but they will take time to adapt
    • Space on such a device is at a premium, there must be some culling of old images
