
Feed aggregator

Julita Inca: Linux on Supercomputers

Planet GNOME - Wed, 06/12/2017 - 4:53am

Today, I gave a presentation about Linux on Supercomputers at the Faculty of Industrial Engineering of UNMSM for its anniversary. The event was published on the School's intranet.

I started by presenting the project of Satoshi Sekiguchi from Japan, who is in charge of ABCI, a supercomputer that aims to be number one in the list of supercomputers around the world. The project is expected to be completed in April 2018 to help with earthquake simulations, with a calculation speed of 130 petaflops. The top 5 of the top500 list:

How supercomputers are measured with the Linpack tool, and why Linux has been used in the most powerful supercomputers, were also explained. The history of supercomputers and related technology were further topics during the talk. I also emphasized the importance of gathering a multidisciplinary group in a supercomputer project or in any other parallelized computer architecture.

Thanks to the organizers for contacting me to give this rewarding talk. Linux is important for scientific purposes as well as for education, in Peru and around the world.


Filed under: Education, Events, FEDORA, GNOME, GNU/Linux/Open Source, τεχνολογια :: Technology Tagged: ABCI, Facultad de Ingenieria Industrial, Julita Inca, Julita Inca Chiroque, linux, supercomputer talk, supercomputers, top500, UNMSM

Creating a blog with pelican and Github pages

Planet Debian - Tue, 05/12/2017 - 11:30pm

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and with creating a Python virtual environment for development. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

It will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

The time zone should be specified as a TZ time zone name (full list here: List of tz database time zones).
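If you need to adjust it after the quickstart, the answer ends up as the TIMEZONE setting in pelicanconf.py, the same configuration file we will edit later in this tutorial. A minimal sketch, assuming the America/Sao_Paulo answer given above:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes
TIMEZONE = 'America/Sao_Paulo'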

Now, go ahead and create your first blog post! You might want to open the project folder in your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata that identifies the Title, Date, Category and more of the post, before you start with the content itself, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes Title: My first post Date: 2017-11-26 10:01 Modified: 2017-11-27 12:30 Category: misc Tags: first, misc Slug: My-first-post Authors: Your name Summary: What does your post talk about? Write here. This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks.

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit your site in your favorite web browser by typing localhost:8000 in the address bar.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes ![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes FEED_ALL_ATOM = 'feeds/all.atom.xml' FEED_ALL_RSS = 'feeds/all.rss.xml' AUTHOR_FEED_RSS = 'feeds/%s.rss.xml' RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.

If you hit an error at any step of the way, please reread this tutorial and try to see in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Renata https://rsip22.github.io/blog/ Renata's blog

nuttx-scheme

Planet Debian - Tue, 05/12/2017 - 11:25pm
Scheme For NuttX

Last fall, I built a tiny lisp interpreter for AltOS. That was fun, but had some constraints:

  • Yet another lisp-like language
  • Ran only on AltOS, not exactly a widely used operating system.

To fix the first problem, I decided to try and just implement scheme. The language I had implemented wasn't far off; it had lexical scoping and call-with-current-continuation after all. The rest is pretty simple stuff.

To fix the second problem, I ported the interpreter to NuttX. NuttX is a modest operating system for 8 to 32-bit microcontrollers with a growing community of developers.

I downloaded the most recent Scheme spec, the Revised⁷ Report, which is the 'small language' follow on to the contentious Revised⁶ Report.

Converting ao-lisp to ao-scheme

Reading through the spec, it was clear there were a few things I needed to deal with to provide something that looked like scheme:

  • quasiquote
  • syntax-rules
  • function names
  • boolean type

Quasiquote turned out to be fun -- the spec described it in terms of a set of list forms, so I hacked up the reader to convert the convenient syntax using ` , and ,@ into lists and wrote a macro to emit construction code from the generated lists.

Syntax-rules is a 'nicer' way to write macros, and I haven't implemented it yet. There's nothing it can do which the underlying full macros cannot, so I'm planning on just writing it in scheme rather than having a pile more C code.

Scheme has a separate boolean type; rather than using the empty list, (), for false, it uses #f and has everything else be 'true'. Adding that wasn't hard, just tedious, as I had to work through every place that used boolean values and switch it to using #f or #t.

There were also a pile of random function name swaps and another bunch of new functions to write.

All in all, not a huge amount of work, and now the language looks a lot more like scheme.

Adding more number types

The original language had only integers, and those were only 14 bits wide. To make the language a bit more usable, I added 24-bit integers as well, along with 32-bit floats. Then I added automatic promotion between representations and the usual scheme tests for exactness. This added a bit to the footprint, and maybe I should make it optional.

Porting to NuttX

This was the easiest piece of the process -- NuttX offers a posix-like API, just like AltOS. Getting it to build was actually a piece of cake. The only part requiring new code was the lack of any command line editing or echo -- I ended up using readline to get that to work.

I was pleased that all of the language changes I made didn't significantly impact the footprint of the resulting system. I built NuttX for the stm32f4-discovery board, compiling in basic and then comparing with and without scheme:

Before:

$ size nuttx
   text    data     bss     dec     hex filename
 183037     172    4176  187385   2dbf9 nuttx

After:

$ size nuttx
   text    data     bss     dec     hex filename
 213197     188   22672  236057   39a19 nuttx

The increase in text includes 11kB of built-in lisp code, so that when the interpreter starts, you already have all of the necessary lisp code loaded that turns the bare interpreter into a full scheme system. That makes the core interpreter around 20kB of code, which is nice and compact (at least for scheme, I think).
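As a rough sanity check of that figure from the two size outputs above: 213197 - 183037 = 30160 bytes of additional text, and subtracting the roughly 11kB of embedded lisp code leaves on the order of 19-20kB for the core interpreter.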

The BSS space includes the heap; this can be set to any size you like. It would probably be good to have that allocated on the fly instead of used even when the interpreter isn't running.

Where's the Code

I've pushed the code here:

$ git clone git://keithp.com/git/apps

Future Work

There's more work to complete the language support; here's some tasks needing attention at some point:

  • No vectors or bytevectors
  • Characters are just numbers
  • No dynamic-wind or exceptions
  • No environments
  • No ports
  • No syntax-rules
  • No record types
  • No libraries
  • Heap allocated from BSS
A Sample Application!

Here's towers of hanoi in scheme for nuttx:

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <keithp@keithp.com>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
; General Public License for more details.
;

; ANSI control sequences

(define (move-to col row)
  (for-each display (list "\033[" row ";" col "H"))
  )

(define (clear)
  (display "\033[2J")
  )

(define (display-string x y str)
  (move-to x y)
  (display str)
  )

(define (make-piece num max)
  ; A piece for position 'num'
  ; is num + 1 + num stars
  ; centered in a field of max *
  ; 2 + 1 characters with spaces
  ; on either side. This way,
  ; every piece is the same
  ; number of characters
  (define (chars n c)
    (if (zero? n)
        ""
        (+ c (chars (- n 1) c))
        )
    )
  (+ (chars (- max num 1) " ")
     (chars (+ (* num 2) 1) "*")
     (chars (- max num 1) " ")
     )
  )

(define (make-pieces max)
  ; Make a list of numbers from 0 to max-1
  (define (nums cur max)
    (if (= cur max)
        ()
        (cons cur (nums (+ cur 1) max))
        )
    )
  ; Create a list of pieces
  (map (lambda (x) (make-piece x max)) (nums 0 max))
  )

; Here's all of the towers of pieces
; This is generated when the program is run
(define towers ())

; position of the bottom of
; the stacks set at runtime
(define bottom-y 0)
(define left-x 0)

(define move-delay 25)

; Display one tower, clearing any
; space above it

(define (display-tower x y clear tower)
  (cond ((= 0 clear)
         (cond ((not (null? tower))
                (display-string x y (car tower))
                (display-tower x (+ y 1) 0 (cdr tower))
                )
               )
         )
        (else
         (display-string x y " ")
         (display-tower x (+ y 1) (- clear 1) tower)
         )
        )
  )

; Position of the top of the tower on the screen
; Shorter towers start further down the screen

(define (tower-pos tower)
  (- bottom-y (length tower))
  )

; Display all of the towers, spaced 20 columns apart

(define (display-towers x towers)
  (cond ((not (null? towers))
         (display-tower x 0 (tower-pos (car towers)) (car towers))
         (display-towers (+ x 20) (cdr towers)))
        )
  )

; Display all of the towers, then move the cursor
; out of the way and flush the output

(define (display-hanoi)
  (display-towers left-x towers)
  (move-to 1 23)
  (flush-output)
  (delay move-delay)
  )

; Reset towers to the starting state, with
; all of the pieces in the first tower and the
; other two empty

(define (reset-towers len)
  (set! towers (list (make-pieces len) () ()))
  (set! bottom-y (+ len 3))
  )

; Move a piece from the top of one tower
; to the top of another

(define (move-piece from to)
  ; references to the cons holding the two towers
  (define from-tower (list-tail towers from))
  (define to-tower (list-tail towers to))
  ; stick the car of from-tower onto to-tower
  (set-car! to-tower (cons (caar from-tower) (car to-tower)))
  ; remove the car of from-tower
  (set-car! from-tower (cdar from-tower))
  )

; The implementation of the game

(define (_hanoi n from to use)
  (cond ((= 1 n)
         (move-piece from to)
         (display-hanoi)
         )
        (else
         (_hanoi (- n 1) from use to)
         (_hanoi 1 from to use)
         (_hanoi (- n 1) use to from)
         )
        )
  )

; A pretty interface which
; resets the state of the game,
; clears the screen and runs
; the program

(define (hanoi len)
  (reset-towers len)
  (clear)
  (display-hanoi)
  (_hanoi len 0 1 2)
  #t
  )

Keith Packard http://keithp.com/blog/ blog

Michael Meeks: 2017-12-05 Tuesday.

Planet GNOME - Tue, 05/12/2017 - 10:00pm
  • Mail; admin. Lunch with J. Commercial call. Spent much of the day doing the things that are supposed to be quick & get done before you work on larger tasks - but somehow fill the time.
  • Out to the Hopbine in Cambridge in the evening with J. for a lovely Collabora Christmas party, good to catch up with the local part of the team.

Lubuntu Blog: Join Phabricator

Planet Ubuntu - Tue, 05/12/2017 - 8:54pm
Inspired by the wonderful KDE folks, Lubuntu has created a Phabricator instance for our project. Phabricator is an open source, version control system-agnostic collaborative development environment similar in some ways to GitHub, GitLab, and perhaps a bit more remotely, like Launchpad. We were looking for tools to organize, coordinate, and collaborate, especially across teams within […]

back on the Linux desktop

Planet Debian - Tue, 05/12/2017 - 4:35pm

As forecast, I've switched from Mac back to Linux on the Desktop. I'm using a work-supplied Thinkpad T470s, which is a nice form-factor machine (the T450s was the first Thinkpad to widen my perspective away from just looking at the X series).

I've installed Debian to get started and ended up with GNOME 3 as the desktop (I was surprised not to be prompted for a choice in the installer, but on reflection that makes sense: I did a non-networked install from the GNOME flavour of the live DVD). So for the time being I'm going to stick to GNOME 3 and see what's new/better/worse than last time, but once my replacement SSD arrives I can revisit.

I haven't made much progress on the sticking points I identified in my last post. I'm hoping to get 1pass up and running in the interim to read my 1Password DB so I can get by until I've found a replacement password manager that I like.

Most of my desktop configuration steps I have captured in some Ansible playbooks. I'm looking at Ansible after a long break from using puppet, and there are things I like and things I don't. I've also been exploring ownCloud for personal file sharing, and despite a couple of warning signs (urgh PHP, the official Debian package was dropped) I'm finding it really useful, in particular for sharing stuff with family. I might write more about both of those later.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Finding bugs in Haskell code by proving it

Planet Debian - Tue, 05/12/2017 - 3:17pm

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file” represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out it is my dissenting opinion to consider this a valid verification approach. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
  end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them meaning using Coq’s type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
  end.

Definition sem is := match is with ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around that while verifying this, so I formalize the termination argument above

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how I can actually do induction over a locally defined recursive function without too ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:

    f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality). This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq is actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Neither problem would exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: why is the Coq standard library so incomplete (compared to, say, Isabelle’s), requiring me to prove so many lemmas about basic functions on Ensembles?

Joachim Breitner mail@joachim-breitner.de nomeata’s mind shares

Reproducible Builds: Weekly report #136

Planet Debian - Tue, 05/12/2017 - 3:10pm

Here's what happened in the Reproducible Builds effort between Sunday, November 26 and Saturday, December 2, 2017:

Media coverage

Arch Linux imap key leakage

A security issue was found in the imap package in Arch Linux thanks to the reproducible builds effort in that distribution.

Due to a hardcoded key-generation routine in the build() step of imap's PKGBUILD (the standard packaging file for Arch Linux packages), a default secret key was generated and leaked on all imap installations. This was promptly reviewed, confirmed and fixed by the package maintainers.

This mirrors similar security issues found in Debian, such as #833885.

Debian packages reviewed and fixed, and bugs filed

In addition, 73 FTBFS bugs were detected and reported by Adrian Bunk.

Reviews of unreproducible Debian packages

83 package reviews have been added, 41 have been updated and 33 have been removed this week, adding to our knowledge about identified issues.

1 issue type was updated:

LEDE / OpenWrt packages updates:

diffoscope development

reprotest development

Version 0.7.4 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks as well as new ones from:

reproducible-website development

tests.reproducible-builds.org

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres-Arias, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Simos Xenitellis: How to migrate LXD from DEB/PPA package to Snap package

Planet Ubuntu - Tue, 05/12/2017 - 2:35pm

You are using LXD from a Linux distribution package and you would like to migrate your existing installation to the Snap LXD package. Let’s do the migration together!

This post is not about live container migration in LXD. Live container migration is about moving a running container from one LXD server to another.

If you do not have LXD installed already, then look for another guide about the installation and set up of LXD from a snap package. A fresh installation of LXD as a snap package is easy.

Note that from the end of 2017, LXD will be generally distributed as a Snap package. If you run LXD 2.0.x from Ubuntu 16.04, you are not affected by this.

Prerequisites

Let’s check the version of LXD (Linux distribution package).

$ lxd --version
2.20
$ apt policy lxd
lxd:
  Installed: 2.20-0ubuntu4~16.04.1~ppa1
  Candidate: 2.20-0ubuntu4~16.04.1~ppa1
  Version table:
 *** 2.20-0ubuntu4~16.04.1~ppa1 500
        500 http://ppa.launchpad.net/ubuntu-lxc/lxd-stable/ubuntu xenial/main amd64 Packages
        100 /var/lib/dpkg/status
     2.0.11-0ubuntu1~16.04.2 500
        500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
     2.0.2-0ubuntu1~16.04.1 500
        500 http://archive.ubuntu.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
        500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

In this case, we run LXD version 2.20, and it was installed from the LXD PPA repository.

If you did not enable the LXD PPA repository, you would have an LXD version 2.0.x, the version that was released with Ubuntu 16.04 (listed in the version table above). LXD version 2.0.11 is currently the default version for Ubuntu 16.04.3 and will be supported in that form until 2016 + 5 = 2021. LXD version 2.0.0 is the original LXD version in Ubuntu 16.04 (when originally released) and LXD version 2.0.2 is the security update of that LXD 2.0.0.

We are migrating to the LXD snap package. Let’s see how many containers will be migrated.

$ lxc list | grep RUNNING | wc -l
6

This count will serve as a good test later, to check whether something went horribly wrong during the migration.

Let’s check the available incoming LXD snap packages.

$ snap info lxd
name:      lxd
summary:   System container manager and API
publisher: canonical
contact:   https://github.com/lxc/lxd/issues
description: |
  LXD is a container manager for system containers. It offers a REST API to remotely manage
  containers over the network, using an image based workflow and with support for live migration.

  Images are available for all Ubuntu releases and architectures as well as for a wide number of
  other Linux distributions.

  LXD containers are lightweight, secure by default and a great alternative to virtual machines.
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
channels:
  stable:        2.20        (5182) 44MB -
  candidate:     2.20        (5182) 44MB -
  beta:          ↑
  edge:          git-b165982 (5192) 44MB -
  2.0/stable:    2.0.11      (4689) 20MB -
  2.0/candidate: 2.0.11      (4770) 20MB -
  2.0/beta:      ↑
  2.0/edge:      git-03e9048 (5131) 19MB -

There are several channels to choose from. The stable channel has LXD 2.20, just like the candidate channel. When the LXD 2.21 snap is ready, it will first be released in the candidate channel and stay there for 24 hours. If everything goes well, it will get propagated to the stable channel. LXD 2.20 was released some time ago, that’s why both channels have the same version (at the time of writing this blog post).

There is the edge channel, which has the auto-compiled version from the git source code repository. It is handy to use this channel if you know that a specific fix (that affects you) has been added to the source code, and you want to verify that it actually fixed the issue. Note that the beta channel is not used, therefore it inherits whatever is found in the channel below; the edge channel.

Finally, there are these 2.0/ tagged channels that correspond to the stock 2.0.x LXD versions in Ubuntu 16.04. It looks like those who use the 5-year supported LXD (because of Ubuntu 16.04) have the option to switch to a snap version after all.

Installing the LXD snap

Install the LXD snap.

$ snap install lxd
lxd 2.20 from 'canonical' installed

Migrating to the LXD snap

Now, the LXD snap is installed, but the DEB/PPA package LXD is the one that is running. We need to run the migration script lxd.migrate, which will move the data from the DEB/PPA version over to the Snap version of LXD. In practical terms, it will move files out of /var/lib/lxd (the old DEB/PPA LXD location) and into the directory used by the LXD snap.

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks

=== Source server
LXD version: 2.20
LXD PID: 4414
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

=== Destination server
LXD version: 2.20
LXD PID: 30329
Resources:
  Containers: 0
  Images: 0
  Networks: 0
  Storage pools: 0

The migration process will shut down all your containers then move your data to the destination LXD.
Once the data is moved, the destination LXD will start and apply any needed updates.
And finally your containers will be brought back to their previous state, completing the migration.

Are you ready to proceed (yes/no) [default=no]? yes
=> Shutting down the source LXD
=> Stopping the source LXD units
=> Stopping the destination LXD unit
=> Unmounting source LXD paths
=> Unmounting destination LXD paths
=> Wiping destination LXD clean
=> Moving the data
=> Moving the database
=> Backing up the database
=> Opening the database
=> Updating the storage backends
=> Starting the destination LXD
=> Waiting for LXD to come online

=== Destination server
LXD version: 2.20
LXD PID: 2812
Resources:
  Containers: 6
  Images: 3
  Networks: 1
  Storage pools: 1

The migration is now complete and your containers should be back online.
Do you want to uninstall the old LXD (yes/no) [default=no]? yes

All done. You may need to close your current shell and open a new one to have the "lxc" command work.

Testing the migration to the LXD snap

Let’s check that the containers managed to start successfully,

$ lxc list | grep RUNNING | wc -l
6

But let’s check that we can still run Firefox from an LXD container, according to the following post,

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Yep, all good. The artifact in the middle (over the c in packaged) is the mouse cursor in wait mode, while GNOME Screenshot is about to take the screenshot. I did not find a report about that in the GNOME Screenshot bugzilla. It is a minor issue and there are several workarounds (1. try one more time, 2. use timer screenshot).

Let’s do some actual testing,

Yep, works as well.

Exploring the LXD snap commands

Let’s type lxd and press Tab.

$ lxd<Tab>
lxd            lxd.check-kernel  lxd.migrate
lxd.benchmark  lxd.lxc

There are two commands left to try out, lxd.check-kernel and lxd.benchmark. The snap package is called lxd, therefore any additional commands are prepended with lxd.. lxd is the actual LXD server executable. lxd.lxc is the lxc command that we use for all LXD actions. The LXD snap package makes the appropriate symbolic link so that we just need to write lxc instead of lxd.lxc.

Trying out lxd.check-kernel

Let’s run lxd.check-kernel.

$ sudo lxd.check-kernel
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /lib/modules/4.10.0-40-generic/build/.config
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled

--- Control groups ---
Cgroups: enabled

Cgroup v1 mount points:
/sys/fs/cgroup/systemd
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/freezer
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/blkio

Cgroup v2 mount points:

Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
Macvlan: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
Vlan: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
Bridges: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
Advanced netfilter: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_NF_NAT_IPV4: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_NF_NAT_IPV6: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabledmodprobe: ERROR: missing parameters. See -h. , not loaded
FUSE (for use with lxcfs): enabledmodprobe: ERROR: missing parameters. See -h. , not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /snap/lxd/5182/bin/lxc-checkconfig

This is an important tool if you have issues getting LXD to run. In this example, the Misc section shows some errors about missing parameters. I suppose they are issues with the tool itself, as the appropriate kernel modules are indeed loaded. My installation of the LXD snap works okay.

Trying out lxd.benchmark

Let’s try out the command without parameters.

$ lxd.benchmark
Usage: lxd-benchmark launch [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
       lxd-benchmark start [--parallel=COUNT]
       lxd-benchmark stop [--parallel=COUNT]
       lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
    Number of containers to create
--freeze (= false)
    Freeze the container right after start
--image (= "ubuntu:")
    Image to use for the test
--parallel (= -1)
    Number of threads to use
--privileged (= false)
    Use privileged containers
--report-file (= "")
    A CSV file to write test file to. If the file is present, it will be appended to.
--report-label (= "")
    A label for the report entry. By default, the action is used.
--start (= true)
    Start the container after creation

error: A valid action (launch, start, stop, delete) must be passed.
Exit 1

It is a benchmark tool that lets you create many containers; the same tool can then be used to remove those containers again. There is an issue with the default number of containers, 100, which is too high. If you run lxd-benchmark launch without specifying a smaller count, you will mess up your LXD installation because you will run out of memory and maybe disk space. Looking for a bug report… it got buried in the pull request https://github.com/lxc/lxd/pull/3857 and needs to be reopened. Ideally, the default count should be 1, letting the user knowingly select a bigger number. Here is the new pull request: https://github.com/lxc/lxd/pull/4074

Let’s try carefully lxd-benchmark.

$ lxd.benchmark launch --count 3
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

Test variables:
  Container count: 3
  Container mode: unprivileged
  Startup mode: normal startup
  Image: ubuntu:
  Batches: 0
  Batch size: 4
  Remainder: 3

[Dec  5 13:24:26.044] Found image in local store: 5f364e2e3f460773a79e9bec2edb5e993d236f035f70267923d43ab22ae3bb62
[Dec  5 13:24:26.044] Batch processing start
[Dec  5 13:24:28.817] Batch processing completed in 2.773s

It took just 2.8s to launch them on this computer. lxd-benchmark launched 3 containers, with names benchmark-%d. Obviously, refrain from using the word benchmark as a name for your own containers. Let's see these containers:

$ lxc list --columns ns4
+---------------+---------+----------------------+
|     NAME      |  STATE  |         IPV4         |
+---------------+---------+----------------------+
| benchmark-1   | RUNNING | 10.52.251.121 (eth0) |
+---------------+---------+----------------------+
| benchmark-2   | RUNNING | 10.52.251.20 (eth0)  |
+---------------+---------+----------------------+
| benchmark-3   | RUNNING | 10.52.251.221 (eth0) |
+---------------+---------+----------------------+
...

Let’s stop them, and finally remove them.

$ lxd.benchmark stop
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

[Dec  5 13:31:16.517] Stopping 3 containers
[Dec  5 13:31:16.517] Batch processing start
[Dec  5 13:31:20.159] Batch processing completed in 3.642s

$ lxd.benchmark delete
Test environment:
  Server backend: lxd
  Server version: 2.20
  Kernel: Linux
  Kernel architecture: x86_64
  Kernel version: 4.10.0-40-generic
  Storage backend: zfs
  Storage version: 0.6.5.9-2
  Container backend: lxc
  Container version: 2.1.1

[Dec  5 13:31:24.902] Deleting 3 containers
[Dec  5 13:31:24.902] Batch processing start
[Dec  5 13:31:25.007] Batch processing completed in 0.105s

Note that the lxd-benchmark actions follow the naming of the lxc actions (launch, start, stop and delete).

Troubleshooting

Error “Target LXD already has images”

$ sudo lxd.migrate
=> Connecting to source server
=> Connecting to destination server
=> Running sanity checks
error: Target LXD already has images, aborting.
Exit 1

This means that the snap version of LXD has some images and it is not clean. lxd.migrate requires the snap version of LXD to be clean. Solution: remove the LXD snap and install again.

$ snap remove lxd
lxd removed
$ snap install lxd
lxd 2.20 from 'canonical' installed

Which “lxc” command am I running?

This is the lxc command of the DEB/PPA package,

$ which lxc
/usr/bin/lxc

This is the lxc command from the LXD snap package.

$ which lxc
/snap/bin/lxc

If you installed the LXD snap but you do not see the /snap/bin/lxc executable, it could be an artifact of your Unix shell. You may have to close that shell window and open a new one.

Error “bash: /usr/bin/lxc: No such file or directory”

If you get the following,

$ which lxc
/snap/bin/lxc

but the lxc command is not found,

$ lxc
bash: /usr/bin/lxc: No such file or directory
Exit 127

then you must close the terminal window and open a new one.

Note: if you loudly refuse to close the current terminal window, you can just type

$ hash -r

which will refresh the list of executables from the $PATH. Applies to bash, zsh. Use rehash if on *csh.

 

Simos Xenitellis https://blog.simos.info/

Is the short movie «Empty Socks» from 1927 in the public domain or not?

Planet Debian - Tue, 05/12/2017 - 12:30pm

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian copyright law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: the film was released in November or December 1927 (sources disagree), and presumably its copyright was registered that year. At that time, right holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and thus it entered the public domain in 1956 (1927 + 28 = 1955, so any renewal would have had to be filed in 1955). Yet others claim the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in the Catalog of Copyright Entries, Ser. 3, pt. 12-13, v. 9-12, 1955-1958, Motion Pictures, available from the University of Pennsylvania: neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

4.14.4: stable

Kernel Linux - Tue, 05/12/2017 - 11:26am
Version: 4.14.4 (stable)
Released: 2017-12-05
Source: linux-4.14.4.tar.xz
PGP Signature: linux-4.14.4.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.14.4

4.9.67: longterm

Kernel Linux - Tue, 05/12/2017 - 11:24am
Version: 4.9.67 (longterm)
Released: 2017-12-05
Source: linux-4.9.67.tar.xz
PGP Signature: linux-4.9.67.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.9.67

4.4.104: longterm

Kernel Linux - Tue, 05/12/2017 - 11:22am
Version: 4.4.104 (longterm)
Released: 2017-12-05
Source: linux-4.4.104.tar.xz
PGP Signature: linux-4.4.104.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-4.4.104

3.18.86: longterm

Kernel Linux - Tue, 05/12/2017 - 11:20am
Version: 3.18.86 (EOL) (longterm)
Released: 2017-12-05
Source: linux-3.18.86.tar.xz
PGP Signature: linux-3.18.86.tar.sign
Patch: full (incremental)
ChangeLog: ChangeLog-3.18.86

Build a Privacy-respecting and Threat-blocking DNS Server

LinuxSecurity.com - Tue, 05/12/2017 - 11:11am
LinuxSecurity.com: DNS blackholing can be a powerful technique for blocking malware, ransomware and phishing on your home network. Although numerous public DNS services boast threat-blocking features, these providers cannot guarantee you total privacy.

DR.CHECKER - A Soundy Vulnerability Detection Tool for Linux Kernel Drivers

LinuxSecurity.com - Tue, 05/12/2017 - 11:09am
LinuxSecurity.com: DR.CHECKER: A Soundy Vulnerability Detection Tool for Linux Kernel Drivers. Tested on Ubuntu >= 14.04.5 LTS.

BoopSuite - A Suite of Tools for Wireless Auditing and Security Testing

LinuxSecurity.com - Tue, 05/12/2017 - 11:07am
LinuxSecurity.com: BoopSuite is an up and coming suite of wireless tools designed to be easy to use and powerful in scope, that support both the 2 and 5 GHz spectrums. Written purely in python. A handshake sniffer (CLI and GUI), a monitor mode enabling script and a deauth script are all parts of this suite with more to come.

Daniel G. Siegel: summing up 93

Planet GNOME - Tue, 05/12/2017 - 2:22am

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it – and much more – straight in your inbox.

The future of humanity and technology, by Stephen Fry

Above all, be prepared for the bullshit, as AI is lazily and inaccurately claimed by every advertising agency and app developer. Companies will make nonsensical claims like "our unique and advanced proprietary AI system will monitor and enhance your sleep" or "let our unique AI engine maximize the value of your stock holdings". Yesterday they would have said "our unique and advanced proprietary algorithms" and the day before that they would have said "our unique and advanced proprietary code". But let's face it, they're almost always talking about the most basic software routines. The letters A and I will become degraded and devalued by overuse in every field in which humans work. Coffee machines, light switches, christmas trees will be marketed as AI proficient, AI savvy or AI enabled. But despite this inevitable opportunistic nonsense, reality will bite.

If we thought the Pandora's jar that ruined the utopian dream of the internet contained nasty creatures, just wait till AI has been overrun by the malicious, the greedy, the stupid and the maniacal. We sleepwalked into the internet age and we're now going to sleepwalk into the age of machine intelligence and biological enhancement. How do we make sense of so much futurology screaming in our ears?

Perhaps the most urgent need might seem counterintuitive. While the specialist bodies and institutions I've mentioned are necessary we need surely to redouble our efforts to understand who we humans are before we can begin to grapple with the nature of what machines may or may not be. So the arts and humanities strike me as more important than ever. Because the more machines rise, the more time we will have to be human and fulfill and develop to their uttermost, our true natures.

an outstanding lecture exploring the impact of technology on humanity by looking back at human history in order to understand the present and the future.

We're building a dystopia just to make people click on ads, by Zeynep Tufekci

We use digital platforms because they provide us with great value. I use Facebook to keep in touch with friends and family around the world. I've written about how crucial social media is for social movements. I have studied how these technologies can be used to circumvent censorship around the world. But it's not that the people who run Facebook or Google are maliciously and deliberately trying to make the world more polarized and encourage extremism. I read the many well-intentioned statements that these people put out. But it's not the intent or the statements people in technology make that matter, it's the structures and business models they're building. And that's the core of the problem.

So what can we do? We need to restructure the whole way our digital technology operates. Everything from the way technology is developed to the way the incentives, economic and otherwise, are built into the system. We have to mobilize our technology, our creativity and yes, our politics so that we can build artificial intelligence that supports us in our human goals but that is also constrained by our human values. And I understand this won't be easy. We might not even easily agree on what those terms mean. But if we take seriously how these systems that we depend on for so much operate, I don't see how we can postpone this conversation anymore. We need a digital economy where our data and our attention is not for sale to the highest-bidding authoritarian or demagogue.

no new technology has only a one-sided effect. every technology is always both a burden and a blessing. not either or, but this and that. what bothers me is that we seem to ignore the negative impact of new technologies, justifying this attitude with their positive aspects.

the bullet hole misconception, by daniel g. siegel

If you're never exposed to new ideas and contexts, if you grow up only being shown one way of thinking about the computer and being told that there are no other ways to think about this, you grow up thinking you know what we're doing. We have already fleshed out all the details, improved and optimized everything a computer has to offer. We celebrate alleged innovation and then delegate picking up the broken pieces to society, because it's not our fault – we figured it out already.

We have to tell ourselves that we haven't the faintest idea of what we're doing. We, as a field, haven't the faintest idea of what we're doing. And we have to tell ourselves that everything around us was made up by people that were no smarter than us, so we can change, influence and build things that make a small dent in the universe.

And once we understand that, only then might we be able to do what the early fathers of computing dreamed about: To make humans better – with the help of computers.

the sequel to my previous talk, the lost medium, on bullet holes in world war 2 bombers, page numbering, rotating point of views and how we can escape the present to invent the future.

Michael Meeks: 2017-12-04 Monday.

Planet GNOME - Mon, 04/12/2017 - 10:00pm
  • Mail chew, consultancy call, synched with Dennis; admin: customer, partner contacts, variously. TDF board call.

FAI.me build server improvements

Planet Debian - Mon, 04/12/2017 - 9:59pm

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback: people would like to have root login without a password, using an ssh key instead. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug if you add packages that conflict with other packages, or if you misspell a package name.

FAI.me

Thomas Lange http://blog.fai-project.org/ FAI (Fully Automatic Installation) / Plan your Installation and FAI installs your Plan
