
Feed aggregator

Back Online

Planet Debian - Fri, 08/12/2017 - 11:58am

I now have Internet back! Which means I can try to get the Debian WordPress packages bashed into shape. Unfortunately they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

Craig http://dropbear.xyz Small Dropbear

Stephen Michael Kellat: Not Messing With Hot Wheels Car Insertion

Planet Ubuntu - Fri, 08/12/2017 - 7:03am

Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.

As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.

I don't like talking about numbers in things like this. If you take 64 hours in a two-week pay period, multiply it by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it only comes out to a pre-tax gross under $26,000. I rounded up to a whole number. Admittedly it isn't too much.

At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall, and even more besides, which resulted in Cards Against Humanity raking in $2,250,000 in record time.

Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to them are deductible under current law provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds, but I don't know what rules, if any, foreign tax administrations apply to such contributions.

The congregation is best reached by writing to:

West Avenue Church of Christ
5901 West Avenue
Ashtabula, OH 44004
United States of America

With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.


Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

next-20171208: linux-next

Kernel Linux - Fri, 08/12/2017 - 5:56am
Version: next-20171208 (linux-next) Released: 2017-12-08

Robert Ancell: Setting up Continuous Integration on gitlab.gnome.org

Planet Ubuntu - Fri, 08/12/2017 - 1:40am
Simple Scan recently migrated to the new gitlab.gnome.org infrastructure. With modern infrastructure I now have the opportunity to enable Continuous Integration (CI), which is a fancy name for automatically building and testing your software when you make changes (and it can do more than that too).

I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!

There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.

To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:

build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install

The first line is the name of the job - "build-ubuntu". This is going to define how we build Simple Scan on Ubuntu.

The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.

The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.

Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.

And with that, every time a change is made to the git repository, Simple Scan is built on Ubuntu and I'm told whether that succeeded or not! To make things more visible I added the following to the top of the README.md:

[![Build Status](https://gitlab.gnome.org/GNOME/simple-scan/badges/master/build.svg)](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)

This gives a badge image that shows the status of the build.

And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:

build-fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install

Now it builds on both Ubuntu and Fedora with every commit!
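Neither job runs the test suite yet. If your project defines tests with Meson, one natural extension (a sketch I haven't run against Simple Scan itself) is an extra line in the script section:

  script:
    - meson _build
    - ninja -C _build install
    - meson test -C _build

The pipeline will then also fail whenever a test regresses, which is most of the point of CI.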

I hope this helps you getting started with CI and gitlab.gnome.org. Happy hacking.

Testing OpenStack using tempest: all is packaged, try it yourself

Planet Debian - Fri, 08/12/2017 - 12:00am

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it's kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware, provided by my employer. But since I've lost my job (I'm still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had planned for a few weeks to give a big thanks to the above people (they really deserve it!), that isn't the only goal of this post, which is to introduce how to run your own tempest CI on your own machine. Since I have been in the situation where my CI had to move twice, I decided to industrialize it and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all, and uploaded it to the archive. Here's how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O, and on a live system everything runs on a RAM disk, so installing in this environment is the fastest way one could do it. This is what I did when working with Mirantis: I had a real baremetal server which I was PXE booting into a Debian Live system. However nice, this imposes having access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 NICs, one on the internet and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate it using KVM, so it would become portable. And since the servers at packet.net are very fast, not running on baremetal isn't much of an issue anymore.

Anyway, let’s dive into setting up all of this.

Network topology

We’ll assume that one of your interfaces has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to see the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. It depends at runtime on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and everything else needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both; more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to dhcpreply 192.168.100.2 on the tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick and manually configure your 2nd interface).
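For reference, the dummy-NIC part of the list above corresponds roughly to the following commands (a sketch of my reading of the script; the real thing handles more corner cases):

modprobe dummy
ip link set name tempestnic0 dev dummy0
ip link add tempestbr type bridge
ip addr add 192.168.100.1/24 dev tempestbr
ip link set tempestnic0 master tempestbr
ip link set tempestbr up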

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up the interface that is connected to the default gateway by itself.

The dhcp server is configured to support both legacy PXE and the newer iPXE standard. I had to support iPXE because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.
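In dhcpd.conf terms, that usually comes out to something like the following (illustrative only, not the package's actual configuration; the filenames are assumptions):

if exists user-class and option user-class = "iPXE" {
    # Already running iPXE: hand over the boot script via HTTP
    filename "http://192.168.100.1/boot.ipxe";
} else {
    # Legacy PXE ROM: chainload the iPXE binary first
    filename "undionly.kpxe";
}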

Build the live image

Now that the PXE server is configured, it’s time to build the Debian live image. Simply do this to build the image and copy its resulting files into the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log into that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and finally run the whole tempest suite. There are two ways to run it: either install the openstack-tempest-ci package, configure it if needed (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command; or skip the installation of the package and run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source into the Debian Live machine before using them. It does this because we want to avoid the situation where a modification needs to be uploaded to Debian before it can be tested, and also because the script needed to be runnable without installing a package (which would need root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

Goirand Thomas http://thomas.goirand.fr/blog Zigo's blog

Simple media cachebusting with GitHub pages

Planet Debian - Thu, 07/12/2017 - 11:10pm

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting, so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, first we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel) $ touch _config.yml $ mkdir _layouts $ echo '{{ content }}' > _layouts/default.html $ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://127.0.0.1:4000/ (Jekyll's default port).
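If you don't have Jekyll installed locally, something like the following should be all you need (assuming a working RubyGems setup; GitHub Pages builds your site with its own pinned Jekyll, so local and deployed versions may differ slightly):

$ gem install jekyll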

Finally, we need to append the cachebusting string itself. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">

Don't forget to apply it to all your other static media, including images and JavaScript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}"></script>

To ensure that transitively-linked images are cachebusted, instead of referencing them in the CSS you can specify them directly in the HTML instead:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Thoughts on AlphaZero

Planet Debian - Thu, 07/12/2017 - 8:35pm

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (in Google, briefly in the same division as the DeepMind team) and I run what's the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure: you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then a search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.
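For reference, the node-selection rule in this family of algorithms (the PUCT rule used in AlphaGo Zero, which I assume AlphaZero keeps essentially unchanged) picks at each node the move maximizing

\[ a^* = \operatorname*{arg\,max}_a \left( Q(s,a) + c \, P(s,a) \, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} \right) \]

where Q(s,a) is the average value found for move a so far, P(s,a) is the network's prior probability for it, the N terms are visit counts, and c balances exploration against exploitation. This is how the network and the search interact: moves the network likes get explored first, but heavily visited moves must keep justifying themselves through Q.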

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and the basic structure has been tried on chess in the past with master-level results, but it hasn't really produced something approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think it's mainly through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to match with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook provably never loses a game from the opening position), people still play it even at the top level. Most people simply are not at the level where the existence of perfect play matters, nor is exploring its frontiers their primary motivation.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and can hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that are starting to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: maybe. AlphaGo was two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again imagine a for-hire project, but I think not even most of the top players have the money to hire someone for a year or two for a speculative project to build an entirely new kind of engine. A 100 Elo stronger engine will only help you so much during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although that wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Three Minimalism reads

Planet Debian - Thu, 07/12/2017 - 5:26pm

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible for me that I've read to date.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Ubuntu Insights: Security Team Weekly Summary: December 7, 2017

Planet Ubuntu - Thu, 07/12/2017 - 4:11pm

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

Due to the holiday last week, there was no weekly report, so this report covers the previous two weeks. During the last two weeks, the Ubuntu Security team:

  • Triaged 379 public security vulnerability reports, retaining the 74 that applied to Ubuntu.
  • Published 32 Ubuntu Security Notices which fixed 70 security issues (CVEs) across 34 supported packages.

Development

  • add max compressed size check to the review tools
  • adjust review-tools runtime errors output for store (final)
  • adjust review-tools for redflagged base snap overrides
  • adjust review-tools for resquashing with fakeroot
  • upload a couple of bad snaps to test r945 of the review tools in the store. The store is correctly not auto-approving, but is also not handling them right. Filed LP: #1733699
  • investigate SNAPCRAFT_BUILD_INFO=1 with snapcraft cleanbuild and attempt rebuilds
  • respond to feedback in PR 4245, close and resubmit as PR 4255 (interfaces/screen-inhibit-control: fix case in screen inhibit control)
  • investigate reported godot issue. Send up PR 4257 (interfaces/opengl: also allow ‘revision’ on /sys/devices/pci…)
  • investigation of potential biometrics-observe interface
  • snapd reviews
    • PR 4258: fix unmounting on systems without rshared
    • PR 4170: cmd/snap-update-ns: add planWritableMimic
    • PR 4306 (use #include instead of bare ‘include’)
    • PR 4224 – cmd/snap-update-ns: teach update logic to handle synthetic changes
    • PR 4312 – create mount target for lib32, vulkan on demand
    • PR 4323 – interfaces: add gpio-memory-control interface
    • PR 4325 (add test for netlink-connector interface) and investigate NETLINK_CONNECTOR denials
    • review design of PR 4329 – discard stale mountspaces (v2)
  • finalized squashfs fix for 1555305 and submitted it upstream (https://sourceforge.net/p/squashfs/mailman/message/36140758/)

  • investigation into users' 16.04 apparmor issues with tomcat

Adding subtitles with FFmpeg

Planet Debian - Thu, 07/12/2017 - 1:52pm

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0 \
  language=eng -metadata:s:s:1 language=nld -c copy -y \
  foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
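To check the result, it is worth inspecting the streams of the output file, for example with:

ffprobe foo.subbed.webm

which should report one video stream, one audio stream, and two subtitle streams carrying the eng and nld language tags.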
Wouter Verhelst https://grep.be/blog//pd/ pd

Quantum Computing Is the Next Big Security Risk

LinuxSecurity.com - Thu, 07/12/2017 - 10:23am
LinuxSecurity.com: The 20th century gave birth to the Nuclear Age as the power of the atom was harnessed and unleashed. Today, we are on the cusp of an equally momentous and irrevocable breakthrough: the advent of computers that draw their computational capability from quantum mechanics.

The Most Exciting Linux Kernel Stories Of 2017

LinuxSecurity.com - Thu, 07/12/2017 - 10:21am
LinuxSecurity.com: This year Phoronix has published more than 290 original news articles pertaining to advancements and changes within the Linux kernel. Here are the highlights.

FCC Chair Ajit Pai Falsely Claims Killing Net Neutrality Will Help Sick and Disabled People

LinuxSecurity.com - Thu, 07/12/2017 - 10:15am
LinuxSecurity.com: For the duration of the fight over net neutrality, there has been a constant stream of falsehoods pushed by AT&T, Verizon, and Comcast to justify their frontal assault on the popular rules. One popular bogeyman has been that net neutrality rules devastated telecom sector investment, a claim consistently disproven by publicly-accessible SEC filings, earnings reports, independent analysis, and statements to investors from more than a half-dozen industry executives.

4.1.47: longterm

Kernel Linux - Thu, 07/12/2017 - 3:34am
Version: 4.1.47 (longterm) Released: 2017-12-07 Source: linux-4.1.47.tar.xz PGP Signature: linux-4.1.47.tar.sign Patch: full (incremental) ChangeLog: ChangeLog-4.1.47

RcppArmadillo 0.8.300.1.0

Planet Debian - Thu, 07/12/2017 - 1:59am

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We tend to package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo, from where you can install them "as usual" (see the repo page for details). But this release resumes our normal bi-monthly CRAN release cycle.
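For the drat route, "as usual" means something along these lines (a sketch; it assumes you have the drat package installed):

R> drat::addRepo("RcppCore")
R> install.packages("RcppArmadillo")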

These releases improve a few little nags on the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)
  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

Ubuntu Insights: Kernel Team Summary – December 6, 2017

Planet Ubuntu - Wed, 06/12/2017 - 9:14pm
November 21 through December 04

Development (18.04)

Every 6 months the Ubuntu Kernel Team is tasked to pick the kernel to be used in the next release. This is a difficult thing to do because we don’t definitively know what will be going into the upstream kernel over the next 6 months nor the quality of that kernel. We look at the Ubuntu release schedule and how that will line up with the upstream kernel releases. We talk to hardware vendors about when they will be landing their changes upstream and what they would prefer as the Ubuntu kernel version. We talk to major cloud vendors and ask them what they would like. We speak to large consumers of Ubuntu to solicit their opinion. We look at what will be the next upstream stable kernel. We get input from members of the Canonical product strategy team. Taking all of that into account we are tentatively planning to converge on 4.15 for the Bionic Beaver 18.04 LTS release.

On the road to 18.04 we have a 4.14 based kernel in the Bionic -proposed repository.

Stable (Released & Supported)
  • The kernels for the current SRU cycle are being respun to include fixes for CVE-2017-16939 and CVE-2017-1000405.

  • Kernel versions in -proposed:

    trusty 3.13.0-137.186
    trusty/linux-lts-xenial 4.4.0-103.126~14.04.1
    xenial 4.4.0-103.126
    xenial/linux-hwe 4.10.0-42.46~16.04.1
    xenial/linux-hwe-edge 4.13.0-19.22~16.04.1
    zesty 4.10.0-42.46
    artful 4.13.0-19.22
  • Current cycle: 17-Nov through 09-Dec

    17-Nov Last day for kernel commits for this cycle.
    20-Nov - 25-Nov Kernel prep week.
    26-Nov - 08-Dec Bug verification & Regression testing.
    11-Dec Release to -updates.
  • Next cycle: 08-Dec through 30-Dec (this cycle will only contain CVE fixes)

    08-Dec Last day for kernel commits for this cycle.
    11-Dec - 16-Dec Kernel prep week.
    17-Dec - 29-Dec Bug verification & Regression testing.
    01-Jan Release to -updates.
Misc
  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: kernel-team@lists.ubuntu.com.

My Free Software Activities in November 2017

Planet Debian - Wed, 06/12/2017 - 7:33pm

Welcome to gambaru.de. Here is my monthly report covering what I have been doing for Debian. If you're interested in Java, games and LTS topics, this might be interesting for you.

Debian Games

Debian Java
  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do and the only way forward is to package a newer version of Eclipse and to split the package in a way, so that such issues can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn’t seem like a big undertaking but the 8 MB debdiff and ~170000 lines of code changes proved me wrong. I discovered two regressions with this version in mockito and bnd. The former one could be resolved but bnd requires probably an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and jedit (#875583).
Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13 +1.75 from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVE.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVE.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVE. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.
Misc
  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and for libspring-ldap-java (DSA-4046-1).

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Karuna Grewal: Outreachy's finally here !

Planet GNOME - Wed, 06/12/2017 - 5:22pm

It’s been a month since the Outreachy Round 15 results were announced. Yay! My proposal for adding a network panel to GNOME Usage was selected. I am glad to be working on something I personally have been longing for. Moreover, I finally have something to cut down on my Xbox addiction and channelize it into bringing the network panel to life.
It’s going to be really amazing working with my mentor Felipe Borges and Usage’s co-maintainer Petr Stetka, given their experience and expertise.

Here’s a walkthrough of what the project is all about:

Currently, unlike the CLI tools, there are not many Linux-based GUI tools for monitoring network statistics on our systems. The network panel in GNOME Usage will fill that gap, giving users a UI that lets them monitor their network in a process-oriented manner.

This panel can be designed to provide not only per-process data transfer rates but also other details: open ports dedicated to some service (this can be of great use for starting or stopping services from a UI) and the list of interfaces. It’s not yet finalized what additional data will be available apart from the data transfer rates, but this panel surely has loads of new things in store for the users.

Lately, I’ve been discussing with my mentor the approach for the backend API, which we plan to incorporate into libgtop. As the Outreachy round officially started yesterday, I plan to dig into the libgtop codebase and get started with coding, the most amazing part of this internship!
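To get a feel for what libgtop can already do, per-interface counters are available today; here is a minimal C sketch (my own experiment, not project code, and "eth0" is an assumption; the per-process statistics the panel needs are exactly what the new API will have to add):

#include <glibtop.h>
#include <glibtop/netload.h>
#include <stdio.h>

int main(void)
{
    glibtop_init();

    /* Cumulative traffic counters for a single interface */
    glibtop_netload netload;
    glibtop_get_netload(&netload, "eth0");

    printf("in: %llu bytes, out: %llu bytes\n",
           (unsigned long long) netload.bytes_in,
           (unsigned long long) netload.bytes_out);

    glibtop_close();
    return 0;
}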

From this week onwards, I will be posting regular updates about my progress on the project.
There's lots in store for the geeky network enthusiasts looking forward to a new, compelling look for otherwise conventional network details.

Stay tuned! :)

Tobias Bernard: UX Hackfest London

Planet GNOME - Wed, 06/12/2017 - 4:09pm

Last week I took part in the GNOME Shell UX Hackfest in London, along with other designers and developers from GNOME and adjacent communities such as Endless, Pop!, and elementary. We talked about big, fundamental things, like app launching and the lock/login screen, as well as some smaller items, like the first-run experience and legacy window decorations.

I won’t recap everything in detail, because Cassidy from System76 has already done a great job at that. Instead, I want to highlight some of the things I found most interesting.

Spatial model

One of my main interests for this hackfest was to push for better animations and making better use of the spatial dimension in GNOME Shell. If you’ve seen my GUADEC Talk, you know about my grand plan to introduce semantic animations across all of GNOME, and the Shell is obviously no exception. I’m happy to report that we made good progress towards a clear, unified spatial model for GNOME Shell last week.

Everything we came up with is at a very early concept stage at this point, but I'm especially excited about the possibility of having the login/unlock screen be part of the same space as the rest of the system, and making the transitions between these fluid and semantic.

Tiling

Another utopian dream of mine is a tiling-first desktop. I’ve long felt that overlapping windows are not the best way to do multitasking on screens, and tiling is something I’m very interested in exploring as an alternative. Tiling window managers have long done this, but their UX is usually subpar. However, some text editors like Atom have pretty nice graphical implementations of tiling window managers nowadays, and I feel like this approach might be scalable enough to cover most OS-level use cases as well (perhaps with something like a picture-in-picture mode for certain use cases).

Tiling in the Atom text editor

We touched on this topic at various points during the hackfest, especially in relation to the resizable half-tiling introduced in 3.26 and the coming quarter-tiling. However, our current tech stack and the design of most apps are not well suited to a tiling-first approach, so this is unlikely to happen anytime soon. That said, I want to keep exploring alternatives to free-floating, overlapping windows, and will report on my progress here.

Header bars everywhere

A topic we only briefly touched on, but which I care about a lot, was legacy window decorations (aka title bars). Even though header bars have been around for a while, there are still a lot of apps we all rely on with ugly, space-eating bars at the top (Inkscape, I’m looking at you).

On a 1366x768px display, a 35px title bar takes up close to 5% of the entire screen.

We discussed possible solutions such as conditionally hiding title bars in certain situations, but finally decided that the best course of action is to work with apps upstream to add support for header bars. Firefox and Chromium are currently in the process of implementing this, and we want to encourage other third-party apps to do the same.

Firefox with client-side decorations (in development)

This will be a long and difficult process, but it will result in better apps for everyone, instead of hacky partial solutions. The work on this has just begun, and I’ll blog more about it as this initiative develops.

In summary, I think the hackfest set a clear direction for the future of GNOME Shell, and one that I’m excited to work towards. I’d like to thank the GNOME Foundation for sponsoring my attendance, Allan and Mario for organizing the hackfest, and everyone who attended for being there, and being awesome! Until next time!

Bastien Nocera: UTC and Anywhere on Earth support

Planet GNOME - Wed, 06/12/2017 - 3:32pm
A quick post to tell you that we finally added UTC support to Clocks' and the Shell's World Clocks section. And if you're into it, there's also Anywhere on Earth support.

You will need to have git master versions of libgweather (our cities and timezones database), and gnome-clocks. This feature will land in GNOME 3.28.



Many thanks to Giovanni for coming up with an API he was happy with after I attempted a couple of iterations on one. Enjoy!

Update: As expected, a bug crept in. Thanks to Colin Guthrie for spotting the error in the "Anywhere on Earth" timezone. See this section for the fun we have to deal with.
