Planet Debian

Collecting statistics from TP-Link HS110 SmartPlugs using collectd

Wed, 13/12/2017 - 7:15pm

Running a 3D printer alone at home is not necessarily the best idea, so I was looking for a way to force it off remotely. As an OctoPrint user I stumbled upon a plugin to control TP-Link SmartPlugs, so I decided to give them a try. What I found especially nice about the HS110 model is that it can also report power usage, current and voltage. Of course I wanted a long-term graph of those values. The result is a small collectd plugin, written in Python. It is available on GitHub: https://github.com/bzed/collectd-tplink_hs110. Enjoy!
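
For reference, hooking a plugin like this into collectd goes through collectd's Python plugin. The stanza below is only a sketch: the module path and module name are assumptions for illustration, so check the repository's README for the real values.

LoadPlugin python
<Plugin python>
    # Directory containing the plugin module and the module name are placeholders
    ModulePath "/usr/lib/collectd/python"
    Import "tplink_hs110"
</Plugin>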

Bernd Zeimetz https://bzed.de/categories/linux/ Linux on linux & the mountains

Debsources now in sources.debian.org

Wed, 13/12/2017 - 6:40pm

Debsources is a web application for publishing, browsing and searching an unpacked Debian source mirror on the Web. With Debsources, all the source code of every Debian release is available at https://sources.debian.org, both via an HTML user interface and a JSON API.
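
As a quick illustration of the JSON API side (the exact endpoints are listed in the Debsources API documentation; the URL below reflects my understanding of the /api/src/ endpoint and should be treated as an example, not a reference):

$ curl https://sources.debian.org/api/src/hello/
# returns a JSON document listing the versions of the "hello" source package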

This service was first offered in 2013 with the sources.debian.net instance, which was kindly hosted by IRILL, and is now becoming official under sources.debian.org, hosted on the Debian infrastructure.

This new instance offers all the features of the old one (an updater that runs four times a day, various plugins to count lines of code or measure the size of packages, and sub-apps to show lists of patches and copyright files), plus integration with other Debian services such as codesearch.debian.net and the PTS.

The Debsources Team has taken the opportunity of this move of Debsources onto the Debian infrastructure to officially announce the service. Read their message as well as the Debsources documentation page for more details.

Laura Arjona Reina https://bits.debian.org/ Bits from Debian

Idea for finding all public domain movies in the USA

Wed, 13/12/2017 - 10:15am

While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals per year is so small that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry looks like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as the registration and renewal dates are easy enough for a program to locate (split on the first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
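
A rough sketch of that extraction in shell, using the example entry above (nothing more clever than the first-comma split and the DDmmmYY pattern just mentioned):

entry="ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558."
# The title is everything before the first comma
printf '%s\n' "$entry" | cut -d, -f1
# The registration and renewal dates match the DDmmmYY pattern
printf '%s\n' "$entry" | grep -oE '[0-9]{1,2}[A-Z][a-z]{2}[0-9]{2}'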

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.

I suspect the best way to do this would be to make a specialised web service to make it easy for contributors to transcribe entries and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search in IMDB conducted so the user can pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two people transcribe the same entries, making it possible to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowdsourcing the task?

Note that Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only cover renewals for written works. I have not found any transcribed versions of the movie renewals so far; perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their year of death.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

#metoo

Wed, 13/12/2017 - 9:48am

I thought for a long time about whether I should post a/my #metoo. It wasn't rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see, from that perspective, a movie that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in, and showed us how to spot the signals on the screen that are the cue to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.


Rhonda http://rhonda.deb.at/blog/ Rhonda's Blog

Thinkpad X301

Wed, 13/12/2017 - 9:02am
Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years, which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge broke, every time I open or close the lid it breaks a bit more. That prevents use of that Thinkpad by anyone who wants to use it as a serious laptop as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS, a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2]; it lasted about 4 years and 2 months and cost $306.

I bought my Thinkpad T61 in February 2010 [3]; it lasted about 3 years and 8 months and cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years, as it was originally bought for me by a multinational corporation I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did OK. In terms of cost over its lifetime it did very well, only about $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000, giving it a cost of over $20 per month. $20 per month is still good value; I definitely get a lot more than $20 per month of benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous 2 Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008, which probably means it was new in 2009, but it has no obvious signs of wear so it probably hasn’t been used much.

My X301 has a 1440*900 screen which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD, which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything has become bigger, so 64G is going to be a big problem. The fact that it’s an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes, using about 5 minutes of CPU time. The same compilation on the X301 takes just over 6.5 minutes with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something, it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There’s many programs I maintain that I really should have had version control for years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.

Conclusion

The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free, its value for money is NaN, which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it; in 2 years' time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.

etbe https://etbe.coker.com.au etbe – Russell Coker

AltOS 1.8.3

Tue, 12/12/2017 - 6:44pm
AltOS 1.8.3 — TeleMega version 3.0 support and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.8.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v3.0 board and a selection of bug fixes.

Announcing TeleMega v3.0

TeleMega is our top of the line flight computer with 9-axis IMU, 6 pyro channels, uBlox Max 7Q GPS and 40mW telemetry system. Version 3.0 is feature compatible with version 2.0, incorporating a new higher-performance 9-axis IMU in place of the former 6-axis IMU and separate 3-axis magnetometer.

AltOS 1.8.3

In addition to support for TeleMega v3.0 boards, AltOS 1.8.3 contains some important bug fixes for all flight computers. Users are advised to upgrade their devices.

  • Ground testing EasyMega and TeleMega additional pyro channels could result in a sticky 'fired' status which would prevent these channels from firing on future flights.

  • Corrupted flight log records could prevent future flights from capturing log data.

  • Fixed saving of pyro configuration that ended with 'Descending'. This would cause the configuration to revert to the previous state during setup.

The latest AltosUI and TeleGPS applications have improved functionality for analyzing flight data. The built-in graphing capabilities are improved with:

  • Graph lines have improved appearance to make them easier to distinguish. Markers may be placed at data points to show the recorded data values.

  • Graphing offers the ability to adjust the smoothing of computed speed and acceleration data.

Exporting data for other applications has some improvements as well:

  • KML export now reports both barometric and GPS altitude data to make it more useful for Tripoli record reporting.

  • CSV export now includes TeleMega/EasyMega pyro voltages and tilt angle.

Keith Packard http://keithp.com/blog/ blog

two holiday stories

Mon, 11/12/2017 - 10:04pm

Two stories of something nice coming out of something not-so-nice for the holidays.

Story the first: The Gift That Kept on Giving

I have a Patreon account that provides a significant chunk of my funding to do what I do. Patreon has really pissed off a lot of people this week, and people are leaving it in droves. My Patreon funding is down 25%.

This is an opportunity for Liberapay, which is run by a nonprofit, avoids Patreon's excessive fees, and is free software to boot. So now I have a Liberapay account and have diversified my sustainable funding some more, although only half of the people I lost from Patreon have moved over. A few others have found other ways to donate to me, including snail mail and Paypal, and others I'll just lose. Thanks, Patreon.

Yesterday I realized I should check if anyone had decided to send me Bitcoin. Bitcoin donations are weird because no one ever tells me that they made them. Also because it's never clear if the motive is to get me invested in bitcoin or to send me some money. I prefer not to be invested in risky currency speculation, preferring risks like "write free software without any clear way to get paid for it", so I always cash it out immediately.

I have not used bitcoin for a long time. I could see a long time ago that its development community was unhealthy, that there was going to be a messy fork and I didn't want the drama of that. My bitcoin wallet files were long deleted. Checking my address online, I saw that in fact two people had reacted to Patreon by sending a little bit of bitcoin to me.

I checked some old notes to find the recovery seeds, and restored "hot wallet" and "cold wallet", not sure which was my public incoming wallet. Neither was, and after some concerned scrambling, I found the gpg-locked file in a hidden basement subdirectory that let me access my public incoming wallet, where the two Patreon-prompted donations were indeed waiting.

What of the other two wallets? "Hot wallet" was empty. But "cold wallet" turned out to be some long forgotten wallet, and yes, this is now a story about "some long forgotten bitcoin wallet" -- you know where this is going, right?

Yeah, well, it didn't have a life-changing amount of bitcoin in it, but it had a little almost-dust from a long-ago bitcoin donor, which, at current crazy bitcoin prices, is enough that I may need to fill out a tax form now that I've sold it. And so I will be having a happy holidays, no matter how the Patreon implosion goes. But for sustainable funding going forward, I do hope that Liberapay works out.

Story the second: "a lil' positive end note does wonders"

I added this to the end of git-annex's bug report template on a whim two years ago:

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

That prompt turned out to be much more successful than I had expected, and so I want to pass the gift of the idea on to you. Consider adding something like that to your project's bug report template.

It really works: I'll see a bug report whose author is lost and confused and discouraged, and I keep reading to make sure I see whatever nice thing there might be at the end. It's not just about meaningless politeness either; it's about getting an impression of whether the user is having any success at all, and how experienced they are in general, which is important in understanding where a bug report is coming from.

I've learned more from it than I have from most other interactions with git-annex users, including the git-annex user surveys. Out of 217 bug reports that used this template, 182 answered the question. Here are some of my favorite answers.

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)
  • I do! I wouldn't even have my job, if it wasn't for git-annex. ;-)

  • Yeah, it works great! If not for it I would not have noticed this data corruption until it was too late.

  • Indeed. All my stuff (around 3.5 terabytes) is stored in git-annex with at least three copies of each file on different disks and locations, spread over various hard disks, memory sticks, servers and you name it. Unused disk space is a waste, so I fill everything up to the brim with extra copies.

    In other words, Git-Annex and I are very happy together, and I'd like to marry it. And because you are the father, I hereby respectfully ask for your blessing.

  • Yes, git with git annex has revolutionised my scientific project file organisation and thats why I want to improve it.

  • <3 <3 <3

  • We use git-annex for our open-source FreeSurfer software and find very helpful indeed. Thank you. https://surfer.nmr.mgh.harvard.edu/

  • Yes I have! I've used it manage lots of video editing disks before, and am now migrating several slightly different copies of 15TB sized documentary footage from random USB3 disks and LTO tapes to a RAID server with BTRFS.

  • Oh yeah! This software is awesome. After getting used to having "dummy" shortcuts to content I don't currently have, with the simple ability to get/drop that content, I can't believe I haven't seen this anywhere before. If there is anything more impressive than this software, it's the support it has had from Joey over all this time. I'd have pulled my hair out long ago. :P

  • kinda

  • Yep, works apart from the few tests that fail.

  • Not yet, but I'm excited to make it work!

  • Roses are red
    Violets are blue
    git-annex is awesome
    and so are you
    ;-)
    But bloody hell, it's hard to get this thing to build.

  • git-annex is awesome, I lean on it heavily nearly every single day.

  • I have a couple of repositories atm, one with my music, another that backs up our family pictures for the family and uses Amazon S3 as a backup.

  • Yes! It's by far one of my favorite apps! it works very well on my laptop, on my home file server, and on my internal storage on my Android phone :)

  • Yes! I've been using git-annex quite a bit over the past year, for everything from my music collection to my personal files. Using it for a not-for-profit too. Even trying to get some Mac and Windows users to use it for our HOA's files.

  • I use git-annex for everything. I've got 10 repositories and around 2.5TB of data in those repos which in turn is synced all over the place. It's excellent.

  • Really nice tool. Thanks Joey!

  • Git-annex rocks !!!!

  • I'd love to say I have. You'll hear my shout of joy when I do.

  • Mixed bag, works when it works, but I've had quite a few "unexplained" happenings. Perservering for now, hoping me reporting bugs will see things improve...

  • Yes !!! I'm moving all my files into my annex. It is very robust; whenever something is wrong there is always some other copy somewhere that can be used.

  • Yes! git annex has been enormously helpful. Thanks so much for this tool.

  • Oh yes! I love git-annex :) I've written the hubiC special remote for git-annex, the zsh completion, contributed to the crowdfunding campaigns, and I'm a supporter on Patreon :)

  • Yes, managing 30000 files, on operating systems other than Windows though...

  • Of course ;) All the time

  • I trust Git Annex to keep hundreds of GB of data safe, and it has never failed me - despite my best efforts

  • Oh yeah, I am still discovering this powerfull git annex tool. In fact, collegues and I are forming a group during the process to exchange about different use cases, encountered problems and help each other.

  • I love the metadata functionality so much that I wrote a gui for metadata operations and discovered this bug.

  • Sure, it works marvels :-) Also what I was trying to do is perhaps not by the book...

  • Oh, yes. It rules. :) One of the most important programs I use because I have all my valuable stuff in it. My files have never been safer.

  • I'm an extremely regular user of git-annex on OS X and Linux, for several years, using it as a podcatcher and to manage most of my "large file" media. It's one of those "couldn't live without" tools. Thanks for writing it.

  • Yes, I've been using git annex for I think a year and a half now, on several repositories. It works pretty well. I have a total of around 315GB and 23K annexed keys across them (counting each annex only once, even though they're cloned on a bunch of machines).

  • I only find (what I think are) bugs because I use it and I use it because I like it. I like it because it works (except for when I find actual bugs :]).

  • I'm new to git-annex and immediately astonished by its unique strength.

  • As mentioned before, I am very, very happy with git-annex :-) Discovery of 2015 for me.

  • git-annex is great and revolutionized my file organization and backup structure (if they were even existing before)

  • That’s just a little hiccup in, up to now, various months of merry use! ;-)

  • Yes. Love it. Donated. Have been using it for years. Recommend it and get(/force) my collaborators to use it. ;-)

  • git-annex is an essential building block in my digital life style!

  • Well, git-annex is wonderful!

A lil' positive end note turned into a big one, eh? :)

Joey Hess http://joeyh.name/blog/ see shy jo

Systemd, Devuan, and Debian

Mon, 11/12/2017 - 2:00pm

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

"Looking at creating Devuan, which means splitting of Debian, economically, you caused approximately infinite cost."

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailinglist ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailinglist again, which I had stopped for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>
Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.

Install it on every system where you don't want to run systemd. You're done; you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces. Also, did you see my pun there? Yes, it's a bit silly, I admit it.
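
For completeness, the equivs workflow that turns the control file above into an installable package looks roughly like this (the exact .deb filename depends on the Version field, hence the glob):

equivs-build your-sanity
dpkg -i your-sanity_*_all.deb
# apt now refuses to pull in systemd-sysv unless you let it remove this
# Essential package, which is where the "Yes, do as I say!" prompt comes in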

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean. It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about.
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the systemd in their name do not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) are running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.
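
To give a concrete idea of the inspection tools mentioned in the third point above, here are a few standard commands (using nbd-client as the example unit, since that's the one discussed earlier):

systemctl --failed                    # list units that failed to start
systemctl status nbd-client.service   # current state, cgroup and recent log lines of one unit
journalctl -u nbd-client.service -b   # full log of that unit for the current boot
systemd-analyze blame                 # which units took the longest during boot
systemctl list-dependencies multi-user.target   # the dependency graph, made explicit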

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge gap of difference between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinion. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't been released, and why it took them over two years to release their Devuan Jessie, based on Debian Jessie. But maybe that's just me.

Wouter Verhelst https://grep.be/blog//pd/ pd

Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

Mon, 11/12/2017 - 3:03am

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
global
country CA: DFS-FCC
        (2402 - 2472 @ 40), (N/A, 30), (N/A)
        (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
        (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
        (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
        (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
        (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.

Using all legal channels in Gargoyle

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
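
The same change can also be made with the uci command line tool instead of editing the file directly; a small sketch, assuming the 5 GHz radio is named radio0 (check uci show wireless first, since the name varies between devices):

uci set wireless.radio0.country='CA'
uci commit wireless
wifi   # reload the wireless configuration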

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

François Marier http://feeding.cloud.geek.nz/tags/debian/ pages tagged debian

Debian 8.10 and Debian 9.3 released - CDs and DVDs published

Sun, 10/12/2017 - 6:27pm
I've done a tiny bit of testing for this; Sledge, RattusRattus and others have done far more.

It always makes me feel good: it always makes me feel as if Debian is progressing, and I'm always amazed I can persuade my oldest 32-bit laptop to work :)

Andrew Cater noreply@blogger.com FLOSSLinux

WordPress 4.9.1

Sat, 09/12/2017 - 7:23am

After a much longer than expected break due to moving and the resulting lack of Internet, plus WordPress releasing a package with a non-free file, the Debian package for WordPress 4.9.1 has been uploaded!

WordPress 4.9 has a number of improvements, especially around the customiser components, so that looked pretty slick. The editor for the customiser now has a series of linters that will warn if you write something bad, which is a very good thing! Unfortunately the Javascript linter is jshint, which uses a non-free license that its team is attempting to fix. I have also reported the problem to WordPress upstream so they can have a look at it.

While this was all going on, there were 4 security issues found in WordPress which resulted in the 4.9.1 release.

Finally I got the time to look into the jshint problem, and the Internet access to actually download the upstream files and upload the Debian packages. So version 4.9.1-1 of the package has now been uploaded and should be in the mirrors soon. I’ll start looking at the 4.9.1 patches to see what is relevant for Stretch and Jessie.

Craig http://dropbear.xyz Small Dropbear

Back Online

Fri, 08/12/2017 - 11:58am

I now have Internet back! Which means I can try to get the Debian WordPress packages bashed into shape. Unfortunately they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

Craig http://dropbear.xyz Small Dropbear

Testing OpenStack using tempest: all is packaged, try it yourself

Fri, 08/12/2017 - 12:00am

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware provided by my employer. But since I lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed at how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had planned for a few weeks to give a big thanks to the above people (they really deserve it!), that isn’t the only goal of this post. It is also to introduce how to run your own tempest CI on your own machine. Since I have been in the situation where my CI had to move twice, I decided to industrialize it and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all and uploaded it to the archive. Here’s how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O. On a live system, everything runs on a RAM disk, so installing in this environment is the fastest way to do it. This is what I did when working with Mirantis: I had a real baremetal server, which I was PXE booting on a Debian Live system. However nice, this requires access to 2 servers: one running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 NICs, one on the internet, and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate this using KVM so it would become portable. And since the servers at packet.net are very fast, it isn’t much of an issue anymore not to run on baremetal.

Anyway, let’s dive into setting-up all of this.

Network topology

We’ll assume that one of your interfaces has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to read the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. It depends at runtime on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and everything that’s needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both, more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to dhcpreply 192.168.100.2 on the tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.
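
For reference, the dummy-interface and bridge part of that list corresponds roughly to the following commands, using the default addresses above (a sketch for understanding, not a copy of what the script does):

modprobe dummy
ip link set dev dummy0 name tempestnic0
ip link add name tempestbr type bridge
ip link set tempestnic0 master tempestbr
ip addr add 192.168.100.1/24 dev tempestbr
ip link set tempestnic0 up
ip link set tempestbr up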

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick option and manually configure your 2nd interface).

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up the interface that is connected to the default gateway by itself.
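
In essence that NAT setup boils down to enabling IP forwarding and adding a masquerade rule; a minimal sketch, assuming eth0 is the interface with the default route (the packaged script detects it by itself):

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE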

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.

Build the live image

Now that the PXE server is configured, it’s time to build the Debian Live image. Simply do this to build the image and copy the resulting files into the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log in to that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make sure this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and finally run the whole tempest suite. There are 2 ways to run it: either install the openstack-tempest-ci package, optionally configure it (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command; or skip the installation of the package and simply run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source into the Debian Live machine before using them. The reason it does that is that we want to avoid the situation where a modification needs to be uploaded to Debian before it can be tested, and it was also needed to be able to run the openstack-tempest-ci script without installing a package (which would need root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

Goirand Thomas http://thomas.goirand.fr/blog Zigo's blog

Simple media cachebusting with GitHub pages

Thu, 07/12/2017 - 11:10pm

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, first we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel)"
$ touch _config.yml
$ mkdir _layouts
$ echo '{{ content }}' > _layouts/default.html
$ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://127.0.0.1:4000/.

Finally, we need to append the cachebusting strings themselves. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">
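
To check that the substitution really happens, build the site and grep the generated output (this assumes Jekyll's default _site output directory):

$ jekyll build
$ grep -o 'style\.css?[0-9]*' _site/index.html
style.css?1507450135153299034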

Don't forget to apply it to all your other static media, including images and Javascript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}"></script>

To ensure that transitively-linked images are cachebusted, instead of referencing them in the CSS you can specify them directly in the HTML instead:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">

Chris Lamb https://chris-lamb.co.uk/blog/category/planet-debian lamby: Items or syndication on Planet Debian.

Thoughts on AlphaZero

Thu, 07/12/2017 - 8:35pm

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (in Google, briefly in the same division as the DeepMind team) and I run what's the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure; you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred of hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and the basic structure has been tried on chess in the past with master-level results, but it hasn't really made something approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think that it's mainly through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to do with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook provably never loses a game from the opening position, although it won't win every won position), people still play it even on the top level. Most people simply are not at the level where the existence of perfect play matters, nor is their primary motivation to explore its frontiers.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that start to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: Maybe. AlphaGo was two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again assume a for-hire project, but I think not even most of the top players have the money to hire someone for a year or two for a speculative project to make an entirely new kind of engine. There's also a limit to how much a 100 Elo stronger engine will help you during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although that wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Three Minimalism reads

Thu, 07/12/2017 - 5:26pm

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible for me that I've read to date.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Adding subtitles with FFmpeg

Thu, 07/12/2017 - 1:52pm

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
    -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0 \
    language=eng -metadata:s:s:1 language=nld -c copy -y \
    foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output file. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This, too, sets the language to English.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This sets Dutch as the language.
  • The -c copy option tells ffmpeg not to transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you have an input subtitle format that is not valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy; see the example after this list.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
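For example, if your subtitles start out as SRT files (a format the WebM container cannot carry directly), the following is a minimal sketch, using hypothetical foo.en.srt and foo.nl.srt input names, that copies the video and audio streams while letting ffmpeg convert the subtitles to WebVTT for the WebM output:

ffmpeg -i foo.webm -i foo.en.srt -i foo.nl.srt -map 0:v -map 0:a \
    -map 1:s -map 2:s -metadata:s:a language=eng \
    -metadata:s:s:0 language=eng -metadata:s:s:1 language=nld \
    -c:v copy -c:a copy -y foo.subbed.webm

Afterwards, running ffprobe foo.subbed.webm lists the streams and their language tags, which is a quick way to check that the mapping and metadata came out as intended.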
Wouter Verhelst https://grep.be/blog//pd/ pd

RcppArmadillo 0.8.300.1.0

Thu, 07/12/2017 - 1:59am

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We now tend to package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo, from which you can install them "as usual" (see the repo page for details). This release, however, arrives as part of our normal bi-monthly CRAN release cycle.
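If you want to try the drat route from a shell prompt, a minimal sketch (assuming R and the drat package are already installed; "RcppCore" is the drat repo mentioned above):

$ Rscript -e 'drat::addRepo("RcppCore"); install.packages("RcppArmadillo")'

Here drat::addRepo() adds the RcppCore drat repository to the repos option for that R session, so the subsequent install.packages() call picks up the newer RcppArmadillo build from there rather than from CRAN.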

These releases improve a few little nags on the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)
  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

My Free Software Activities in November 2017

Wed, 06/12/2017 - 7:33pm

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games
Debian Java
  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do, and the only way forward is to package a newer version of Eclipse and to split the package in such a way that issues like this can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn't seem like a big undertaking, but the 8 MB debdiff and ~170000 lines of changed code proved me wrong. I discovered two regressions with this version, in mockito and bnd. The former could be resolved, but bnd probably requires an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and jedit (#875583).
Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13 hours plus 1.75 carried over from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVE.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVE.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVE. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.
Misc
  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and one for libspring-ldap-java (DSA-4046-1).

Thanks for reading and see you next time.

Apo https://gambaru.de/blog planetdebian – gambaru.de

Creating a blog with pelican and Github pages

Tue, 05/12/2017 - 11:30pm

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and with creating a Python virtual environment for development. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to Github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt you with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone: it should be specified in TZ database format (full list here: List of tz database time zones).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes Title: My first post Date: 2017-11-26 10:01 Modified: 2017-11-27 12:30 Category: misc Tags: first, misc Slug: My-first-post Authors: Your name Summary: What does your post talk about? Write here. This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks.

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 in the address bar.

Pretty neat, right?
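If you prefer not to go through the generated Makefile, the same two steps can be run with Pelican directly. A rough equivalent, assuming the default layout created by pelican-quickstart and a reasonably recent Pelican (3.7 or later for the --listen option):

(venv)$ pelican content -o output -s pelicanconf.py
(venv)$ pelican --listen

The first command renders the content directory into output/, and the second serves that directory on localhost:8000, much like make serve does.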

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes ![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C in the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes FEED_ALL_ATOM = 'feeds/all.atom.xml' FEED_ALL_RSS = 'feeds/all.rss.xml' AUTHOR_FEED_RSS = 'feeds/%s.rss.xml' RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.
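In case you are curious about what make github actually does: in the Makefile generated by pelican-quickstart it is, roughly speaking (this is a sketch, and the exact recipe varies between Pelican versions), a shortcut for rebuilding the site, handing the output directory to ghp-import and pushing the resulting branch, which for a personal page is master:

(venv)$ make html
(venv)$ ghp-import -m "Generate Pelican site" -b master output
(venv)$ git push origin master

ghp-import commits the contents of output/ onto the master branch without touching your working tree, which is why your source files stay safely on the source branch.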

If you had an error in any step of the way, please reread this tutorial and try to see in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Renata https://rsip22.github.io/blog/ Renata's blog
