
I pushed an implementation of myself to GitHub

Planet Debian - Sun, 14/01/2018 - 10:22pm

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And after realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==
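
As an aside, that first line is what triggers Racket's language dispatch: roughly speaking, `#lang mario` makes Racket load a reader for the mario collection, which parses the rest of the file and hands it to a semantics module. A hypothetical sketch of such a reader (not my actual implementation; module names like mario/semantics and mario/parser are made up here):

;; mario/lang/reader.rkt -- hypothetical sketch, not the actual implementation
#lang s-exp syntax/module-reader
mario/semantics                   ;; made-up module supplying #%module-begin etc.
#:read mario-read                 ;; parse the 2d program into datum form
#:read-syntax mario-read-syntax   ;; same, but producing syntax objects
#:whole-body-readers? #t          ;; the readers consume the whole file at once
(require mario/parser)            ;; made-up module exporting the two readers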

So much for abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG not only allows for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.

Mario Lang https://blind.guru/ The Blind Guru

SSL migration

Planet Debian - Sun, 14/01/2018 - 11:05am

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works - short-term certificates with automated renewal - put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily…

I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straight-forward, only needing to allow some specific plain http URLs.

So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not "open policy".
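
For illustration, certbot honours the standard proxy environment variables, so the override can be as small as a systemd drop-in like the following (the proxy hostname and port are placeholders, not my actual setup):

# created via: systemctl edit certbot.service
# /etc/systemd/system/certbot.service.d/override.conf  (hypothetical values)
[Service]
Environment="http_proxy=http://proxy.internal:3128"
Environment="https_proxy=http://proxy.internal:3128"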

Issue 2: due to how http-01 works, it requires leaving some specific paths under plain http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).
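
The usual pattern (with or without mod_macro) is to exempt the ACME challenge path from the redirect; a minimal mod_rewrite sketch, not my exact config:

# redirect everything to https except the ACME challenge path
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]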

The only remaining problem is that you can't automatically renew certificates for non-externally-accessible systems; the dns challenge also needs changes to externally-visible state, so it's more or less the same. So:

Issue 3: For internal websites, I still need a solution if my own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL properly is about more than SSLEngine on. I learned quite a few things in this exercise.

CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.
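
For example, a CAA record restricting issuance to Let's Encrypt looks roughly like this in zone-file syntax (domain and reporting address are placeholders):

example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:hostmaster@example.com"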

OCSP stapling

I was completely unaware of OCSP Stapling, and yay, seems like a good solution to actually verifying that the certs were not revoked. However… there are many issues with it:

  • the webserver needs proper configuration so as not to cause more problems than it solves; Apache, at least, needs an increased cache lifetime, error responses disabled (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (e.g. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of certificates while handling errors well.

Issue 4: No OCSP until I find a good way to do it.

HSTS, server-side and preloading

HTTP Strict Transport Security represents a commitment to encryption: once published with the recommended lifetime, browsers will remember that the website shouldn't be accessed over plain http, so you can't roll back.
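
On the server side this is a single header; a typical Apache form (needs mod_headers; the one-year max-age below is just the commonly recommended value, not a mandate):

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"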

Preloading HSTS is even stronger, and so far I haven't done it. Seems worthwhile, but I'll wait another week or so ☺ It's easily doable online.

HPKP

HTTP Public Key Pinning seems dangerous, at least according to some posts. Properly deployed, it would solve a number of problems with the public key infrastructure, but it's still complex and a lot of overhead.

Certificate chains

Something I didn't know before is that servers are supposed to serve the entire chain; I thought, naïvely, that just the server certificate is enough, since browsers have the root CAs, but the intermediate certificates seem to be problematic.

So, one needs to properly serve the full chain (Let's Encrypt makes this trivial, by the way), and also monitor that it is so.
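
With Let's Encrypt and a recent Apache this boils down to pointing at fullchain.pem rather than cert.pem; the paths below are the usual certbot layout for an example domain:

# fullchain.pem = leaf certificate + intermediates (hypothetical example paths)
SSLCertificateFile    /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem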

Ciphers and SSL protocols

OpenSSL disabled SSLv2 in recent builds, but at least Debian stable still has SSLv3+ enabled and Apache does not disable it, so if you put your shiny new website through an SSL checker you get many issues (related strictly to ciphers).

I spent a bit of time researching and getting to the conclusion that:

  • every reasonable client (for my small webserver) supports TLSv1.1+, so disabling SSLv3/TLSv1.0 solved a bunch of issues
  • however, even for TLSv1.1+, a number of ciphers are not recommended by US standards, but explicitly disabling ciphers is a pain because I don't see a way to make it "cheap" (without needing manual maintenance); so there's that: my website is not HIPAA compliant due to the Camellia cipher.

Issue 5: Weak default configs

Issue 6: Getting perfect ciphers is not easy.

However, while not perfect, once you have done the research, getting a proper config is pretty trivial in terms of configuration.

My apache config. Feedback welcome:

SSLCipherSuite HIGH:!aNULL
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1

And similarly for dovecot:

ssl_cipher_list = HIGH:!aNULL
ssl_protocols = !SSLv3 !TLSv1
ssl_prefer_server_ciphers = yes
ssl_dh_parameters_length = 4096

The last line there - the dh_params - I found via nmap, as my previous config had it at 1024 bits, which is weaker than the key, defeating the purpose of a long key. Which leads to the next point:

DH parameters

It seems that DH parameters can be an issue, in the sense that way too many sites/people reuse the same params. Dovecot (in Debian) generates its own, but Apache (AFAIK) does not, and needs explicit configuration to use your own.
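
For Apache, a sketch of what that explicit configuration might look like (requires mod_ssl built against OpenSSL 1.0.2+; the file path is an example):

# generate your own parameters once (slow):
#   openssl dhparam -out /etc/ssl/private/dhparams.pem 4096
# then reference them in the SSL configuration:
SSLOpenSSLConfCmd DHParameters /etc/ssl/private/dhparams.pem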

Issue 7: Investigate DH parameters for all software (postfix, dovecot, apache, ssh); see instructions.

Tools

A number of interesting tools:

  • Online resources to analyse https config: e.g. SSL labs, and htbridge; both give very detailed information.
  • CAA checker (but this is trivial).
  • nmap ciphers report: nmap --script ssl-enum-ciphers; very useful, although I don't think this works for STARTTLS protocols.
  • Cert Spotter from SSLMate. This seems to be useful as a complement to CAA (CAA being the policy, and Cert Spotter the monitoring for said policy), but it goes beyond it (key sizes, etc.); for the expiration part, I think nagios/icinga is easier if you already have it set up (check_http has options for lifetime checks).
  • Certificate chain checker; trivial, but a useful extra check that the configuration is right.
Summary

Ah, the good old days of plain http. SSL seems to add a lot of complexity; I'm not sure how much is needed and how much could actually be removed by smarter software. But it's not too bad: a few evenings of study are enough to get a start; probably the bigger cost is in the ongoing maintenance and keeping up with the changes.

Still, a number of unresolved issues. I think the next goal will be to find a way to properly do OCSP stapling.

Iustin Pop http://k1024.org/~iustin/blog/ blog

Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

Planet Debian - Sun, 14/01/2018 - 3:46am

My mail server runs with a self signed certificate. So bts, configured like this ...


BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='user'
BTS_SMTP_AUTH_PASSWORD='pass'

...lately refused to send mails with this error:


bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate like this (the SSL_fingerprint line below):


if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
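
To obtain the fingerprint to paste in there, something like the following should work (IO::Socket::SSL expects the algorithm, a '$' and the hex digest, so the colons from the openssl output may need to be stripped):

# query the server's certificate and print its SHA-1 fingerprint
openssl s_client -connect mail.wgdd.de:587 -starttls smtp </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha1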

Pretty happy to be able to use the bts command again.

Daniel Leidert noreply@blogger.com [erfahrungen, meinungen, halluzinationen]

Fixing a Nintendo Game Boy Screen

Planet Debian - Sun, 14/01/2018 - 1:28am

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) resurfaced. It works, but the display had a bunch of vertical lines near the left and right border that stayed blank. Apparently this is a common problem with these older Game Boys, and the solution is to apply heat to the upper side of the connector foil to resolder the contacts hidden underneath. There are lots of tutorials and videos on the subject, so I won't go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That's what you do for better heat transfer in normal soldering, and of course it also works here (since the back of the foil connector doesn't take solder, this doesn't make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the soldering iron and stayed fixed. The temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker stopping working, but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it, this problem was also fixed. Basically, this almost 28-year-old device is now in better working condition than it ever was.

Andreas Bombe https://activelow.net/tags/pdo/ pdo on Active Low

The VR Show

Planet Debian - Fri, 29/12/2017 - 5:39pm

One of the things I had been looking forward to, had I got the visa in time for Debconf 15 (Germany), apart from the conference itself, was the attention on VR (Virtual Reality) and AR (Augmented Reality). I had heard the hype for so many years that I wanted to experience it, and I figured Debianites might be a bit better at crystal-gazing and would perhaps have more of an idea than I had then. The only VR I knew about was from Hollywood movies and some VR videos, but that doesn't tell you anything. Also, while movies like Chota-Chetan and others clicked, they were far less immersive than true VR has to be.

I was glad the chance wasn't lost entirely: in 2016, while travelling to the South African Debconf, I experienced VR at Qatar Airport in a Samsung showroom. I was quite surprised by how heavy the headset was, and also by how little content they had. Something which has been hyped for 20-odd years had not much to show for it. I was also able to trick the VR equipment, as the eye/motion tracking was not good enough: if you shook your head fast enough, it couldn't keep up with you.

I share the above because a couple of months ago I was invited to another VR conference by Mahendra, a web programmer/designer friend, here in Pune itself. We attended the conference and were shown quite a few success stories. One of the stories the geek in me liked was framastore's 360° Mars VR Experience on a bus: the link shows how the framastore developers mapped Mars (or part of Mars) onto Washington D.C. streets, and how kids were able to experience how it would feel to be on Mars, without knowing any of the risks the astronauts or the pioneers would have to face if we do get the money, the equipment and the technology to send people to Mars. In reality we are still decades away from making such a trip while keeping people safe to Mars and back, or letting them live on Mars for the rest of their lives.

If my understanding is correct, the gravity of Mars is about a third of Earth's, and once people settle there, they or their skeletons would no longer be able to support Earth's gravity, at least for a generation born on Mars.

An interesting take on how things might turn out is shown in 'The Expanse'.

But this is taking away from the topic at hand. While I saw the newer-generation VR headsets, they are still a bit of a way off. It will be interesting once the headset becomes similar to eyeglasses and you do not have to be tethered to a power unit or lug a heavy backpack full of dangerous lithium-ion batteries. The battery chemistry, or some sort of self-powered unit, would need to be much safer and lighter.

While sitting in the conference and seeing the various scenarios being played out between potential developers and marketeers, it crossed my mind that people were not at all thinking of safeguarding users' privacy: everything from what games or choices you make to your biometric and other body-sensitive information, all of which has a high chance of being misused by companies and individuals.

There were also questions about how Sony and other developers are asking insane amounts for the use of their SDKs to develop content, while they should be free, as games and any content are going to enhance the marketability of their own ecosystems. Neither the above questions (privacy and security, asked by me) nor the SDK-related questions asked by some of the potential developers were really answered.

At the end, they also showed AR, or Augmented Reality, which to my mind has much more potential for reskilling and upskilling young populations such as India's and those of other young, populous countries. It was interesting to note that both China and the U.S. are inching towards older demographics, while India will remain a relatively young country for another 20-30-odd years. Most of the other young countries (by median age) seem to be on the African continent, and my belief (which might be a myth) is that they are young because many of those countries are still tribal-like and there are perhaps still a lot of civil wars over resources.

I was underwhelmed by what they displayed in Augmented Reality, part of which I do understand: there may be many people or companies working on their IP who hence didn't want to share, or only showed very rough work, so their idea doesn't get stolen.

I was also hoping somebody would talk about motion sickness or motion displacement, similar to what people feel when they are train-lagged or jet-lagged. I am surprised that Wikipedia still doesn't have an article on train lag, as millions of Indians go through the process every year. The form most pronounced on Indian Railways is motion being felt but not seen.

There are both challenges and opportunities in VR and AR, but until complexity, support requirements and costs come down (for both the deployer and the user), they will remain a distant dream.

There are scores of ideas that could be explored. For instance, the whole of North India is one big palace, in the sense that there are palaces built by kings and queens which have accumulated their own myth and lore over centuries. A storyteller could take a modern story and use, say, something like the Chota Imambara and/or Bara Imambara, where there have been lots of stories of people getting lost in the alleyways.

Such sort of lore, myths and mysteries are all over India. The Ramayana and the Mahabharata are just two of the epics which tell how grand the tales could be spun. The History of Indus Valley Civilization till date and the modern contestations to it are others which come to my mind.

Even the humble Panchatantra could be reborn and retold to generations who have forgotten it. I can't express the variety of stories and contrasts on offer much better than bolokids does, or than SRK did in the opening of IFFI. Even something like Khakee, which is based on true incidents and a real-life inspector, could be retold in so many ways. Even Mukti Bhavan, which I saw a few months ago, coincidentally before I became ill, tells complex stories in which each person has their own rich background, which could be explored much further in VR.

Even titles such as the ever-famous Harry Potter or the ever-beguiling RAMA could be shared and retooled for generations to come. The Shiva Trilogy is another one which comes to my mind that could be retold as well. There was another RAMA trilogy by the same author, and a competing one coming out in 2018 by an author called PJ Annan.

We would need to work out the complexities of hardware, bandwidth and the technologies, but stories and content waiting to be developed are aplenty.

Once upon a time I had the opportunity to work on, develop and understand make-believe walk-throughs (2-d blueprints animated/brought to life and shown to investors/clients) for potential home owners in a housing society (this was in the heydays of growth, circa Y2K). It was a 2d or 2.5d environment, the tools were a lot more complex, and I was the most inept person there, as I had no idea what camera positioning or sources of light meant.

Apart from the gimmickry that was shown, I thought it would have been interesting if people had shared both the creative and the budget constraints of working in immersive technologies while still bringing something good enough for the client. There was some discussion, in a ham-handed way, but not enough; there was considerable interest from youngsters in trying this new medium, but many lacked the opportunities, knowledge, equipment and software stack to make it a reality.

Lastly, as far as literature goes, I have just shared bits and pieces of Indian English literature alone. There are 16 recognized Indian languages, and all of them have a vibrant literature scene. Just to take an example, Bengal has long been a bedrock of new Bengali detective stories. I think I shared the history of Bengali crime fiction some time back as well, but nevertheless here it is again.

So, apart from games and galleries, 3-d interactive visual novels with alternative endings could make for some interesting immersive experiences, provided we are able to shed the costs and overcome the technical challenges to make it a reality.


Filed under: Miscellenous Tagged: #Augmented Reality, #Debconf South Africa 2016, #Epics, #framastore, #indian literature, #Mars trip, #median age population inded, #motion sickness, #Palaces, #planet-debian, #Pune VR Conference, #RAMA, #RAMA trilogy, #Samsung VR, #Shiva Trilogy, #The Expanse, #Virtual Reality, #VR Headsets, #walkthroughs, Privacy shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

Compute rescaling progress

Planet Debian - Fri, 29/12/2017 - 2:18pm

My Lanczos rescaling compute shader for Movit is finally nearing usable performance improvements:

BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/640/360      3149 us   69.7767M pixels/s
BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/320/180      2720 us   20.1983M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/640/360   3777 us   58.1711M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/320/180   3269 us   16.8054M pixels/s
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/640/360       2007 us  109.479M pixels/s  [+ 56.9%]
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/320/180       1609 us   34.1384M pixels/s [+ 69.0%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/640/360    2057 us  106.843M pixels/s  [+ 56.7%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/320/180    1633 us   33.6394M pixels/s [+100.2%]

Some tuning and bugfixing still needed; this is on my Haswell (the NVIDIA results are somewhat different). Upscaling also on its way. :-)

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

Jackpot

Planet Debian - Fri, 29/12/2017 - 12:11pm
I have no idea whatsoever of how I achieved this, but there you go. This citizen's legal draft is moving forward to the Finnish parliament.

Martin-Éric noreply@blogger.com Funkyware: ITCetera

Debian Policy call for participation -- December 2017

Planet Debian - Thu, 28/12/2017 - 11:47pm

Yesterday we released Debian Policy 4.1.3.0, containing patches from numerous different contributors, some of them first-time contributors. Thank you to everyone who was involved!

Please consider getting involved in preparing the next release of Debian Policy, which is likely to be uploaded sometime around the end of January.

Consensus has been reached and help is needed to write a patch

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

Sean Whitton https://spwhitton.name//blog/ Notes from the Library

Get rid of the backpack

Planet Debian - Thu, 28/12/2017 - 11:43pm

In 2008 I read a blog post by Mark Pilgrim which made a profound impact on me, although I didn't realise it at the time. It was:

  1. Stop buying stuff you don’t need
  2. Pay off all your credit cards
  3. Get rid of all the stuff that doesn’t fit in your house/apartment (storage lockers, etc.)
  4. Get rid of all the stuff that doesn’t fit on the first floor of your house (attic, garage, etc.)
  5. Get rid of all the stuff that doesn’t fit in one room of your house
  6. Get rid of all the stuff that doesn’t fit in a suitcase
  7. Get rid of all the stuff that doesn’t fit in a backpack
  8. Get rid of the backpack

At the time I first read it, I think I could see (and concur with) the logic behind the first few points, but not further. Revisiting it now, I can agree much further along the list, and I'm wondering if I'm brave enough to get to the last step, or anywhere near it.

Mark was obviously going on a journey, and another stopping-off point for him on that journey was to delete his entire online persona, which is why I've linked to the Wayback Machine copy of the blog post.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Successive Heresies

Planet Debian - Thu, 28/12/2017 - 2:37pm

I prefer the book The Hobbit to The Lord Of The Rings.

I much prefer the Hobbit movies to the LOTR movies.

I like the fact the Hobbit movies were extended with material not in the original book: I'm glad there are female characters. I love the additional material with Radagast the Brown. I love the singing and poems and sense of fun preserved from what was a novel for children.

I find the foreshadowing of Sauron in The Hobbit movies to more effectively convey a sense of dread and power than actual Sauron in the LOTR movies.

Whilst I am generally bored by large CGI battles, I find the skirmishes in The Hobbit movies to be less boring than the epic-scale ones in LOTR.

jmtd http://jmtd.net/log/ Jonathan Dowland's Weblog

Reproducible Builds: Weekly report #139

Planet Debian - Thu, 28/12/2017 - 1:55pm

Here's what happened in the Reproducible Builds effort between Sunday December 17 and Saturday December 23 2017:

Packages reviewed and fixed, and bugs filed

Bugs filed in Debian:

Bugs filed in openSUSE:

  • Bernhard M. Wiedemann:
    • WindowMaker (merged) - use modification date of ChangeLog, upstreamable
    • ntp (merged) - drop date
    • bzflag - version upgrade to include already-upstreamed SOURCE_DATE_EPOCH patch
Reviews of unreproducible packages

20 package reviews have been added, 36 have been updated and 32 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (6)
  • Matthias Klose (8)
diffoscope development
strip-nondeterminism development
disorderfs development
reprotest development
reproducible-website development
  • Chris Lamb:
    • rws3:
      • Huge number of formatting improvements, typo fixes, capitalisation
      • Add section headings to make splitting up easier.
  • Holger Levsen:
    • rws3:
      • Add a disclaimer that this part of the website is a Work-In-Progress.
      • Split notes from each session into separate pages (6 sessions).
      • Other formatting and style fixes.
      • Link to Ludovic Courtès' notes on GNU Guix.
  • Ximin Luo:
    • rws3:
      • Format agenda.md to look like previous years', and other fixes
      • Split notes from each session into separate pages (1 session).
jenkins.debian.net development

Misc.

This week's edition was written by Ximin Luo and Bernhard M. Wiedemann & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible.alioth.debian.org/blog/ Reproducible builds blog

(Micro)benchmarking Linux kernel functions

Planet Debian - Thu, 28/12/2017 - 10:27am

Usually, the performance of a Linux subsystem is measured through an external (local or remote) process stressing it. Depending on the input point used, a large portion of code may be involved. To benchmark a single function, one solution is to write a kernel module.

Minimal kernel module

Let’s suppose we want to benchmark the IPv4 route lookup function, fib_lookup(). The following kernel function executes 1,000 lookups for 8.8.8.8 and returns the average value.1 It uses the get_cycles() function to compute the execution “time.”

/* Execute a benchmark on fib_lookup() and put result into the
   provided buffer `buf`. */
static int do_bench(char *buf)
{
    unsigned long long t1, t2;
    unsigned long long total = 0;
    unsigned long i;
    unsigned count = 1000;
    int err = 0;
    struct fib_result res;
    struct flowi4 fl4;

    memset(&fl4, 0, sizeof(fl4));
    fl4.daddr = in_aton("8.8.8.8");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        total += t2 - t1;
    }
    if (err != 0)
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    return scnprintf(buf, PAGE_SIZE, "avg=%llu\n", total / count);
}

Now, we need to embed this function in a kernel module. The following code registers a sysfs directory containing a pseudo-file run. When a user queries this file, the module runs the benchmark function and returns the result as content.

#define pr_fmt(fmt) "kbench: " fmt

#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/inet.h>
#include <linux/timex.h>
#include <net/ip_fib.h>

/* When a user fetches the content of the "run" file, execute the
   benchmark function. */
static ssize_t run_show(struct kobject *kobj,
                        struct kobj_attribute *attr,
                        char *buf)
{
    return do_bench(buf);
}

static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &run_attr.attr,
    NULL
};
static struct attribute_group bench_attr_group = {
    .attrs = bench_attributes,
};
static struct kobject *bench_kobj;

int init_module(void)
{
    int rc;
    /* ❶ Create a simple kobject named "kbench" in /sys/kernel. */
    bench_kobj = kobject_create_and_add("kbench", kernel_kobj);
    if (!bench_kobj)
        return -ENOMEM;

    /* ❷ Create the files associated with this kobject. */
    rc = sysfs_create_group(bench_kobj, &bench_attr_group);
    if (rc) {
        kobject_put(bench_kobj);
        return rc;
    }
    return 0;
}

void cleanup_module(void)
{
    kobject_put(bench_kobj);
}

/* Metadata about this module */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microbenchmark for fib_lookup()");

In ❶, kobject_create_and_add() creates a new kobject named kbench. A kobject is the abstraction behind the sysfs filesystem. This new kobject is visible as the /sys/kernel/kbench/ directory.

In ❷, sysfs_create_group() attaches a set of attributes to our kobject. These attributes are materialized as files inside /sys/kernel/kbench/. Currently, we declare only one of them, run, with the __ATTR_RO macro. The attribute is therefore read-only (0444) and when a user tries to fetch the content of the file, the run_show() function is invoked with a buffer of PAGE_SIZE bytes as last argument and is expected to return the number of bytes written.

For more details, you can look at the documentation in the kernel and the associated example. Beware, random posts found on the web (including this one) may be outdated.2

The following Makefile will compile this example:

# Kernel module compilation
KDIR = /lib/modules/$(shell uname -r)/build
obj-m += kbench_mod.o

kbench_mod.ko: kbench_mod.c
	make -C $(KDIR) M=$(PWD) modules

After executing make, you should get a kbench_mod.ko file:

$ modinfo kbench_mod.ko
filename:       /home/bernat/code/…/kbench_mod.ko
description:    Microbenchmark for fib_lookup()
license:        GPL
depends:
name:           kbench_mod
vermagic:       4.14.0-1-amd64 SMP mod_unload modversions

You can load it and execute the benchmark:

$ insmod ./kbench_mod.ko
$ ls -l /sys/kernel/kbench/run
-r--r--r-- 1 root root 4096 déc. 10 16:05 /sys/kernel/kbench/run
$ cat /sys/kernel/kbench/run
avg=75

The result is a number of cycles. You can get an approximate time in nanoseconds if you divide it by the frequency of your processor in gigahertz (25 ns if you have a 3 GHz processor).3

Configurable parameters

The module hard-codes two constants: the number of loops and the destination address to test. We can make these parameters user-configurable by exposing them as attributes of our kobject and defining a pair of functions to read/write them:

static unsigned long loop_count = 5000;
static u32 flow_dst_ipaddr = 0x08080808;

/* A mutex is used to ensure we are thread-safe when altering attributes. */
static DEFINE_MUTEX(kb_lock);

/* Show the current value for loop count. */
static ssize_t loop_count_show(struct kobject *kobj,
                               struct kobj_attribute *attr,
                               char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%lu\n", loop_count);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for loop count. */
static ssize_t loop_count_store(struct kobject *kobj,
                                struct kobj_attribute *attr,
                                const char *buf, size_t count)
{
    unsigned long val;
    int err = kstrtoul(buf, 0, &val);
    if (err < 0)
        return err;
    if (val < 1)
        return -EINVAL;
    mutex_lock(&kb_lock);
    loop_count = val;
    mutex_unlock(&kb_lock);
    return count;
}

/* Show the current value for destination address. */
static ssize_t flow_dst_ipaddr_show(struct kobject *kobj,
                                    struct kobj_attribute *attr,
                                    char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%pI4\n", &flow_dst_ipaddr);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for destination address. */
static ssize_t flow_dst_ipaddr_store(struct kobject *kobj,
                                     struct kobj_attribute *attr,
                                     const char *buf, size_t count)
{
    mutex_lock(&kb_lock);
    flow_dst_ipaddr = in_aton(buf);
    mutex_unlock(&kb_lock);
    return count;
}

/* Define the new set of attributes. They are read/write attributes. */
static struct kobj_attribute loop_count_attr = __ATTR_RW(loop_count);
static struct kobj_attribute flow_dst_ipaddr_attr = __ATTR_RW(flow_dst_ipaddr);
static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &loop_count_attr.attr,
    &flow_dst_ipaddr_attr.attr,
    &run_attr.attr,
    NULL
};

The IPv4 address is stored as a 32-bit integer but displayed and parsed using the dotted quad notation. The kernel provides the appropriate helpers for this task.

After this change, we have two new files in /sys/kernel/kbench. We can read the current values and modify them:

# cd /sys/kernel/kbench
# ls -l
-rw-r--r-- 1 root root 4096 déc. 10 19:10 flow_dst_ipaddr
-rw-r--r-- 1 root root 4096 déc. 10 19:10 loop_count
-r--r--r-- 1 root root 4096 déc. 10 19:10 run
# cat loop_count
5000
# cat flow_dst_ipaddr
8.8.8.8
# echo 9.9.9.9 > flow_dst_ipaddr
# cat flow_dst_ipaddr
9.9.9.9

We still need to alter the do_bench() function to make use of these parameters:

static int do_bench(char *buf)
{
    /* … */
    mutex_lock(&kb_lock);
    count = loop_count;
    fl4.daddr = flow_dst_ipaddr;
    mutex_unlock(&kb_lock);
    for (i = 0; i < count; i++) {
        /* … */

Meaningful statistics

Currently, we only compute the average lookup time. This value is usually inadequate:

  • A small number of outliers can raise this value quite significantly. An outlier can happen because we were preempted out of the CPU while executing the benchmarked function. This doesn't happen often if the function execution time is short (less than a millisecond), but when it happens, the outliers can be off by several milliseconds, which is enough to make the average inadequate when most values are several orders of magnitude smaller. For this reason, the median usually gives a better view.

  • The distribution may be asymmetrical or have several local maxima. It’s better to keep several percentiles or even a distribution graph.

To be able to extract meaningful statistics, we store the results in an array.

static int do_bench(char *buf)
{
    unsigned long long *results;
    /* … */

    results = kmalloc(sizeof(*results) * count, GFP_KERNEL);
    if (!results)
        return scnprintf(buf, PAGE_SIZE, "msg=\"no memory\"\n");
    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        results[i] = t2 - t1;
    }
    if (err != 0) {
        kfree(results);
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    }

    /* Compute and display statistics */
    display_statistics(buf, results, count);
    kfree(results);
    return strnlen(buf, PAGE_SIZE);
}

Then, we need a helper function to compute percentiles:

static unsigned long long percentile(int p,
                                     unsigned long long *sorted,
                                     unsigned count)
{
    int index = p * count / 100;
    int index2 = index + 1;
    if (p * count % 100 == 0)
        return sorted[index];
    if (index2 >= count)
        index2 = index - 1;
    if (index2 < 0)
        index2 = index;
    return (sorted[index] + sorted[index2]) / 2;
}

This function needs a sorted array as input. The kernel provides a heapsort function, sort(), for this purpose. Another useful value to have is the deviation from the median. Here is a function to compute the median absolute deviation:4

static unsigned long long mad(unsigned long long *sorted,
                              unsigned long long median,
                              unsigned count)
{
    unsigned long long *dmedian = kmalloc(sizeof(unsigned long long) * count,
                                          GFP_KERNEL);
    unsigned long long res;
    unsigned i;

    if (!dmedian)
        return 0;
    for (i = 0; i < count; i++) {
        if (sorted[i] > median)
            dmedian[i] = sorted[i] - median;
        else
            dmedian[i] = median - sorted[i];
    }
    sort(dmedian, count, sizeof(unsigned long long), compare_ull, NULL);
    res = percentile(50, dmedian, count);
    kfree(dmedian);
    return res;
}
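
The sort() calls above use a comparison callback, compare_ull, which is not shown in this excerpt; a minimal version would be:

/* Hypothetical comparison callback for sort(); not shown in the original excerpt. */
static int compare_ull(const void *a, const void *b)
{
    const unsigned long long x = *(const unsigned long long *)a;
    const unsigned long long y = *(const unsigned long long *)b;

    if (x < y)
        return -1;
    if (x > y)
        return 1;
    return 0;
}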

With these two functions, we can provide additional statistics:

static void display_statistics(char *buf,
                               unsigned long long *results,
                               unsigned long count)
{
    unsigned long long p95, p90, p50;

    sort(results, count, sizeof(*results), compare_ull, NULL);
    if (count == 0) {
        scnprintf(buf, PAGE_SIZE, "msg=\"no match\"\n");
        return;
    }
    p95 = percentile(95, results, count);
    p90 = percentile(90, results, count);
    p50 = percentile(50, results, count);
    scnprintf(buf, PAGE_SIZE,
              "min=%llu max=%llu count=%lu 95th=%llu 90th=%llu 50th=%llu mad=%llu\n",
              results[0], results[count - 1], count,
              p95, p90, p50, mad(results, p50, count));
}

We can also append a graph of the distribution function (and of the cumulative distribution function):

min=72 max=33364 count=100000 95th=154 90th=142 50th=112 mad=6

value │                                                ┊ count
   72 │                                                     51
   77 │▒                                                  3548
   82 │▒▒░░                                               4773
   87 │▒▒░░░░░                                            5918
   92 │░░░░░░░                                            1207
   97 │░░░░░░░                                             437
  102 │▒▒▒▒▒▒░░░░░░░░                                    12164
  107 │▒▒▒▒▒▒▒░░░░░░░░░░░░░░                             15508
  112 │▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░░░░░░░░                 23014
  117 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░               6297
  122 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░                905
  127 │▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░             3845
  132 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░          6687
  137 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░       4884
  142 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░     4133
  147 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    1015
  152 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░    1123

Benchmark validity

While the benchmark produces some figures, we may question their validity. There are several traps when writing a microbenchmark:

dead code
The compiler may optimize away our benchmark because the result is not used. In our example, we make sure to accumulate the result in a variable to avoid this.
warmup phase
One-time initializations may negatively affect the benchmark. This is less likely to happen with C code since there is no JIT. Nonetheless, you may want to add a small warmup phase.
too small dataset
If the benchmark is run with the same input parameters over and over, the input data may fit entirely in the L1 cache. This positively skews the benchmark. Therefore, it is important to iterate over a large dataset.
too regular dataset
A regular dataset may still positively skew the benchmark despite its size. While the whole dataset will not fit into the L1/L2 cache, the previous run may have loaded most of the data needed for the current run. In the route lookup example, as route entries are organized in a tree, it's important not to scan the address space linearly. The address space could be explored randomly (a simple linear congruential generator brings reproducible randomness).
large overhead
If the benchmarked function runs in a few nanoseconds, the overhead of the benchmark infrastructure may be too high. Typically, the overhead of the method presented here is around 5 nanoseconds. get_cycles() is a thin wrapper around the RDTSC instruction: it returns the number of cycles for the current processor since last reset. It's also virtualized with low overhead in case you run the benchmark in a virtual machine. If you want to measure a function with greater precision, you need to wrap it in a loop. However, the loop itself adds to the overhead, notably if you need to compute a large input set (in this case, the input can be prepared). Compilers also like to mess with loops. Lastly, a loop hides the result distribution.
preemption
While the benchmark is running, the thread executing it can be preempted (or when running in a virtual machine, the whole virtual machine can be preempted by the host). When the function takes less than a millisecond to execute, one can assume preemption is rare enough to be filtered out by using a percentile function.
noise
When running the benchmark, noise from unrelated processes (or sibling hosts when benchmarking in a virtual machine) needs to be avoided, as it may change from one run to another. Therefore, it is not a good idea to benchmark in a public cloud. On the other hand, adding controlled noise to the benchmark may lead to less artificial results: in our example, route lookup is only a small part of routing a packet, and measuring it alone in a tight loop positively skews the benchmark.
syncing parallel benchmarks
While it is possible (and safe) to run several benchmarks in parallel, it may be difficult to ensure they really run in parallel: some invocations may work in better conditions because other threads are not running yet, skewing the result. Ideally, each run should execute bogus iterations and start measures only when all runs are present. This doesn’t seem a trivial addition.

As a conclusion, the benchmark module presented here is quite primitive (notably compared to a framework like JMH for Java) but, with care, can deliver some conclusive results like in these posts: “IPv4 route lookup on Linux” and “IPv6 route lookup on Linux.”

Alternative

Use of a tracing tool is an alternative approach. For example, if we want to benchmark IPv4 route lookup times, we can use the following process:

while true; do
  ip route get $((RANDOM%100)).$((RANDOM%100)).$((RANDOM%100)).5
  sleep 0.1
done

Then, we instrument the __fib_lookup() function with eBPF (through BCC):

$ sudo funclatency-bpfcc __fib_lookup
Tracing 1 functions for "__fib_lookup"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                    |
         2 -> 3          : 0        |                    |
         4 -> 7          : 0        |                    |
         8 -> 15         : 0        |                    |
        16 -> 31         : 0        |                    |
        32 -> 63         : 0        |                    |
        64 -> 127        : 0        |                    |
       128 -> 255        : 0        |                    |
       256 -> 511        : 3        |*                   |
       512 -> 1023       : 1        |                    |
      1024 -> 2047       : 2        |*                   |
      2048 -> 4095       : 13       |******              |
      4096 -> 8191       : 42       |********************|

Currently, the overhead is quite high, as a route lookup on an empty routing table is less than 100 ns. Once Linux supports inter-event tracing, the overhead of this solution may be reduced to be usable for such microbenchmarks.

  1. In this simple case, it may be more accurate to use:

    t1 = get_cycles();
    for (i = 0; i < count; i++) {
        err |= fib_lookup(…);
    }
    t2 = get_cycles();
    total = t2 - t1;

    However, this prevents us from computing more statistics. Moreover, when you need to provide a non-constant input to the fib_lookup() function, the first way is likely to be more accurate. 

  2. In-kernel API backward compatibility is a non-goal of the Linux kernel. 

  3. You can get the current frequency with cpupower frequency-info. As the frequency may vary (even when using the performance governor), this may not be accurate but this still provides an easier representation (comparable results should use the same frequency). 

  4. Only integer arithmetic is available in the kernel. While it is possible to approximate a standard deviation using only integers, the median absolute deviation just reuses the percentile() function defined above. 

Vincent Bernat https://vincent.bernat.im/en Vincent Bernat

Freezing of tasks failed

Planet Debian - Thu, 28/12/2017 - 7:33am

It is interesting how a user-space task can hinder a Linux kernel software suspend operation.

[11735.155443] PM: suspend entry (deep) [11735.155445] PM: Syncing filesystems ... done. [11735.215091] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11735.215172] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11735.558676] rfkill: input handler enabled [11735.608859] (NULL device *): firmware: direct-loading firmware rtlwifi/rtl8723befw_36.bin [11735.609910] (NULL device *): firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin [11735.611871] Freezing user space processes ... [11755.615603] Freezing of tasks failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0): [11755.615854] digikam D 0 13262 13245 0x00000004 [11755.615859] Call Trace: [11755.615873] __schedule+0x28e/0x880 [11755.615878] schedule+0x2c/0x80 [11755.615889] request_wait_answer+0xa3/0x220 [fuse] [11755.615895] ? finish_wait+0x80/0x80 [11755.615902] __fuse_request_send+0x86/0x90 [fuse] [11755.615907] fuse_request_send+0x27/0x30 [fuse] [11755.615914] fuse_send_readpages.isra.30+0xd1/0x120 [fuse] [11755.615920] fuse_readpages+0xfd/0x110 [fuse] [11755.615928] __do_page_cache_readahead+0x200/0x2d0 [11755.615936] filemap_fault+0x37b/0x640 [11755.615940] ? filemap_fault+0x37b/0x640 [11755.615944] ? filemap_map_pages+0x179/0x320 [11755.615950] __do_fault+0x1e/0xb0 [11755.615953] __handle_mm_fault+0xc8a/0x1160 [11755.615958] handle_mm_fault+0xb1/0x200 [11755.615964] __do_page_fault+0x257/0x4d0 [11755.615968] do_page_fault+0x2e/0xd0 [11755.615973] page_fault+0x22/0x30 [11755.615976] RIP: 0033:0x7f32d3c7ff90 [11755.615978] RSP: 002b:00007ffd887c9d18 EFLAGS: 00010246 [11755.615981] RAX: 00007f32d3fc9c50 RBX: 000000000275e440 RCX: 0000000000000003 [11755.615982] RDX: 0000000000000002 RSI: 00007ffd887c9f10 RDI: 000000000275e440 [11755.615984] RBP: 00007ffd887c9f10 R08: 000000000275e820 R09: 00000000018d2f40 [11755.615986] R10: 0000000000000002 R11: 0000000000000000 R12: 000000000189cbc0 [11755.615987] R13: 0000000001839dc0 R14: 000000000275e440 R15: 0000000000000000 [11755.616014] OOM killer enabled. [11755.616015] Restarting tasks ... done. [11755.817640] PM: suspend exit [11755.817698] PM: suspend entry (s2idle) [11755.817700] PM: Syncing filesystems ... done. [11755.983156] rfkill: input handler disabled [11756.030209] rfkill: input handler enabled [11756.073529] Freezing user space processes ... [11776.084309] Freezing of tasks failed after 20.010 seconds (2 tasks refusing to freeze, wq_busy=0): [11776.084630] digikam D 0 13262 13245 0x00000004 [11776.084636] Call Trace: [11776.084653] __schedule+0x28e/0x880 [11776.084659] schedule+0x2c/0x80 [11776.084672] request_wait_answer+0xa3/0x220 [fuse] [11776.084680] ? finish_wait+0x80/0x80 [11776.084688] __fuse_request_send+0x86/0x90 [fuse] [11776.084695] fuse_request_send+0x27/0x30 [fuse] [11776.084703] fuse_send_readpages.isra.30+0xd1/0x120 [fuse] [11776.084711] fuse_readpages+0xfd/0x110 [fuse] [11776.084721] __do_page_cache_readahead+0x200/0x2d0 [11776.084730] filemap_fault+0x37b/0x640 [11776.084735] ? filemap_fault+0x37b/0x640 [11776.084743] ? __update_load_avg_blocked_se.isra.33+0xa1/0xf0 [11776.084749] ? 
filemap_map_pages+0x179/0x320 [11776.084755] __do_fault+0x1e/0xb0 [11776.084759] __handle_mm_fault+0xc8a/0x1160 [11776.084765] handle_mm_fault+0xb1/0x200 [11776.084772] __do_page_fault+0x257/0x4d0 [11776.084777] do_page_fault+0x2e/0xd0 [11776.084783] page_fault+0x22/0x30 [11776.084787] RIP: 0033:0x7f31ddf315e0 [11776.084789] RSP: 002b:00007ffd887ca068 EFLAGS: 00010202 [11776.084793] RAX: 00007f31de13c350 RBX: 00000000040be3f0 RCX: 000000000283da60 [11776.084795] RDX: 0000000000000001 RSI: 00000000040be3f0 RDI: 00000000040be3f0 [11776.084797] RBP: 00007f32d3fca1e0 R08: 0000000005679250 R09: 0000000000000020 [11776.084799] R10: 00000000058fc1b0 R11: 0000000004b9ac50 R12: 0000000000000000 [11776.084801] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000 [11776.084806] QXcbEventReader D 0 13268 13245 0x00000004 [11776.084810] Call Trace: [11776.084817] __schedule+0x28e/0x880 [11776.084823] schedule+0x2c/0x80 [11776.084827] rwsem_down_write_failed_killable+0x25a/0x490 [11776.084832] call_rwsem_down_write_failed_killable+0x17/0x30 [11776.084836] ? call_rwsem_down_write_failed_killable+0x17/0x30 [11776.084842] down_write_killable+0x2d/0x50 [11776.084848] do_mprotect_pkey+0xa9/0x2f0 [11776.084854] SyS_mprotect+0x13/0x20 [11776.084859] system_call_fast_compare_end+0xc/0x97 [11776.084861] RIP: 0033:0x7f32d1f7c057 [11776.084863] RSP: 002b:00007f32cbb8c8d8 EFLAGS: 00000206 ORIG_RAX: 000000000000000a [11776.084867] RAX: ffffffffffffffda RBX: 00007f32c4000020 RCX: 00007f32d1f7c057 [11776.084869] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 00007f32c4024000 [11776.084871] RBP: 00000000000000c5 R08: 00007f32c4000000 R09: 0000000000024000 [11776.084872] R10: 00007f32c4024000 R11: 0000000000000206 R12: 00000000000000a0 [11776.084874] R13: 00007f32c4022f60 R14: 0000000000001000 R15: 00000000000000e0 [11776.084906] OOM killer enabled. [11776.084907] Restarting tasks ... done. [11776.289655] PM: suspend exit [11776.459624] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11776.469521] rfkill: input handler disabled [11776.978733] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11777.038879] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready [11778.022062] wlp1s0: authenticate with 50:8f:4c:82:4d:dd [11778.033155] wlp1s0: send auth to 50:8f:4c:82:4d:dd (try 1/3) [11778.038522] wlp1s0: authenticated [11778.041511] wlp1s0: associate with 50:8f:4c:82:4d:dd (try 1/3) [11778.059860] wlp1s0: RX AssocResp from 50:8f:4c:82:4d:dd (capab=0x431 status=0 aid=5) [11778.060253] wlp1s0: associated [11778.060308] IPv6: ADDRCONF(NETDEV_CHANGE): wlp1s0: link becomes ready [11778.987669] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.117608] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.160930] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.784045] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.913668] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch [11779.961517] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch 11:58 ♒♒♒ ☺ Categories: Keywords: Like:  Ritesh Raj Sarraf https://www.researchut.com/taxonomy/term/2 RESEARCHUT - Debian-Blog

Testing Ansible Playbooks With Vagrant

Planet Debian - Thu, 28/12/2017 - 12:00am

I use Ansible to automate the deployments of my websites (LinuxJobs.fr, Journal du hacker) and my applications (Feed2toot, Feed2tweet). I’ll describe in this blog post my setup in order to test my Ansible Playbooks locally on my laptop.

Why testing the Ansible Playbooks

I need a simple and a fast way to test the deployments of my Ansible Playbooks locally on my laptop, especially at the beginning of writing a new Playbook, because deploying directly on the production server is both reeeeally slow… and risky for my services in production.

Instead of deploying on a remote server, I'll deploy my Playbooks in a VirtualBox VM using Vagrant. This allows me to quickly see the result of a new modification, iterating and fixing as fast as possible.

Disclaimer: I am not a professional programmer. There might be better solutions; I'm only describing one way of testing Ansible Playbooks that I find both easy and efficient for my own use cases.

My process
  1. Begin writing the new Ansible Playbook
  2. Launch a fresh virtual machine (VM) and deploy the playbook on this VM using Vagrant
  3. Fix the issues, either in the playbook or in the application deployed by Ansible itself
  4. Relaunch the deployment on the VM
  5. If there are more errors, go back to step 3. Otherwise destroy the VM, recreate it and deploy again, to test one last time with a fresh install
  6. If no error remains, tag the version of your Ansible Playbook and you’re ready to deploy in production
What you need

First, you need VirtualBox. If you use the Debian distribution, this link describes how to install it, either from the Debian repositories or from upstream.

Second, you need Vagrant. Why Vagrant? Because it's a kind of middleware between your development environment and your virtual machine, allowing programmatically reproducible operations and easily linking your deployments to the virtual machine. Install it with the following command:

# apt install vagrant

Setting up Vagrant

Everything about Vagrant lies in the file Vagrantfile. Here is mine:

Vagrant.require_version ">= 2.0.0"

Vagrant.configure(1) do |config|
  config.vm.box = "debian/stretch64"
  config.vm.provision "shell", inline: "apt install --yes git python3-pip"
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "site.yml"
    ansible.vault_password_file = "vault_password_file"
  end
end

Debian, the best OS to operate your online services

  1. The 1st line defines what versions of Vagrant should execute your Vagrantfile.
  2. In the first loop of the file, you can define the following operations for as many virtual machines as you wish (here just 1).
  3. The 3rd line defines the official Vagrant image we’ll use for the virtual machine.
  4. The 4th line is really important: those are the apps missing from the VM. Here we install git and python3-pip with apt.
  5. The next line indicates the start of the Ansible configuration.
  6. On the 6th line, we want a verbose output of Ansible.
  7. On the 7th line, we define the entry point of your Ansible Playbook (a minimal example is sketched after this list).
  8. On the 8th line, if you use Ansible Vault to encrypt some files, just define here the file with your Ansible Vault passphrase.
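
Point 7 above refers to site.yml, the playbook entry point; just for illustration, a minimal hypothetical one (the role name is made up) could be as small as:

# site.yml -- hypothetical minimal entry point
- hosts: all
  become: true
  roles:
    - myapp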

When Vagrant launches Ansible, it’s going to launch something like:

$ ansible-playbook --inventory-file=/home/me/ansible/test-ansible-playbook/.vagrant/provisioners/ansible/inventory -v --vault-password-file=vault_password_file site.yml

Executing Vagrant

After writing your Vagrantfile, you need to launch your VM. It’s as simple as using the following command:

$ vagrant up

That’s a slow operation, because the VM will be launched, the additionnal apps you defined in the Vagrantfile will be installed and finally your Playbook will be deployed on it. You should sparsely use it.

Ok, now we’re really ready to iterate fast. Between your different modifications, in order to test your deployments fast and on a regular basis, just use the following command:

$ vagrant provision

Once your Ansible Playbook is finally ready, usually after lots of iterations (at least that’s my case), you should test it on a fresh install, because your different iterations may have modified your virtual machine and could trigger unexpected results.

In order to test it from a fresh install, use the following command:

$ vagrant destroy && vagrant up

That’s again a slow operation. You should use it when you’re pretty sure your Ansible Playbook is almost finished. After testing your deployment on a fresh VM, you’re now ready to deploy in production.Or at least better prepared :p

Possible improvements? Let me know

I find the setup described in this blog post quite useful for my use cases. I can iterate quite fast especially when I begin writing a new playbook, not only on the playbook but sometimes on my own latest apps, not yet ready to be deployed in production. Deploying on a remote server would be both slow and dangerous for my services in production.

I could use a continuous integration (CI) server, but that's not the topic of this blog post. As said before, the goal is to iterate as fast as possible at the beginning of writing a new Ansible Playbook.

Gitlab, offering Continuous Integration and Continuous Deployment services

Committing, pushing to your Git repository and waiting for the execution of your CI tests is overkill at the beginning of your Ansible Playbook, when it's full of errors waiting to be debugged one by one. I think CI is more useful later in the life of an Ansible Playbook, especially when different people work on it and you have a set of code quality rules to enforce. That's only my opinion and it's open to discussion; once more, I'm not a professional programmer.

If you have better solutions to test Ansible Playbooks, or ways to improve the one described here, let me know by writing a comment or by contacting me through my accounts on the social networks below; I’ll be delighted to hear about your improvements.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.fr, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks


Carl Chenet https://carlchenet.com debian – Carl Chenet's Blog

Translating my website to Finnish

Planet Debian - Mër, 27/12/2017 - 11:00md

I've now been living in Finland for two years, and I'm pondering a small project to translate my main website into Finnish.

Obviously if my content is solely Finnish it will become of little interest to the world - if my vanity lets me even pretend it is useful at the moment!

The traditional way to do this, with Apache, is to render pages in multiple languages and let the client(s) request their preferred version with Accept-Language:. It seems that many clients are terrible at this, though, and the whole approach is a bit of a mess. Pretending it works, we render pages such as:

index.html
index.en.html
index.fi.html

Then "magic happens", such that the right content is served. I can then do extra-things, like add links to "English" or "Finnish" in the header/footers to let users choose.

Unfortunately I have an immediate problem! I host a bunch of websites on a single machine and I don't want a compromise of a single site to affect the other sites. To do that I run each website under its own Unix user. For example I have the website "steve.fi" running as the "s-fi" user, and my blogs running as "s-blog" and "s-blogfi":

root@www ~ # psx -ef | egrep '(s-blog|s-fi)'
s-blogfi /usr/sbin/lighttpd -f /srv/blog.steve.fi/lighttpd.conf -D
s-blog   /usr/sbin/lighttpd -f /srv/blog.steve.org.uk/lighttpd.conf -D
s-fi     /usr/sbin/lighttpd -f /srv/steve.fi/lighttpd.conf -D

There you can see the Unix user, and the per-user instance of lighttpd which hosts the website. Each instance binds to a high-port on localhost, and I have a reverse proxy listening on the public IP address to route incoming connections to the appropriate back-end instance.
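
A rough way to see that layout from the listening-socket side - assuming iproute2's ss is available on the host - is something like the following, which should show the per-user instances on high localhost ports alongside the proxy on the public ports:

root@www ~ # ss -tlnp | egrep '(lighttpd|:80 |:443 )'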

I used to use thttpd but switched to lighttpd to allow CGI scripts to be used - some of my sites are slightly/mostly dynamic.

Unfortunately lighttpd doesn't support multiviews without some Lua hacks, which would require rewriting - the supplied example only handles Accept rather than the Accept-Language header I want.

It seems my simplest solution is to switch from having lighttpd on the back-end to running apache2 instead, but I've not yet decided which way to jump.

Food for thought, anyway.

hyvää joulua! (Merry Christmas!)

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Debian-Med bug squashing

Planet Debian - Mar, 26/12/2017 - 12:16md

The Debian Med Advent Calendar was again really successful this year. As announced on the mailing list, the second highest number of bugs in its history was closed during this year's bug squashing:

year    number of bugs closed
2011    63
2012    28
2013    73
2014    5
2015    150
2016    95
2017    105

Well done everybody who participated!

alteholz http://blog.alteholz.eu blog.alteholz.eu » planetdebian

Dockerizing Compiled Software

Planet Debian - Mar, 26/12/2017 - 8:00pd

I recently went through a stint of closing a huge number of issues in the docker-library/php repository, and one of the oldest (and longest) discussions was related to installing dependencies for compiling extensions. I wrote a semi-long comment there explaining how I do this in a general way for any software I wish to Dockerize.

I’m going to copy most of that comment here and perhaps expand a little bit more in order to have a better/cleaner place to link to!

The first step I take is to write the naïve version of the Dockerfile: download the source, run ./configure && make etc, clean up. I then try building my naïve creation, and in doing so hope for an error message. (yes, really!)

The error message will usually take the form of something like error: could not find "xyz.h" or error: libxyz development headers not found.

If I’m building in Debian, I’ll hit https://packages.debian.org/file:xyz.h (replacing “xyz.h” with the name of the header file from the error message), or even just Google something like “xyz.h debian”, to figure out the name of the package I require.
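
If you prefer to stay on the command line, apt-file can do the same lookup locally (it needs to be installed and its cache updated first); "xyz.h" is the same placeholder header name as above:

$ apt-get install --yes apt-file && apt-file update
$ apt-file search xyz.h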

If I’m building in Alpine, I’ll use https://pkgs.alpinelinux.org/contents to perform a similar search.

The same works to some extent for “libxyz development headers”, but in my experience Google works better for those since different distributions and projects will call these development packages by different names, so sometimes it’s a little harder to figure out exactly which one is the “right” one to install.

Once I’ve got a package name, I add that package name to my Dockerfile, rinse, and repeat. Eventually, this usually leads to a successful build. Occasionally I find that some library either isn’t in Debian or Alpine, or isn’t new enough, and I’ve also got to build it from source, but those instances are rare in my own experience – YMMV.

I’ll also often check the source for the Debian (via https://sources.debian.org) or Alpine (via https://git.alpinelinux.org/cgit/aports/tree) package of the software I’m looking to compile, especially paying attention to Build-Depends (ala php7.0=7.0.26-1’s debian/control file) and/or makedepends (ala php7’s APKBUILD file) for package name clues.
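
On a Debian system with deb-src entries enabled, the same Build-Depends information can also be pulled straight from apt without browsing sources.debian.org - php7.0 here is just the example package from above:

$ apt-cache showsrc php7.0 | grep ^Build-Depends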

Personally, I find this sort of detective work interesting and rewarding, but I realize I’m probably a bit of a unique creature. Another good technique I use occasionally is to determine whether anyone else has already Dockerized the thing I’m trying to, so I can simply learn directly from their Dockerfile which packages I’ll need to install.

For the specific case of PHP extensions, there’s almost always someone who’s already figured out what’s necessary for this or that module, and all I have to do is some light detective work to find them.

Anyways, that’s my method! Hope it’s helpful, and happy hunting!

Tianon Gravi admwiggin@gmail.com Tianon's Ramblings

Salsa batch import

Planet Debian - Hën, 25/12/2017 - 4:43md

Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done automated. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:

$ cat salsa-import
#!/bin/sh
set -eux
PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_NAMESPACE="2" # 2 is "debian"
SALSA_TOKEN="yourcryptictokenhere"
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"

This will create the GitLab project in the chosen namespace, and import the repository from Alioth.

To get the namespace id, use something like:

curl https://salsa.debian.org/api/v4/groups | jq . | less
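
If you only need the id of a single group, jq can filter it out directly - here assuming the public "debian" group used in the script above:

curl -s https://salsa.debian.org/api/v4/groups | jq '.[] | select(.path == "debian") | .id'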

Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:

for f in *.git; do sh salsa-import $f; done

Christoph Berg http://www.df7cb.de/blog/tag/debian.html Myon's Debian Blog

Kitten Block equivalent for Firefox 57

Planet Debian - Mar, 19/12/2017 - 1:00pd

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

Colin Watson https://www.chiark.greenend.org.uk/~cjwatson/blog/ Colin Watson's blog

littler 0.3.3

Planet Debian - Dje, 17/12/2017 - 5:37md

The fourth release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. Last but not least it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.
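
The classic piping example (assuming the r front-end is installed, e.g. from the Debian littler package) looks like this:

echo 'cat(pi^2, "\n")' | r

Scripts can likewise start with a #!/usr/bin/env r shebang, as the cow.r and c4c.r examples mentioned below do.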

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few examples are highlighted at the GitHub repo.

This release brings a few new example scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)
  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box
