Planet Debian

How I stopped merging broken code

Tue, 03/07/2018 - 9:22 AM

It's been a while since I moved all my projects to GitHub. It's convenient to host Git projects, and the collaboration workflow is smooth.

I love pull requests to merge code. I review them, I send them, I merge them. The fact that you can plug them into a continuous integration system is great and makes sure that you don't merge code that will break your software. I usually have Travis-CI set up to run my unit tests and code style checks.

The problem with the GitHub workflow is that it allows merging untested code.

What?

Yes, it does. If you think that your pull requests, all green decorated, are ready to be merged, you're wrong.

This might not be as good as you think

You see, pull requests on GitHub are marked as valid as soon as the continuous integration system passes and indicates that everything is valid. However, if the target branch (let's say, master) is updated while the pull request is open, nothing forces that pull request to be retested against the new master branch. You think that the code in this pull request works, while that may no longer be true.

Master has moved on; the pull request is out of date, though it's still marked as passing integration.

So it might be that what went into your master branch now breaks this not-yet-merged pull request. You have no clue. You trust GitHub, press that green merge button, and break your software: the tests that passed against the old master may well fail against the new one.

If the pull request has not been updated with the latest version of its target branch, it might break your integration.
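One way to see the problem (and to guard against it by hand) is to re-run the tests on a local merge of the pull request into the current master before pressing the button. A minimal sketch, where the branch name and the test command are placeholders for whatever your project uses:

$ git fetch origin
$ git checkout -B merge-check origin/master
$ git merge --no-ff my-pull-request-branch
$ ./run-tests.sh

If that merge fails to build or test, the green button was lying to you.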

The good news is that's something that's solvable with the strict workflow that Mergify provides. There's a nice explanation and example in Mergify's blog post You are merging untested code that I advise you to read. What Mergify provides here is a way to serialize the merge of pull requests while making sure that they are always updated with the latest version of their target branch. It makes sure that there's no way to merge broken code.

That's a workflow I've now adopted and automated on all my repositories, and we've been using such a workflow for Gnocchi for more than a year, with great success. Once you start using it, it becomes impossible to go back!

Julien Danjou https://julien.danjou.info/ Julien Danjou

Towards Debian Unstable builds on Debian Stable OBS

Tue, 03/07/2018 - 2:32 AM

This is the sixth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian Unstable builds on OBS

Lately, I have been working towards triggering Debian Unstable builds with Debian OBS packages. As reported before, we can already build packages for both Debian 8 and 9 based on the example project configurations shipped with the package in Debian Stable and on the project configuration files publicly available on the OBS SUSE instance.

While trying to build packages against Debian Unstable, I have been hitting the following issue:

The OBS scheduler reads the project configuration and starts downloading dependencies. The dependencies get downloaded, but the build is never dispatched (the package stays in a “blocked” state). The downloaded dependencies then get cleaned up and the scheduler starts the downloads again: OBS enters an infinite loop there.

This only happens for builds on sid (unstable) and buster (testing).

We realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support Debian source packages built with dpkg >= 1.19. At first I started applying this patch to the OBS Debian package, but after reporting the issue to the Debian OBS maintainers, they pointed me to the obs-build package in the Debian stable backports repository, which already included the mentioned patch.
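For the record, pulling that package in looked roughly like the following; this is a hedged sketch, since the exact sources.list layout depends on the machine:

$ echo 'deb http://deb.debian.org/debian stretch-backports main' | sudo tee /etc/apt/sources.list.d/stretch-backports.list
$ sudo apt update
$ sudo apt install -t stretch-backports obs-build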

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we still get the same issue with unstable and testing builds: the scheduler downloads the dependencies, hangs for a while, but the build is never dispatched (the package stays in a “blocked” state). After a while, the dependencies get cleaned up and the scheduler starts the downloads again.

The bug has been quite hard to debug since the OBS logs do not provide feedback on the problem we have been facing. To debug it, we tried to trigger local builds with osc. First, I (successfully) triggered a few local builds against Debian 8 and 9 to make sure the command would work. Then we proceeded to trigger builds against Debian Unstable.

The first issue we faced was that the osc package in Debian stable cannot handle builds of source packages built with newer dpkg versions. We fixed that by patching osc/util/debquery.py (we just substituted the file with the latest version from osc's development tree). After applying the patch, we got the same results we'd get when trying to build the package remotely, but with debug flags on we could get a better understanding of the problem:

BuildService API error: CPIO archive is incomplete (see .errors file)

The .errors file would just contain a list of dependencies which were missing in the CPIO archive.

If we kept retrying, OBS would keep caching more and more dependencies, until the build succeeded at some point.

We now know that the issue lies with the Download on Demand feature.

We then tried a local build in a fresh OBS instance (no cached packages) using the --disable-cpio-bulk-download osc build option, which makes OBS download each dependency individually instead of in bulk. To our surprise, the build succeeded on our first attempt.
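For illustration, such a local build can be invoked roughly as follows; the repository name, architecture and package are placeholders, not values taken from our setup:

$ osc build --disable-cpio-bulk-download Debian_Unstable x86_64 mypackage.dsc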

Finally, we traced the issue all the way down to the OBS API call which is triggered when OBS needs to download missing dependencies. For some reason, the number of parameters (the number of dependencies to be downloaded) affects the final response of the API call. When trying to download too many packages, the CPIO archive is not built correctly and OBS builds fail.

At the moment, we are still investigating why such calls fail with too many parameters and why this only happens for the Debian Testing and Unstable repositories.

Next steps (A TODO list to keep on the radar)
  • Fix OBS builds on Debian Testing and Unstable
  • Write patch for Debian osc’s debquery.py so it can build Debian packages with control.tar.xz
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)
Athos Ribeiro https://athoscr.me/categories/debian/ Debian on /home/athos

Reproducible Builds: Weekly report #166

Mon, 02/07/2018 - 11:16 PM

Here’s what happened in the Reproducible Builds effort between Sunday June 24 and Saturday June 30 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope versions 97 and 98 were uploaded to Debian unstable by Chris Lamb. They included contributions already covered in previous weeks as well as new ones from:

Chris Lamb also updated the SSL certificate for try.diffoscope.org.

Authors

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Reproducible builds folks https://reproducible-builds.org/blog/ reproducible-builds.org

80bit x87 FPU

Mon, 02/07/2018 - 10:24 PM

Once again, I got surprised by the 80 bit x87 FPU stuff.

First time was around a decade ago. Back then, it was something along the lines of a sort function like:

struct ValueSorter {
    bool operator()(const Value& first, const Value& second) const {
        double valueFirst = first.amount() * first.value();
        double valueSecond = second.amount() * second.value();
        return valueFirst < valueSecond;
    }
};

With some values, first would compare smaller than second, and second smaller than first, all depending on which one got truncated to 64 bits and which one came directly from the 80-bit FPU.

This time, the 80-bit version, when cast to an integer, was 1 smaller than the 64-bit version.

Oh, the joys of x86 CPUs.

Sune Vuorela http://pusling.com/blog english – Blog :: Sune Vuorela

Another golang port, this time a toy virtual machine.

Mon, 02/07/2018 - 11:01 AM

I don't understand why my toy virtual machine has as much interest as it does. It is a simple project that compiles "assembly language" into a series of bytecodes, and then allows them to be executed.

Since I recently started messing around with interpreters more generally I figured I should revisit it. Initially I rewrote the part that interprets the bytecodes in golang, which was simple, but now I've rewritten the compiler too.

Much like the previous work with interpreters this uses a lexer and an evaluator to handle things properly - in the original implementation the compiler used a series of regular expressions to parse the input files. Oops.

Anyway the end result is that I can compile a source file to bytecodes, execute bytecodes, or do both at once:

I made a couple of minor tweaks in the port, because I wanted extra facilities. Rather than implement an opcode "STRING_LENGTH" I copied the idea of traps - so a program can call-back to the interpreter to run some operations:

int 0x00 -> Set register 0 with the length of the string in register 0.
int 0x01 -> Set register 0 with the contents of reading a string from the user.

etc.

This notion of traps should allow complex operations to be implemented easily, in golang. I don't think I have the patience to do too much more, but it stands as a simple example of a "compiler" or an interpreter.

I think this program is the longest I've written. Remember how verbose assembly language is?

Otherwise: Helsinki Pride happened, Oiva learned to say his name (maybe?), and I finished reading all the James Bond novels (which were very different to the films, and have aged badly on the whole).

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog

Modern OpenGL

Mon, 02/07/2018 - 10:24 AM

New project, new version of OpenGL—4.5 will be my hard minimum this time. Sorry, macOS, you brought this on yourself.

First impressions: Direct state access makes things a bit less soul-sucking. Immutable textures are not really a problem when you design for them to begin with, as opposed to retrofitting them. But you still need ~150 lines of code to compile a shader and render a fullscreen quad to another texture. :-/ VAOs, you are not my friend.

Next time, maybe Vulkan? Except the amount of stuff to get that first quad on screen seems even worse there.

Steinar H. Gunderson http://blog.sesse.net/ Steinar H. Gunderson

nanotime 0.2.1

Mon, 02/07/2018 - 12:45 AM

A new minor version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release brings three different enhancements / fixes that robustify usage. No new features were added.

Changes in version 0.2.1 (2018-07-01)
  • Added attribute-preserving comparison (Leonardo in #33).

  • Added two integer64 casts in constructors (Dirk in #36).

  • Added two checks for empty arguments (Dirk in #37).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel http://dirk.eddelbuettel.com/blog Thinking inside the box

It's been 10 years since I changed Company and Job.

Sun, 01/07/2018 - 12:15 PM
It's been 10 years since I changed Company and Job. If you ask me now, I think it was a successful move, but not without issues. I think it's a high-risk move to change company, job and location at the same time; you should change only one of them. I changed job, company and marital status at the same time, and that was too high risk.

Junichi Uekawa http://www.netfort.gr.jp/~dancer/diary/201807.html.en Dancer's daily hackings

Thoughts on the acquisition of GitHub by Microsoft

Thu, 28/06/2018 - 10:30 PM

Back at the start of 2010, I attended linux.conf.au in Wellington. One of the events I attended was sponsored by GitHub, who bought me beer in a fine Wellington bar (that was very proud of having an almost complete collection of BrewDog beers, including some Tactical Nuclear Penguin). I proceeded to tell them that I really didn’t understand their business model and that one of the great things about git was the very fact it was decentralised and we didn’t need to host things in one place any more. I don’t think they were offended, and the announcement Microsoft are acquiring GitHub for $7.5 billion proves that they had a much better idea about this stuff than me.

The acquisition announcement seems to have caused an exodus. GitLab reported over 13,000 projects being migrated in a single hour. IRC and Twitter were full of people throwing up their hands and saying it was terrible. Why is this? The fear factor seemed to come from who was doing the acquiring: Microsoft. The big, bad "Linux is a cancer" folk. I saw a similar, though more muted, reaction when LinkedIn were acquired.

This extremely negative reaction to Microsoft seems bizarre to me these days. I’m well aware of their past, and their anti-competitive practises (dating back to MS-DOS vs DR-DOS). I’ve no doubt their current embrace of Free Software is ultimately driven by business decisions rather than a sudden fit of altruism. But I do think their current behaviour is something we could never have foreseen 15+ years ago. Did you ever think Microsoft would be a contributor to the Linux kernel? Is it fair to maintain such animosity? Not for me to say, I guess, but I think that some of it is that both GitHub and LinkedIn were services that people were already uneasy about using, and the acquisition was the straw that broke the camel’s back.

What are the issues with GitHub? I previously wrote about the GitHub TOS changes, stating I didn’t think it was necessary to fear the TOS changes, but that the centralised nature of the service was potentially something to be wary of. joeyh talked about this as long ago as 2011, discussing the aspects of the service other than the source code hosting that were only API accessible, or in some other way more restricted than a git clone away. It’s fair criticism; the extra features offered by GitHub are very much tied to their service. And yet I don’t recall the same complaints about SourceForge, long the home of choice for Free Software projects. Its problems seem to be more around a dated interface, being slow to enable distributed VCSes and the addition of advertising. People left because there were much better options, not because of ideological differences.

Let’s look at the advantages GitHub had (and still has) to offer. I held off on setting up a GitHub account for a long time. I didn’t see the need; I self-hosted my Git repositories. I had the ability to setup mailing lists if I needed them (and my projects generally aren’t popular enough that they did). But I succumbed in 2015. Why? I think it was probably as part of helping to run an OpenHatch workshop, trying to get people involved in Free software. That may sound ironic, but helping out with those workshops helped show me the benefit of the workflow GitHub offers. The whole fork / branch / work / submit a pull request approach really helps lower the barrier to entry for people getting started out. Suddenly fixing an annoying spelling mistake isn’t a huge thing; it’s easy to work in your own private playground and then make that work available to upstream and to anyone else who might be interested.

For small projects without active mailing lists that’s huge. Even for big projects that can be a huge win. And it’s not just useful to new contributors. It lowers the barrier for me to be a patch ‘n run contributor. Now that’s not necessarily appealing to some projects, because they’d rather get community involvement. And I get that, but I just don’t have the time to be active in all the projects I feel I can offer something to. Part of that ease is the power of git, the fact that a clone is a first class repo, capable of standing alone or being merged back into the parent. But another part is the interface GitHub created, and they should get some credit for that. It’s one of those things that once you’re presented with it it makes sense, but no one had done it quite as slickly up to that point. Submissions via mailing lists are much more likely to get lost in the archives compared to being able to see a list of all outstanding pull requests on GitHub, and the associated discussion. And subscribe only to that discussion rather than everything.

GitHub also seemed to appear at the right time. It, like SourceForge, enabled easy discovery of projects. Crucially it did this at a point when web frameworks were taking off and a whole range of developers who had not previously pulled large chunks of code from other projects were suddenly doing so. And writing frameworks or plugins themselves and feeling in the mood to share them. GitHub has somehow managed to hit critical mass such that lots of code that I’m sure would otherwise never have seen the light of day is available to all. Perhaps the key was that repos were lightweight setups under usernames, unlike the heavier SourceForge approach of needing a complete project setup per codebase you wanted to push. Although it’s not my primary platform I engage with GitHub for my own code because the barrier is low; it’s a couple of clicks on the website and then I just push to it like my other remote repos.

I seem to be coming across as a bit of a GitHub apologist here, which isn’t my intention. I just think the knee-jerk anti GitHub reaction has been fascinating to observe. I signed up to GitLab around the same time as GitHub, but I’m not under any illusions that their hosted service is significantly different from GitHub in terms of having my data hosted by a third party. Nothing that’s up on either site is only up there, and everything that is is publicly available anyway. I understand that as third parties they can change my access at any point in time, and so I haven’t built any infrastructure that assumes their continued existence. That said, why would I not take advantage of their facilities when they happen to be of use to me?

I don’t expect my use of GitHub to significantly change now they’ve been acquired.

Jonathan McDowell https://www.earth.li/~noodles/blog/ Noodles' Emptiness

Dynamic inventories for Ansible using Python

Thu, 28/06/2018 - 10:22 PM

Ansible not only accepts static machine inventories represented in an inventory file, it can also leverage dynamic inventories. To use that mechanism, all that is needed is a program or script which creates the machines needed for a certain project and returns their addresses as a JSON object representing an inventory, just like an inventory file does. This makes it possible to create specially crafted tools to set up the number of cloud machines needed for an Ansible project, and the mechanism is in principle open to any programming language. Instead of selecting an inventory file with the -i option of e.g. ansible-playbook, just give the name of the program you’ve set up, and Ansible executes it and evaluates the inventory it returns.

Here’s a little example of a dynamic inventory for Ansible written in Python. The script uses the python-digitalocean library in Debian (https://github.com/koalalorenzo/python-digitalocean) to launch a couple of DigitalOcean droplets for a particular Ansible project:

#!/usr/bin/env python
import os
import sys
import json
import digitalocean
import ConfigParser

config = ConfigParser.ConfigParser()
config.read(os.path.dirname(os.path.realpath(__file__)) + '/inventory.cfg')
nom = config.get('digitalocean', 'number_of_machines')
keyid = config.get('digitalocean', 'key-id')

try:
    token = os.environ['DO_ACCESS_TOKEN']
except KeyError:
    token = config.get('digitalocean', 'access-token')

manager = digitalocean.Manager(token=token)

def get_droplets():
    droplets = manager.get_all_droplets(tag_name='ansible-demo')
    if not droplets:
        return False
    elif len(droplets) != 0 and len(droplets) != int(nom):
        print "The number of already set up 'ansible-demo' differs"
        sys.exit(1)
    elif len(droplets) == int(nom):
        return droplets

key = manager.get_ssh_key(keyid)
tag = digitalocean.Tag(token=token, name='ansible-demo')
tag.create()

def create_droplet(name):
    droplet = digitalocean.Droplet(token=token, name=name, region='fra1',
                                   image='debian-8-x64', size_slug='512mb',
                                   ssh_keys=[key])
    droplet.create()
    tag.add_droplets(droplet.id)
    return True

if get_droplets() is False:
    for node in range(int(nom))[1:]:
        create_droplet(name='wordpress-node' + str(node))
    create_droplet('load-balancer')

droplets = get_droplets()

inventory = {}
hosts = {}
machines = []
for droplet in droplets:
    if 'load-balancer' in droplet.name:
        machines.append(droplet.ip_address)
hosts['hosts'] = machines
inventory['load-balancer'] = hosts

hosts = {}
machines = []
for droplet in droplets:
    if 'wordpress' in droplet.name:
        machines.append(droplet.ip_address)
hosts['hosts'] = machines
inventory['wordpress-nodes'] = hosts

print json.dumps(inventory)

It’s a simple script to demonstrate how you can craft something for your own needs to leverage dynamic inventories for Ansible. The droplet parameters, like the size (512mb), the image (debian-8-x64) and the region (fra1), are hard-coded and can easily be changed if wanted. Other required values, namely the total number of machines, the access token for the DigitalOcean API and the ID of the public SSH key which is going to be applied to the virtual machines, are read from a simple configuration file (inventory.cfg):

[digitalocean]
access-token = 09c43afcbdf4788c611d5a02b5397e5b37bc54c04371851
number_of_machines = 4
key-id = 21699531

The script can of course be executed independently of Ansible. The first time you execute it, it creates the wanted number of machines (always consisting of one load-balancer node and, given the total number of machines, which is four, three wordpress-nodes), and returns the IP addresses of the newly created machines, grouped accordingly:

$ ./inventory.py {"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

Any subsequent execution of the script recognizes that the wanted machines have already been created, and just returns the same inventory one more time:

$ ./inventory.py {"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

If you then delete the droplets and run the script again, a new set of machines gets created:

$ for i in $(doctl compute droplet list | awk '/ansible-demo/{print $(1)}'); do doctl compute droplet delete $i; done
$ ./inventory.py
{"wordpress-nodes": {"hosts": ["46.101.115.214", "165.227.138.66", "165.227.153.207"]}, "load-balancer": {"hosts": ["138.68.85.93"]}}

As you can see, the JSON object [1] which is returned represents an Ansible inventory; represented in a file, the same inventory would have this form:

[load-balancer]
138.68.85.93

[wordpress-nodes]
46.101.115.214
165.227.138.66
165.227.153.207

As said, you can use this “one-trick pony” Python script instead of an inventory file: just give its name, and the Ansible CLI tool runs it and works on the inventory it returns:

$ ansible wordpress-nodes -i ./inventory.py -m ping -u root --private-key=~/.ssh/id_digitalocean
165.227.153.207 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
46.101.115.214 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
165.227.138.66 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Note: the script doesn’t yet support a waiter mechanism but completes as soon as IP addresses are available. It can always take a little while until the newly created machines are fully provisioned, booted, and accessible via SSH, so there may be errors about hosts not being reachable. In that case, just wait a few seconds and run the Ansible command again.
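Until such a waiter exists, a crude one can be scripted around the inventory itself; a minimal sketch (the playbook name is a placeholder):

$ until ansible all -i ./inventory.py -m ping -u root --private-key=~/.ssh/id_digitalocean > /dev/null 2>&1; do sleep 5; done
$ ansible-playbook -i ./inventory.py site.yml -u root --private-key=~/.ssh/id_digitalocean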

  1. For the exact structure of the JSON object I’m drawing from: https://gist.github.com/jtyr/5213fabf2bcb943efc82f00959b91163
Daniel Stender http://www.danielstender.com/categories/debian/index.xml Debian on Tickets'n'patches

Abuse of childhood

Thu, 28/06/2018 - 8:11 PM

The blog post is in homage to any abuse victims and more directly to parents and children being separated by policies formed by a Government whose chief is supposed to be ‘The leader of the free world’. I sat on the blog post for almost a week even though I got it proof-read by two women, Miss S and Miss K to see if there is or was anything wrongful about the post. Both the women gave me their blessings as it’s something to be shared.

I am writing this blog post writing from my house in a safe environment, having chai (tea), listening to some of my favorite songs, far from trauma some children are going through.

I have been disturbed by the news of families and especially young children being separated from their own families because of state policy. I was pretty hesitant to write this post as we are told to only share our strengths and not our weaknesses or traumas of the past. I partly want to share so people who might be on the fence of whether separating families is a good idea or not might have something more to ponder over. The blog post is not limited to the ongoing and proposed U.S. Policy called Separations but all and any situations involving young children and abuse.

The first experience was when my cousin sister and her family came to visit me and mum. We often do not get relatives or entertain them due to water shortage issues. It’s such a common issue all over India that nobody bats an eye over it; we will probably talk about it in some other blog post if need be.

The sister who came, she has two daughters. The older one knew me and mum and knew that both of us have a penchant for pulling legs but at the same time like to spoil Didi and her. All of us are foodies so we have a grand time. The younger one, though, didn't know us and we were unknown to her. In playfulness, we said we would keep the bigger sister with us and she was afraid. She clung to her sister like anything. We tried to pacify her, but she wasn't free with us till the time she was safely tucked in with her sister in the family car along with her mum and dad.

While this is a small incident, it triggered a memory kept hidden over 35+ years back. I was perhaps 4-5 years old. I was being brought up by a separated working mum who had a typical government 9-5 job. My grandparents (mother’s side) used to try and run the household in her absence, my grandmother doing all household chores, my grandfather helping here and there, while all outside responsibilities were his.

In this, there was a task to put me in school. Mum probably talked to some of her colleagues or somebody or the other suggested St. Francis, a Catholic missionary school named after one of the many saints named Saint Francis. It is and was a school nearby. There was a young man who used to do odd jobs around the house and was trusted by all, who was a fan of Amitabh Bachchan and who is/was responsible for my love of first-day first shows of his movies. A genuinely nice elderly brother kind of person with whom I have had lot of beautiful memories of childhood.

Anyways, his job was to transport me to and from the school, which he did without fail. The trouble started for me in school, I do not know the reason till date, maybe I was a bawler or whatever, I was kept in a dark, dank toilet for a year (minus the holidays). The first time I went to the dark, foreboding place, I probably shat and vomited, for which I was beaten quite a bit. I learnt that if I were sent to the dark room, I had to put my knickers somewhere up top where they wouldn’t get dirty so I would not get beaten. Sometimes I was also made to clean my vomit or shit, which made the whole thing worse. I would be sent to the room regularly and sometimes beaten. The only nice remembrance I had were the last hour before school used to be over, as I was taken out of the toilet, made presentable and was made to sit near the window-sill from where I could see trains running by. I dunno whether it was just the smell of free, fresh air plus seeing trains and freedom got somehow mixed and a train-lover was born.

I don’t know why I didn’t ever tell my mum or anybody else about the abuse happening with me. Most probably because the teacher may have threatened me with something or the other. Somehow the year ended and I was failed. The only thing mother and my grandparents probably saw and felt was that I had grown a bit thinner.

Either due to mother’s intuition or because I had failed I was made to change schools. While I was terrified of the change because I thought there was something wrong with me and things will be worse, it was actually the opposite. While corporal punishment was still the norm, there wasn’t any abuse unlike in the school before. In the eleven years I spent in the school, there was only one time that I was given toilet duty and that too because I had done something naughty like pulling a girl’s hair or something like that, and it was one or two students next to me. Rather than clean the toilets we ended up playing with water.

I told part of my experience to mum about a year, year and a half after I was in the new school, half-expecting something untoward to happen as the teacher had said. The only thing I remember from that conversation was shock registering on her face. I didn’t tell her about the vomit and shit part as I was embarrassed about it. I had nightmares about it till I was in my teens, when with treks and everything I understood that even darkness can be a friend, just like light is.

For the next 13 odd years till I asked her to stop checking on me, she used to come to school every few months, talk to teachers and talk with class-mates. The same happened in college till I asked her to stop checking as I used to feel embarrassed when other class-mates gossiped.

It was only years after when I began working I understood what she was doing all along. She was just making sure I was ok.

The fact that it took me 30+ years to share this story/experience with the world at large also tells that somewhere I still feel a bit scarred, on the soul.

If you are feeling any sympathy or empathy towards me, while I’m thankful for it, it would be much better served directed towards those who are in a precarious, vulnerable situation like I was. It doesn’t matter what politics you believe in or peddle in, separating children from their parents is immoral for any being, let alone a human being. Even in the animal world, we see how predators only attack those young whose fathers and mothers are not around to protect them.

As in any story/experience/tale there are lessons or takeaways that I hope most parents teach their young ones, especially Indian or Asiatic parents at large –

1. The rule of ‘Respect all elders and obey them no matter what’ should be changed to ‘Respect everybody including yourself’, and taught by parents to their children. This will boost their self-confidence a bit and also help them share any issues that happen to them.

2. If somebody threatens you or threatens family immediately inform us (i.e. the parents).

3. The third one is perhaps the most difficult ‘telling the truth without worrying about consequences’. In Indian families we learn about ‘secrets’ and ‘modifying truth’ from our parents and elders. That needs to somehow change.

4. Few years ago, Aamir Khan (a film actor) with people specializing in working with children talked and shared about ‘Good touch, bad touch’ as a prevention method, maybe somebody could also do something similar for such kinds of violence.

At the end I recently came across an article and also Terminal.

shirishag75 https://flossexperiences.wordpress.com #planet-debian – Experiences in the community

My free software activities, June 2018

Thu, 28/06/2018 - 7:55 PM

It's been a while since I've done a report here! Since I need to do one for LTS, I figured I would also catch you up on the work I've done in the last three months. Maybe I'll make that my new process: quarterly reports would reduce the overhead on my side with little loss to you, my precious (few? many?) readers.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

I omitted doing a report in May because I didn't spend a significant number of hours, so this also covers a handful of hours of work in May.

May and June were strange months to work on LTS, as we made the transition between wheezy and jessie. I worked on all three LTS releases now, and I must have been absent from the last transition because I felt this one was a little confusing to go through. Maybe it's because I was on frontdesk duty during that time...

For a week or two it was unclear if we should have worked on wheezy, jessie, or both, or even how to work on either. I documented which packages needed an update from wheezy to jessie and proposed a process for the transition period. This generated a good discussion, but I am not sure we resolved the problems we had this time around in the long term. I also sent patches to the security team in the hope they would land in jessie before it turns into LTS, but most of those ended up being postponed to LTS.

Most of my work this month was spent actually working on porting the Mercurial fixes from wheezy to jessie. Technically, the patches were ported from upstream 4.3 and led to some pretty interesting results in the test suite, which fails to build from source non-reproducibly. Because I couldn't figure out how to fix this in the allotted time, I uploaded the package to my usual test location in the hope someone else picks it up. The test package fixes 6 issues (CVE-2018-1000132, CVE-2017-9462, CVE-2017-17458 and three issues without a CVE).

I also worked on cups in a similar way, sending a test package to the security team for 2 issues (CVE-2017-18190, CVE-2017-18248). Same for Dokuwiki, where I sent a patch for a single issue (CVE-2017-18123). Those have yet to be published, however, and I will hopefully wrap that up in July.

Because I was looking for work, I ended up doing meta-work as well. I made a prototype that would use the embedded-code-copies file to populate data/CVE/list with related packages as a way to address a problem we have in LTS triage, where packages that were renamed between suites do not get correctly added to the tracker. It ended up being rejected because the changes were too invasive, but it led to Brian May suggesting another approach; we'll see where that goes.

I've also looked at splitting up that dreaded data/CVE/list but my results were negative: it looks like git is very efficient at splitting things up. While a split up list might be easier on editors, it would be a massive change and was eventually refused by the security team.

Other free software work

With my last report dating back to February, this will naturally be a little imprecise, as three months have passed. But let's see...

LWN

I wrote eight articles in the last three months, for an average of roughly three articles a month. I was aiming at an average of one or two a week, so I didn't reach my goal. My last article about Kubecon generated a lot of feedback, probably the best I have ever received. It seems I struck a chord with a lot of people, so that certainly feels nice.

Linkchecker

Usual maintenance work, but we finally got access to the Linkchecker organization on GitHub, which meant a bit of reorganizing. The only bit missing now is the PyPI namespace, but that should also come soon. The code of conduct and contribution guides were finally merged after we clarified project membership. This gives us issue templates which should help us deal with the constant flow of issues that come in every day.

The biggest concern I have with the project now is the C parser and the outdated Windows executable. The latter has been removed from the website so hopefully Windows users won't report old bugs (although that means we won't gain new Windows users at all) and the former might be fixed by a port to BeautifulSoup.

Email over SSH

I did a lot of work to switch away from SMTP and IMAP to synchronise my workstation and laptops with my mailserver. Having the privilege of running my own server has its perks: I have SSH access to my mail spool, which brings the opportunity for interesting optimizations.

The first one is called rsendmail. Inspired by work from Don Armstrong and David Bremner, rsendmail is a Python program I wrote from scratch to deliver email over a pipe, securely. I do not trust the sendmail command: its behavior can vary a lot between platforms (e.g. allow flushing the mailqueue or printing it) and I wanted to reduce the attack surface. It works with another program I wrote called sshsendmail which connects to it over a pipe. It integrates well into "dumb" MTAs like nullmailer, but I use it with the popular Postfix as well, without problems.

The second is to switch from OfflineIMAP to Syncmaildir (SMD). The latter allows synchronization over SSH only. The migration was a little difficult but I very much like the results: SMD is faster than OfflineIMAP and works transparently in the background.

I really like to use SSH for email. I used to have my email password stored all over the place: in my Postfix config, in my email clients' memory, it was a mess. With the new configuration, things just work unattended and email feels like a solved problem, at least the synchronization aspects of it.

Emacs

As often happens, I've done some work on my Emacs configuration. I switched to a new Solarized theme, the bbatsov version which has support for a light and dark mode and generally better colors. I had problems with the cursor which are unfortunately unfixed.

I learned about and used the Emacs iPython Notebook project (EIN) and filed a feature request to replicate the "restart and run" behavior of the web interface. Otherwise it's really nice to have a decent editor to work on Python notebooks, and I have used this to work on the terminal emulators series and the related source code.

I have also tried to complete my conversion to Magit, a pretty nice wrapper around git for Emacs. Some of my usual git shortcuts have good replacements, but not all. For example, those are equivalent:

  • vc-annotate (C-x C-v g): magit-blame
  • vc-diff (C-x C-v =): magit-diff-buffer-file

Those do not have a direct equivalent:

  • vc-next-action (C-x C-q, or F6): anarcat/magic-commit-buffer, see below
  • vc-git-grep (F8): no replacement

I wrote my own replacement for "diff and commit this file" as the following function:

(defun anarcat/magit-commit-buffer ()
  "commit the changes in the current buffer on the fly

This is different than `magit-commit' because it calls `git commit'
without going through the staging area AKA index first. This is a
replacement for `vc-next-action'.

Tip: setting the git configuration parameter `commit.verbose' to 2
will show the diff in the changelog buffer for review. See
`git-config(1)' for more information.

An alternative implementation was attempted with `magit-commit':

  (let ((magit-commit-ask-to-stage nil))
    (magit-commit (list \"commit\" \"--\" (file-relative-name buffer-file-name)))))

But it seems `magit-commit' asserts that we want to stage content and
will fail with: `(user-error \"Nothing staged\")'. This is why this
function calls `magit-run-git-with-editor' directly instead."
  (interactive)
  (magit-run-git-with-editor (list "commit" "--" (file-relative-name buffer-file-name))))

It's not very pretty, but it works... Mostly. Sometimes the magit-diff buffer becomes out of sync, but the --verbose output in the commitlog buffer still works.

I've also looked at git-annex integration. The magit-annex package did not work well for me: the file listing is really too slow. So I found the git-annex.el package, but did not try it out yet.

While working on all of this, I fell into a different rabbit hole: I found it inconvenient to "pastebin" stuff from Emacs, as it would involve selecting a region, piping to pastebinit and copy-pasting the URL found in the *Messages* buffer. So I wrote this first prototype:

(defun pastebinit (begin end)
  "pass the region to pastebinit and add output to killring

TODO: prompt for possible pastebins (pastebinit -l) with prefix arg

Note that there's a `nopaste.el' project which already does this,
which we should use instead."
  (interactive "r")
  (message "use nopaste.el instead")
  (let ((proc (make-process :filter #'pastebinit--handle
                            :command '("pastebinit")
                            :connection-type 'pipe
                            :buffer nil
                            :name "pastebinit")))
    (process-send-region proc begin end)
    (process-send-eof proc)))

(defun pastebinit--handle (proc string)
  "handle output from pastebinit asynchronously"
  (let ((url (car (split-string string))))
    (kill-new url)
    (message "paste uploaded and URL added to kill ring: %s" url)))

It was my first foray into asynchronous process operations in Emacs: difficult and confusing, but it mostly worked. Those who know me know what's coming next, however: I found not only one, but two libraries for pastebins in Emacs: nopaste and (after patching nopaste to add asynchronous support and customize support of course) debpaste.el. I'm not sure where that will go: there is a proposal to add nopaste in Debian that was discussed a while back and I made a detailed report there.

Monkeysign

I made a minor release of Monkeysign to cover for CVE-2018-12020 and its GPG sigspoof vulnerability. I am not sure where to take this project anymore, and I opened a discussion to possibly retire the project completely. Feedback welcome.

ikiwiki

I wrote a new ikiwiki plugin called bootstrap to fix table markup to match what the Bootstrap theme expects. This was particularly important for the previous blog post which uses tables a lot. This was surprisingly easy and might be useful to tweak other stuff in the theme.

Random stuff
  • I wrote up a review of security of APT packages when compared with the TUF project, in TufDerivedImprovements
  • contributed to about 20 different repositories on GitHub, too numerous to list here
Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

Protecting Software Updates

Thu, 28/06/2018 - 5:57 PM

In my work at the ACLU, we fight for civil rights and civil liberties. This includes the ability to communicate privately, free from surveillance or censorship, and to control your own information. These are principles that I think most free software developers would agree with. In that vein, we just released a guide to securing software update channels in collaboration with students from NYU Law School.

The guide focuses specifically on what people and organizations that distribute software can do to ensure that their software update processes and mechanisms are actually things that their users can reliably trust. The goal is to make these channels trustworthy, even in the face of attempts by government agencies to force software vendors to ship malware to their users.

Why software updates specifically? Every well-engineered system on today's Internet will have a software update mechanism, since there are inevitably bugs that need fixing, or new features added to improve the system for the users. But update channels also represent a risk: they are an unclosable hole that enables installation of arbitrary software, often at the deepest, most-privileged level of the machine. This makes them a tempting target for anyone who wants to force the user to run malware, whether that's a criminal organization, a corporate or political rival, or a government surveillance agency.

I'm pleased to say that Debian has already implemented many of the technical recommendations we describe, including leading the way on reproducible builds. But as individual developers we might also be targeted, as lamby points out, and it's worth thinking about how you'd defend your users from such a situation.

As an organization, it would be great to see Debian continue to expand its protections for its users by holding ourselves even more accountable in our software update mechanisms than we already do. In particular, I'd love to see work on binary transparency, similar to what Mozilla has been doing, but that ensures that the archive signing keys (which our users trust) can't be abused/misused/compromised without public exposure, and that allows for easy monitoring and investigation of what binaries we are actually publishing.

In addition to technical measures, if you think you might ever get a government request to compromise your users, please make sure you are in touch with a lawyer who has your back, who knows how to challenge requests in court, and who understands why software update channels should not be used for deliberately shipping malware. If you're facing such a situation, and you're in the USA and you don't have a lawyer yet yourself, you can reach out to the lawyers at my workplace, the ACLU's Speech, Privacy, and Technology Project, for help.

Protecting software update channels is the right thing for our users, and for free software -- Debian's priorities. So please take a look at the guidance, think about how it might affect you or the people that you work with, and start a conversation about what you can do to defend these systems that everyone is obliged to trust on today's communications.

Daniel Kahn Gillmor (dkg) https://dkg.fifthhorseman.net/blog/ dkg's blog

Debian Perl Sprint 2018

Wed, 27/06/2018 - 6:40 PM

Three members of the Debian Perl team met in Hamburg between May 16 and May 20 2018 as part of the Mini-DebConf Hamburg to continue perl development work for Buster and to work on QA tasks across our 3500+ packages.

The participants had a good time and met other Debian friends. The sprint was productive:

  • 21 bugs were filed or worked on, many uploads were accepted.
  • The transition to Perl 5.28 was prepared, and versioned provides were again worked on.
  • Several cleanup tasks were performed, especially around the move from Alioth to Salsa in documentation, website, and wiki.
  • For src:perl, autopkgtests were enabled, and work on Versioned Provides has been resumed.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the Mini-DebConf Hamburg organizers for providing the framework for our sprint, and all donors to the Debian project who helped to cover a large part of our expenses.

Dominic Hargreaves https://bits.debian.org/ Bits from Debian

debian cryptsetup sprint report

Wed, 27/06/2018 - 3:40 PM
Cryptsetup sprint report

The Cryptsetup team – consisting of Guilhem and Jonas – met on June 15 to 17 in order to work on the Debian cryptsetup packages. We ended up working three days (and nights) on the packages, refactored the whole initramfs integration, the SysVinit init scripts and the package build process, and discussed numerous potential improvements as well as new features. The whole sprint was great fun and we very much enjoyed sitting next to each other, being able to discuss design questions and implementation details in person instead of over clunky internet communication channels. Besides, we had very nice and interesting chats, contacted other Debian folks from the Frankfurt area and met with jfs on Friday evening.

Splitting cryptsetup into cryptsetup-run and cryptsetup-initramfs

First we split the cryptsetup initramfs integration into a separate package cryptsetup-initramfs. The package that contains other Debian-specific features like SysVinit scripts, keyscripts, etc. is now called cryptsetup-run, and cryptsetup itself is a mere metapackage depending on both split-off packages. So from now on, people can install cryptsetup-run if they don't need the cryptsetup initramfs integration. Once Buster is released we intend to rename cryptsetup-run to cryptsetup, which then will no longer have a strict dependency on cryptsetup-initramfs. This transition over two releases is necessary to avoid unexpected breakage on (dist-)upgrades. Meanwhile cryptsetup-initramfs ships a hook that upon generation of a new initramfs image detects which devices need to be unlocked early in the boot process and, in case it doesn't find any, suggests that the user remove the package.
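For example, on a machine without an encrypted root device, one should then be able to keep only the runtime tools; a sketch of what that would look like once the split packages are available:

$ sudo apt install cryptsetup-run
$ sudo apt purge cryptsetup-initramfs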

The package split allows us to define more fine-grained dependencies: since there are valid use cases for wanting the cryptsetup binaries and scripts but not the initramfs integration (in particular, on systems without an encrypted root device), cryptsetup ≤2:2.0.2-1 was merely recommending initramfs-tools and busybox, while cryptsetup-initramfs now has hard dependencies on these packages.

We also updated the packages to latest upstream release and uploaded 2:2.0.3-1 on Friday shortly before 15:00 UTC. Due to the cryptsetup → cryptsetup-{run,initramfs} package split we hit the NEW queue, and it was manually approved by an ftpmaster… a mere 2h later. Kudos to them! That allowed us to continue with subsequent uploads during the following days, which was beyond our expectations for this sprint :-)

Extensive refactoring work

Afterwards we started working on and merging some heavy refactoring commits that touched almost all parts of the packages. First was a refactoring of the whole cryptsetup initramfs implementation that downsized both the cryptroot hook and script dramatically (less than half the size they were before). The logic to detect crypto disks was changed from parsing /etc/fstab to /proc/mounts and now the sysfs(5) block hierarchy is used to detect dm-crypt device dependencies. A lot of code duplication between the initramfs script and the SysVinit init script was removed by outsourcing common functions into a shared shell functions include file that is sourced by initramfs and SysVinit scripts. To complete the package refactoring, we also overhauled the build process by migrating it to the latest Debhelper 11 style. debian/rules as well was downsized to less than half the size and as an extra benefit we now run the upstream build-time testsuite during the package build process.
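To give an idea of what the Debhelper 11 style means in practice, the canonical minimal dh-based debian/rules is just a few lines; this is the generic form, not necessarily the exact file we ship:

#!/usr/bin/make -f
%:
	dh $@

Everything package-specific then hangs off override targets instead of hand-written rules.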

Some git statistics speak more than a thousand words:

$ git --no-pager diff --ignore-space-change --shortstat debian/2%2.0.2-1..debian/2%2.0.3-2 -- ./debian/
 92 files changed, 2247 insertions(+), 3180 deletions(-)
$ find ./debian -type f \! -path ./debian/changelog -print0 | xargs -r0 cat | wc -l
7342
$ find ./debian -type f \! -path ./debian/changelog -printf x | wc -c
106

On CVE-2016-4484

Since 2:1.7.3-2, our initramfs boot script went to sleep for a full minute when the number of failed unlocking attempts exceeded the configured value (tries crypttab(5) option, which defaults to 3). This was added in order to defeat local brute force attacks, and mitigate one aspect of CVE-2016-4484; back then Jonas wrote a blog post to cover that story. Starting with 2:2.0.3-2 we changed this behavior and the script will now sleep for one second after each unsuccessful unlocking attempt. The new value should provide a better user experience while still offering protection against local brute force attacks for very fast password hashing functions. The other aspect mentioned in the security advisory — namely the fact that the initramfs boot process drops to a root (rescue/debug) shell after the user fails to unlock the root device too many times — was not addressed at the time, and still isn't. initramfs-tools has a boot parameter panic=<sec> to disable the debug shell, and while setting this is beyond the scope of cryptsetup, we're planning to ask the initramfs-tools maintainers to change the default. (Of course setting panic=<sec> alone doesn't gain much, and one would need to lock down the full boot chain, including BIOS and boot loader.)
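For reference, the tries option lives in crypttab(5); a purely illustrative entry (device name and UUID are placeholders) would look like:

cryptroot UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks,tries=3

With 2:2.0.3-2 the script sleeps for one second after each unsuccessful attempt instead of a full minute once that limit is exceeded.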

New features (work started)

Apart from the refactoring work we started/continued work on several new features:

  • We started to integrate luksSuspend support into system suspend. The idea is to luksSuspend all dm-crypt devices before suspending the machine in order to protect the storage in suspend mode. In theory, this seemed as simple as creating a minimal chroot in ramfs with the tools required to unlock (luksResume) the disks after machine resume, running luksSuspend from that chroot, putting the machine into suspend mode and running luksResume after it got resumed. Unfortunately it turned out to be way more complicated due to unpredictable race conditions between luksSuspend and machine suspend. So we ended up spending quite some time on debugging (and understanding) the issue. In the end it seems like the final sync() before machine suspend ( https://lwn.net/Articles/582648/ ) causes races in some cases as the dm-crypt device to be synced to is already luksSuspended. We ended up sending a request for help to the dm-crypt mailing list but unfortunately haven't received a helpful response so far.
  • In order to get internationalization support for the messages and password prompts in the initramfs scripts, we patched gettext and locale support into initramfs-tools.
  • We started some preliminary work on adding beep support to the cryptsetup initramfs and SysVinit scripts for better accessibility.

The above features are not available in the current Debian package yet, but we hope they will be included in a future release.

Bugs and Documentation

We also squashed quite a few longstanding bugs and improved the crypttab(5) documentation. In total, we squashed 18 bugs during the sprint, the oldest one dating from June 2013.

On the need for better QA

In addition to the many crypttab(5) options we also support a huge variety of block device stacks, such as LUKS-LVM2-MD combined in all ways one can possibly imagine. And that's a Debian addition hence something we, the cryptsetup package maintainers, have to develop and maintain ourselves. The many possibilities imply corner cases (it's not a surprise that complex or unusual setups can break in subtle ways) which motivated us to completely refactor the Debian-specific code, so it becomes easier to maintain.

While our final upload squashed 18 bugs, it also introduced new ones, in particular 2 rather serious regressions which slipped through our tests. We have thorough tests for the most usual setups, as well as for some complex stacks we hand-crafted in order to detect corner cases, but this approach doesn't scale to covering the full spectrum of user setups: even with minimal sid installations the disk images would just take far too much space! Ideally we would have an automated test suite, each test deploying a new transient sid VM with a particular setup. As the current and past regressions show, that's a behind-the-scenes area we should work on. (In fact that's an effort we started already, but didn't touch during the sprint due to lack of time.)

More to come

There are some more things on our list that we didn't find time to work on. Apart from the unfinished new features mentioned above, that's mainly the LUKS nuke feature that Kali Linux ships, and the lack of keyscript support for crypttab(5) in systemd.

Conclusion

In our eyes, the sprint was both a great success and great fun. We definitely want to repeat it sometime soon in order to keep working on the open tasks and to further improve the Debian cryptsetup package. There's still plenty of work to be done. We thank the Debian project and its generous donors for funding Guilhem's travel expenses.

Guilhem and Jonas, June 25th 2018

mejo roaming https://blog.freesources.org// mejo roaming

Montreal's Debian & Stuff June Edition

Wed, 27/06/2018 - 6:00am

Hello world!

This is me inviting you to the next Montreal Debian & Stuff. This one will take place at Koumbit's offices in Montreal on June 30th from 10:00 to 17:00 EST.

The idea behind 'Debian & Stuff' is to have an informal gathering of the local Debian community to work on Debian-related stuff - or not. Everyone is welcome to drop by and chat with us, hack on a nice project or just hang out!

We've been trying to have monthly meetings of the Debian community in Montreal since April, so this will be the third event in a row.

Chances are we'll take a break in July because of DebConf, but I hope this will become a regular thing!

Louis-Philippe Véronneau https://veronneau.org/ Louis-Philippe Véronneau

Add-on to control the projector from within Kodi

Tue, 26/06/2018 - 11:55pm

My movie-playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an Infocus IN76 video projector. My projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor of my projector, InFocus, was sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector is controlled by Kodi, for example in such a way that when the screen saver goes on, the projector is turned off, and when the screen saver exits, the projector is turned on again.
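For the curious, driving a projector over such a serial line takes very little code. The following is a rough sketch using the pyserial module; the port, baud rate and command bytes are made-up placeholders, not the real InFocus IN76 protocol, which lives in the user manual:

  #!/usr/bin/env python3
  # Rough sketch of serial projector control; not the add-on's actual code.
  import serial  # pyserial

  def send_command(port, command, baudrate=19200):
      # The baud rate, framing and command bytes depend entirely on the
      # projector's documented protocol.
      with serial.Serial(port, baudrate=baudrate, timeout=2) as conn:
          conn.write(command)
          return conn.read(32)  # whatever acknowledgement the projector sends

  # Hypothetical "power on" command -- replace with the documented sequence:
  # reply = send_command("/dev/ttyUSB0", b"POWER ON\r")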

A few days ago, with very good help from parts of my family, I managed to find a Kodi add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an add-on with support for several projectors. To my pleasure, he was positive about the idea, and we set out to add InFocus support to his add-on and make it suitable for the official Kodi add-on repository.

The add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the GitHub repository is embedding the pyserial module in the add-on. The long-term solution is to make a "script"-type pyserial module for Kodi that can be pulled in as a dependency, but until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts and off when Kodi stops, as well as to turn the projector off when the screen saver starts and back on when the screen saver stops. It can also be told to set the projector source when turning the projector on.

If this sounds interesting to you, check out the project's GitHub repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation using any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the brightness level of the projector from within Kodi. We also need to figure out how to handle the cooling period of the projector. My projector refuses to turn on for 60 seconds after it was turned off. This is not handled well by the add-on at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen http://people.skolelinux.org/pere/blog/ Petter Reinholdtsen - Entries tagged english

Historical inventory of collaborative editors

Tue, 26/06/2018 - 8:19pm

A quick inventory of major collaborative editor efforts, in chronological order.

As with any such list, it must start with an honorable mention to the mother of all demos, during which Doug Engelbart presented what is basically an exhaustive list of all possible software written since 1968. This includes not only a collaborative editor, but also graphics, programming and math editors.

Everything else after that demo is just a slower implementation to compensate for the acceleration of hardware.

Software gets slower faster than hardware gets faster. - Wirth's law

So without further ado, here is the list of notable collaborative editors that I could find. By "notable" I mean that they introduce a notable feature or implementation detail.

  • SubEthaEdit (2003-2015?, Mac-only): the first collaborative, real-time, multi-cursor editor I could find. A reverse-engineering attempt in Emacs failed to produce anything.
  • DocSynch (2004-2007, platform unknown): built on top of IRC!
  • Gobby (2005-now, C, multi-platform): the first open, solid and reliable implementation, and still around! The protocol ("libinfinoted") is notoriously hard to port to other editors (e.g. Rudel failed to implement it in Emacs). The 0.7 release in January 2017 adds possible Python bindings that might improve this. Interesting plugins: autosave to disk.
  • Ethercalc (2005-now, Web, JavaScript): the first spreadsheet, along with Google Docs.
  • MoonEdit (2005-2008?, platform unknown): original website died. Other users' cursors were visible, and keystroke noises were emulated. Included a calculator and a music sequencer!
  • Synchroedit (2006-2007, platform unknown): the first web app.
  • Inkscape (2007-2011, C++): the first graphics editor with collaborative features, backed by the "whiteboard" plugin built on top of Jabber, now defunct.
  • AbiWord (2008-now, C++): the first word processor.
  • Etherpad (2008-now, Web): the first solid web app. Originally developed as a heavy Java app in 2008, acquired and open-sourced by Google in 2009, then rewritten in Node.js in 2011. Widely used.
  • Wave (2009-2010, Web, Java): a failed attempt at a grand protocol unification.
  • CRDT (2011, specification): standard for replicating a document's data structure among different computers reliably.
  • Operational transform (2013, specification): similar to CRDT, yet, well, different.
  • Floobits (2013-now, platform unknown): commercial, but open-source plugins for different editors.
  • LibreOffice Online (2015-now, Web): free Google Docs equivalent, now integrated in Nextcloud.
  • HackMD (2015-now, platform unknown): commercial but open source. Inspired by Hackpad, which was bought up by Dropbox.
  • CryptPad (2016-now, Web?): spin-off of XWiki. Encrypted, "zero-knowledge" on the server.
  • ProseMirror (2016-now, Web, Node.js): "Tries to bridge the gap between Markdown text editing and classical WYSIWYG editors." Not really an editor, but something that can be used to build one.
  • Quill (2013-now, Web, Node.js): rich text editor, also JavaScript. Not sure it is really collaborative.
  • Teletype (2017-now, WebRTC, Node.js): for GitHub's Atom editor; introduces the "portal" idea that makes guests follow what the host is doing across multiple docs. Peer-to-peer with WebRTC after a visit to an introduction server; CRDT-based.
  • Tandem (2018-now, Node.js?): plugins for Atom, Vim, Neovim, Sublime... Uses a relay to set up peer-to-peer connections; CRDT-based. Dubious license issues were resolved thanks to the involvement of Debian developers, which makes it a promising standard to follow in the future.

Other lists

Antoine Beaupré https://anarc.at/tag/debian-planet/ pages tagged debian-planet

two security holes and a new library

Tue, 26/06/2018 - 8:18pm

For the past week and a half, I've been working on embargoed security holes. The embargo is over, and git-annex 6.20180626 has been released, fixing those holes. I'm also announcing a new Haskell library, http-client-restricted, which could be used to avoid similar problems in other programs.

Working in secret under a security embargo is mostly new to me, and I mostly don't like it, but it seems to have been the right call in this case. The first security hole I found in git-annex turned out to have a wider impact, affecting code in git-annex plugins (aka external special remotes) that uses HTTP. And quite likely beyond git-annex to unrelated programs, but I'll let their developers talk about that. So quite a lot of people were involved in this behind the scenes.

See also: The RESTLESS Vulnerability: Non-Browser Based Cross-Domain HTTP Request Attacks

And then there was the second security hole in git-annex, which took several days to notice, working in collaboration with Daniel Dent. That one's potentially very nasty, allowing decryption of arbitrary gpg-encrypted files, although exploiting it would be hard. It logically followed from the first security hole, so it's good that the first one was under embargo long enough for us to think it all through.

These security holes involved HTTP servers doing things to exploit clients that connect to them. For example, when a client asks an HTTP server for the content of a file stored on it, the server can redirect the client to a file:// URL on the client's own disk, to http://localhost/, or to a private web server on the client's internal network. Once the client is tricked into downloading such private data, the confusion can result in that private data being exposed. See the advisory for details.

Fixing this kind of security hole is not necessarily easy, because we use HTTP libraries, often via an API library, which may not give much control over following redirects. DNS rebinding attacks can be used to defeat security checks, if the HTTP library doesn't expose the IP address it's connecting to.

I faced this problem in git-annex's use of the Haskell http-client library. So I had to write a new library, http-client-restricted. Thanks to the good design of the http-client library, particularly its Manager abstraction, my library extends it rather than needing to replace it, and can be used with any API library built on top of http-client.

I get the impression that a lot of other languages' HTTP libraries need to have similar things developed. Much like web browsers need to enforce same-origin policies, HTTP clients need to be able to reject certain redirects according to the security needs of the program using them.
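To give an idea of what such a restriction involves, here is a small illustrative sketch in Python (not the http-client-restricted API): before following a redirect, check that the target scheme is plain HTTP(S) and that the host does not resolve to a loopback, link-local or private address.

  #!/usr/bin/env python3
  # Illustrative sketch of the idea; real code must also pin the checked
  # address for the actual connection to resist DNS rebinding.
  import ipaddress
  import socket
  from urllib.parse import urlparse

  def redirect_allowed(url):
      parsed = urlparse(url)
      if parsed.scheme not in ("http", "https"):   # rejects e.g. file://
          return False
      if parsed.hostname is None:
          return False
      try:
          infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80,
                                     proto=socket.IPPROTO_TCP)
      except socket.gaierror:
          return False
      for info in infos:
          addr = ipaddress.ip_address(info[4][0].split("%")[0])
          if addr.is_loopback or addr.is_link_local or addr.is_private:
              return False
      return True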

I kept a private journal while working on these security holes, and am publishing it now:

Joey Hess http://joeyh.name/blog/ see shy jo

Hosted monitoring

Tue, 26/06/2018 - 6:01pm

I don't run hosted monitoring as a service, I just happen to do some monitoring for a few (local) people, in exchange for money.

Setting up some new tests today, I realised my monitoring software had an embarrassingly bad bug:

  • The IMAP probe would connect to an IMAP/IMAPS server.
  • Optionally it would log in with a username & password.
    • Thus it could test that the service was functional.

Unfortunately the IMAP probe would never log out after determining success/failure, which would lead to errors from the remote host after a few consecutive runs:

dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10)

Oops. Anyway, that bug was fixed promptly once it manifested itself, and the monitoring software also gained the ability to validate SMTP authentication as the result of a customer request.
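For illustration, the fix amounts to something like the following sketch; Python's imaplib stands in here for whatever the monitoring software actually uses, and the host and credentials are placeholders:

  #!/usr/bin/env python3
  # Minimal sketch of an IMAPS probe that remembers to log out.
  import imaplib

  def probe_imaps(host, username=None, password=None):
      conn = imaplib.IMAP4_SSL(host)
      try:
          if username is not None:
              conn.login(username, password)  # optional authenticated check
          status, _ = conn.noop()             # server answered: it's functional
          return status == "OK"
      finally:
          # Without this, dovecot eventually complains about
          # mail_max_userip_connections being exceeded.
          conn.logout()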

Otherwise I think things have been mixed recently:

  • I updated the webserver of Charlie Stross
  • Did more geekery with hardware.
  • Had a fun time in a sauna, on a boat.
  • Reported yet another security issue in an online PDF generator/converter
    • If you read a remote URL and convert the contents to PDF then be damn sure you don't let people submit file:///etc/passwd.
    • I've talked about this previously.
  • Made plaited bread for the first time.
    • It didn't suck.

(Hosted monitoring is interesting; many people will give you ping/HTTP-fetch monitoring. If you want to remotely test your email service? Far far far fewer options. I guess firewalls get involved if you're testing self-hosted services, rather than cloud-based stuff. But still an interesting niche. Feel free to tell me your budget ;)

Steve Kemp https://blog.steve.fi/ Steve Kemp's Blog
