Feed aggregator

Linus apologising

Planet Debian - Mon, 17/09/2018 - 8:45am

Someone pointed me towards this email, in which Linus apologizes for some of his more unhealthy behaviour.

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

To me, this came somewhat as a surprise. I'm not really involved in Linux kernel development, and so the history of what led up to this email mostly passed unnoticed, at least for me; but that doesn't mean I cannot recognize how difficult this must have been to write for him.

As I know from experience, admitting that you have made a mistake is hard. Admitting that you have been making the same mistake over and over again is even harder. Doing so publicly? Even more so, since you're placing yourself in a vulnerable position, one that the less honorably inclined will take advantage of if you're not careful.

There isn't much I can contribute to the whole process, but there is this: Thanks, Linus, for being willing to work on those things, which can only make the community healthier as a result. It takes courage to admit things like that, and that is only to be admired. Hopefully this will have the result we're hoping for, too; but that, only time can tell.

Wouter Verhelst pd

Jono Bacon: Linus, His Apology, And Why We Should Support Him

Planet Ubuntu - Mon, 17/09/2018 - 12:12am

Today, Linus Torvalds, the creator of Linux, which powers everything from smartwatches to electrical grids, posted a pretty remarkable note on the kernel mailing list.

As a little bit of backstory, Linus has sometimes come under fire for the ways in which he has expressed feedback, provided criticism, and reacted to various scenarios on the kernel mailing list. This criticism has been fair in many cases: he has been overly aggressive at times, and while the kernel maintainers are a tight-knit group, the optics (not just of what it looks like, but what is actually happening), particularly for those new to kernel development, have often been pretty bad.

Like many conflict scenarios, this feedback has been communicated back to him in both constructive and non-constructive ways. Historically he has been seemingly reluctant to really internalize this feedback, I suspect partially because (a) the Linux kernel is a very successful project, and (b) some of the critics have at times gone nuclear at him (which often doesn’t work as a strategy towards defensive people). Well, things changed today.

In his post today he shared some self-reflection on this feedback:

This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

He went on to not just share an admission that this has been a problem, but to also share a very personal acceptance that he struggles to understand and engage with people’s emotions:

The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand people’s emotions and respond appropriately.

His post is sure to light up the open source, Linux, and tech world for the next few weeks. Some will celebrate it as a step in the right direction. For others it will be too little, too late, and their animus will remain. Still others will be cautiously supportive, deferring judgement until they have seen his future behavior demonstrate substantive changes.

My Take

I wouldn't say I know Linus very closely; we have a casual relationship. I see him at conferences from time to time, and we often bump into each other and catch up. I interviewed him for my book and for the Global Learning XPRIZE. From my experience he is a funny, genuine, friendly guy. Interestingly, and not unusually at all for open source, his online persona is rather different to his in-person persona. I am not going to deny that when I saw these dust-ups on LKML, they didn't reflect the Linus I know. I chalked it up to a mixture of his struggles with social skills, dogmatic pragmatism, and ego.

His post today is a pretty remarkable change of posture for him, and I would encourage us as a community to support him in making these changes.

Accepting these personal challenges is tough, particularly for someone in his position. Linux is a global phenomenon. It has resulted in billions of dollars of technology creation, powering thousands of companies, and changing the norms around how software is consumed and created. It is easy to forget that Linux was started by a quiet Finnish kid in his university dorm room. It is important to remember that just because Linux has scaled elegantly, it doesn't mean that Linus has been able to. He isn't a codebase, he is a human being, and bugs are harder to spot and fix in humans. You can't just deploy a fix immediately. It takes time to identify the problem and to foster and grow a change. The starting point for this is to support people in that desire for change, not re-litigate the ills of the past: that will get us nowhere quickly.

I am also mindful of ego. None of us like to admit we have an ego, but we all do. You don't get to build one of the most fundamental technologies of the last thirty years and not have an ego. He built it…they came…and a revolution was energized because of what he created. While Linus's ego is more subtle, and certainly doesn't extend to faddish self-promotion, overly expensive suits, and forays into Hollywood (quite the opposite), it has naturally resulted in abrupt and fixed opinions on how his project should run. This sometimes results in him plugging his fingers in his ears to particularly challenging viewpoints from others (he is not the only person guilty of this; many people in similar positions do too). His post today is a clear example of him putting Linux as a project ahead of his own personal ego.

This is important for a few reasons. Firstly, being in such a public position and accepting your personal flaws isn't a problem many people face, and isn't a situation many people handle well. I work with a lot of CEOs, and they often say it is the loneliest job on the planet. I have heard American presidents say the same in interviews. This is because they are at the top of the tree, with all the responsibility and expectations on their shoulders. Put yourself in Linus's position: his little project has blown up into a global phenomenon, and he didn't necessarily have the social tools to be able to handle this change. Ego forces these internal struggles under the surface, pushing them down where they can be avoided. So, to accept them as publicly and openly as he did today is a very firm step in the right direction. Now, the true test will be results, but we all need to provide the breathing space for him to accomplish them.

So, I would encourage everyone to give Linus a shot. This doesn't mean the frustrations of the past are erased, but he has acknowledged and apologized for these mistakes as a first step. He has accepted that he struggles with understanding others' emotions, and has expressed a desire to improve this for the betterment of the project and himself. He is a human, and the best tonic for humans resolving their own internal struggles is the support and encouragement of other humans. This is not unique to Linus; it applies to anyone who faces similar struggles.

All the best, Linus.

The post Linus, His Apology, And Why We Should Support Him appeared first on Jono Bacon.


Planet Debian - Sun, 16/09/2018 - 8:18pm

Was my festive shirt the model for the men’s room signs at Daniel K. Inouye International Airport in Honolulu? Did I see the sign on arrival and subconsciously decide to dress similarly when I returned to the airport to depart Hawaii?

Benjamin Mako Hill copyrighteous

GIMP 2.10

Planet Debian - Sun, 16/09/2018 - 6:00am

GIMP 2.10 landed in Debian Testing a few weeks ago and I have to say I'm very happy about it. The last major version of GIMP (2.8) was released in 2012, and the new version fixes a lot of bugs and improves the user interface.

I've updated my Beginner's Guide to GIMP (sadly only in French) and in the process I found out a few things I thought I would share:


The default theme is Dark. Although it looks very nice in my opinion, I don't feel it's a good choice for productivity. The icon pack the theme uses is a monochrome flat 2D render and I feel it makes it hard to differentiate the icons from one another.

I would instead recommend using the Light theme with the Color icon pack.

Single Window Mode

GIMP now enables Single Window Mode by default. That means that Dockable Dialog Windows like the Toolbar or the Layer Window cannot be moved around, but instead are locked to two docks on the right and the left of the screen.

Although you can hide and show these docks using Tab, I feel Single Window Mode is more suitable for larger screens. On my laptop, I still prefer moving the windows around as I used to do in 2.8.

You can disable Single Window Mode in the Windows menu.

Louis-Philippe Véronneau

Two days afterward

Planet Debian - Sun, 16/09/2018 - 2:03am

Sheena plodded down the stairs barefoot, her shiny bunions glinting in the cheap fluorescent light. “My boobs hurt,” she announced.

“That happens every month,” mumbled Luke, not looking up from his newspaper.

“It does not!” she retorted. “I think I'm perimenopausal.”

“At age 29?” he asked skeptically.

“Don't mansplain perimenopause to me!” she shouted.

“Okay,” he said, putting down the paper and walking over to embrace her.

“My boobs hurt,” she whispered.

Posted on 2018-09-16 Tags: mintings C Yammering

Backing the wrong horse?

Planet Debian - Sat, 15/09/2018 - 2:13pm

I started using the Ruby programming language in around 2003 or 2004, but stopped at some point later, perhaps around 2008. At the time I was frustrated with the approach the Ruby community took for managing packages of Ruby software: Ruby Gems. They interacted really badly with distribution packaging and made the jobs of organisations like Debian more difficult. This was around the time that Ruby on Rails was making a big splash for web application development (I think version 2.0 had just come out). I did fork out for the predominant Ruby on Rails book to try it out. Unfortunately the software was evolving so quickly that the very first examples in the book no longer worked with the latest versions of Rails. I wasn't doing a lot of web development at the time anyway, so I put the book, Rails and Ruby itself on the shelf and moved on to looking at the Python programming language instead.

Since then I've written lots of Python, both professionally and personally. Whenever it looked like a job was best solved with scripting, I'd pick up Python. I hadn't stopped to reflect on the experience much at all, beyond being glad I wasn't writing Perl any more (the first language I had any real traction with, 20 years ago).

I'm still writing Python on most work days, and there are bits of it that I do really like, but there are also aspects I really don't. Some of the stuff I work on needs to work in both Python 2 and 3, and that can be painful. The whole 2-versus-3 situation is awkward: I'd much rather just focus on 3, but Python 3 didn't ship in (at least) RHEL 7, although it looks like it will in 8.

Recently I dusted off some 12-year-old Ruby code and had a pleasant experience interacting with Ruby again. It made me wonder, had I perhaps backed the wrong horse? In some respects, clearly not: being proficient with Python was immediately helpful when I started my current job (and may have had a hand in getting me hired). But in other respects, I wonder how much time I've wasted wrestling with e.g. Python's verbose, rigid regular expression library when Ruby has nice language-native regular expression operators (taken straight from Perl), or with the really awkward support for Unicode in Python 2 (which reminds me of Perl for all the wrong reasons).
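To make the contrast concrete: where Ruby lets you write line =~ /pattern/ directly in the language and read captures back from $1, Python routes everything through the re module. A small illustrative sketch of the Python side (the log line and pattern are invented for the example):

import re

line = "2018-09-15 ERROR worker 7 crashed"

# Python: explicit module import, explicit match object, explicit group access.
match = re.search(r"worker (\d+) crashed", line)
if match:
    print(int(match.group(1)))  # 7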

Next time I have a computing problem to solve where it looks like a script is the right approach, I'm going to give Ruby another try. Assuming I don't go for Haskell instead, of course. Or, perhaps I should try something completely different? One piece of advice that resonated with me from the excellent book The Pragmatic Programmer was "Learn a new (programming) language every year". It was only recently that I reflected that I haven't learned a completely new language for a very long time. I tried Go in 2013 but my attempt petered out. Should I pick that back up? It has a lot of traction in the stuff I do in my day job (Kubernetes, Docker, Openshift, etc.). "Rust" looks interesting, but a bit impenetrable at first glance. Idris? Lua? Something else?

jmtd Jonathan Dowland's Weblog

Recommendations for software?

Planet Debian - Sat, 15/09/2018 - 11:01am

A quick post with two questions:

  • What spam-filtering software do you recommend?
  • Is there a PAM module for testing with HaveIBeenPwnd?
    • If not, would you sponsor me to write it? ;)

So I've been using crm114 to perform spam-filtering on my incoming mail, via procmail, for the past few years.

Today I discovered it had archived about 12Gb of my email history, because I'd never pruned it. (Beneath ~/.crm/.)

So I wonder if there are better/simpler/different Bayesian filters out there that I should be switching to? Recommendations welcome - but don't say "SpamAssassin", thanks!

Secondly, the excellent Have I Been Pwned site provides an API which allows you to test if a password has been previously included in a leak. This is great, and I've integrated their API into a couple of my own applications, but I was thinking on the bus home tonight that it might be worth tying into PAM.

Sure, in the interests of security people should use key-based authentication for SSH, but .. most people don't. Even if keys are used exclusively, a PAM module would allow you to validate that the password used for sudo hasn't previously been leaked.

So it seems like there is value in a PAM module to do a lookup at authentication-time, via libcurl.
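To make that concrete, here is a minimal sketch of the check such a module would perform, in Python rather than C/libcurl for brevity. The range endpoint and SUFFIX:COUNT response format are the documented Have I Been Pwned API; the function name and User-Agent string are my own inventions:

import hashlib
import urllib.request

def password_was_pwned(password: str) -> bool:
    """Return True if the password appears in the HIBP leaked-password corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: only the first five hex characters of the hash leave the machine.
    request = urllib.request.Request(
        "https://api.pwnedpasswords.com/range/" + prefix,
        headers={"User-Agent": "pam-hibp-sketch"},
    )
    with urllib.request.urlopen(request) as response:
        for line in response.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return True  # seen <count> times in known breaches
    return False

print(password_was_pwned("password1"))  # True: famously leaked

A real PAM module would do this lookup at authentication time and decide whether to warn or reject; it would also need a policy for what to do when the API is unreachable.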

Steve Kemp Steve Kemp's Blog

Autobuilding Debian packages on salsa with Gitlab CI

Planet Debian - Fri, 14/09/2018 - 4:45pm

Now that Debian has migrated away from alioth and towards a gitlab instance known as salsa, we get a pretty advanced Continuous Integration system for (almost) free. Having that, it might make sense to use that setup to automatically build and test a package when committing something. I had a look at doing so for one of my packages, ola; the reason I chose that package is that it comes with an autopkgtest, so that makes testing it slightly easier (even if the autopkgtest is far from complete).

Gitlab CI is configured through a .gitlab-ci.yml file, which supports many options and may therefore be a bit complicated for first-time users. Since I've worked with it before, I understand how it works, so I thought it might be useful to show people how you can do things.

First, let's look at the .gitlab-ci.yml file which I wrote for the ola package:

stages:
  - build
  - autopkgtest

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts adduser fakeroot sudo
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

That's a bit much. How does it work?

Let's look at every individual toplevel key in the .gitlab-ci.yml file:

stages:
  - build
  - autopkgtest

Gitlab CI has a "stages" feature. A stage can have multiple jobs, which will run in parallel, and gitlab CI won't proceed to the next stage until all the jobs in the current stage have finished. Jobs from one stage can use files from a previous stage by way of the "artifacts" or "cache" features (which we'll get to later). However, in order to be able to use the stages feature, you have to create stages first. That's what we do here.

.build: &build
  before_script:
    - apt-get update
    - apt-get -y install devscripts autoconf automake adduser fakeroot sudo
    - autoreconf -f -i
    - mk-build-deps -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends" -i -r
    - adduser --disabled-password --gecos "" builduser
    - chown -R builduser:builduser .
    - chown builduser:builduser ..
  stage: build
  artifacts:
    paths:
      - built
  script:
    - sudo -u builduser dpkg-buildpackage -b -rfakeroot
  after_script:
    - mkdir built
    - dcmd mv ../*ges built/

This tells gitlab CI what to do when building the ola package. The main bit is the script: key in this template: it essentially tells gitlab CI to run dpkg-buildpackage. However, before we can do so, we need to install all the build-dependencies and a few helper things, as well as create a non-root user (since ola refuses to be built as root). This we do in the before_script: key. Finally, once the packages have been built, we create a built directory, and use devscripts' dcmd to move the output of the dpkg-buildpackage command into the built directory.

Note that the name of this key starts with a dot. This signals to gitlab CI that it is a "hidden" job, which it should not start by default. Additionally, we create an anchor (the &build at the end of that line) that we can refer to later. This makes it a job template, not a job itself, that we can reuse if we want to.

The reason we split up the script to be run into three different scripts (before_script, script, and after_script) is simply so that gitlab can understand the difference between "something is wrong with this commit" and "we failed to even configure the build system". It's not strictly necessary, but I find it helpful.

Since we configured the built directory as the artifacts path, gitlab will do two things:

  • First, it will create a .zip file in gitlab, which allows you to download the packages from the gitlab webinterface (and inspect them if need be). The length of time for which the artifacts are stored can be configured by way of the artifacts:expire_in key; if not set, it defaults to 30 days or to whatever the salsa maintainers have configured (I'm not sure what that is).
  • Second, it will make the artifacts available in the same location on jobs in the next stage.

The first can be avoided by using the cache feature rather than the artifacts one, if preferred.

.test: &test
  before_script:
    - apt-get update
    - apt-get -y install autopkgtest
  stage: autopkgtest
  script:
    - autopkgtest built/*ges -- null

This is very similar to the build template that we had before, except that it sets up and runs autopkgtest rather than dpkg-buildpackage, and that it does so in the autopkgtest stage rather than the build one, but there's nothing new here.

build:testing:
  <<: *build
  image: debian:testing

build:unstable:
  <<: *build
  image: debian:sid

These two use the build template that we defined before. This is done by way of the <<: *build line, which is YAML-ese to say "inject the other template here". In addition, we add extra configuration -- in this case, we simply state that we want to build inside the debian:testing docker image in the build:testing job, and inside the debian:sid docker image in the build:unstable job.

test:testing:
  <<: *test
  dependencies:
    - build:testing
  image: debian:testing

test:unstable:
  <<: *test
  dependencies:
    - build:unstable
  image: debian:sid

This is almost the same as the build:testing and the build:unstable jobs, except that:

  • We instantiate the test template, not the build one;
  • We say that the test:testing job depends on the build:testing one. This does not cause the job to start before the end of the previous stage (that is not possible); instead, it tells gitlab that the artifacts created in the build:testing job should be copied into the test:testing working directory. Without this line, all artifacts from all jobs from the previous stage would be copied, which in this case would create file conflicts (since the files from the build:testing job have the same name as the ones from the build:unstable one).

It is also possible to run autopkgtest in the same image in which the build was done. However, the downside of doing that is that if one of your built packages lacks a dependency that is an indirect dependency of one of your build dependencies, you won't notice; by blowing away the docker container in which the package was built and running autopkgtest in a pristine container, we avoid this issue.

With that, you have a complete working example of how to do continuous integration for Debian packaging. To see it work in practice, you might want to look at the ola version

UPDATE (2018-09-16): dropped the autoreconf call, isn't needed (it was there because it didn't work from the first go, and I thought that might have been related, but that turned out to be a red herring, and I forgot to drop it)

Wouter Verhelst pd

New website for vmdb2

Planet Debian - Fri, 14/09/2018 - 3:00pm

I've set up a new website for vmdb2, my tool for building Debian images (basically "debootstrap, except in a disk image"). As usual for my websites, it's ugly. Feedback welcome.

Lars Wirzenius' blog englishfeed

David Tomaschik: Course Review: Software Defined Radio with HackRF

Planet Ubuntu - Fri, 14/09/2018 - 9:00am

Over the past two days, I had the opportunity to attend Michael Ossmann's course "Software Defined Radio with HackRF" at Toorcon XX. This is a course I've wanted to take for several years, and I'm extremely happy that I finally had the chance. I wanted to write up a short review for others considering taking the course.

Course Material

The material in the course focuses predominantly on the basics of Software Defined Radio and Digital Signal Processing. This includes the math necessary to understand how the DSP handles the signal. The math is presented in a practical, rather than academic, way. It’s not a math class, but a review of the necessary basics, mostly of complex mathematics and a bit of trigonometry. (My high school teachers are now vindicated. I did use that math again.) You don’t need the math background coming in, but you do need to be prepared to think about math during the class. Extracting meaningful information from the ether is, it turns out, an exercise in mathematics.

There's a lot of discussion of frequencies, frequency mixers, and how frequency, amplitude, and phase are related. Also, despite more than 20 years as an amateur radio operator, I finally understand dB properly. It's possible to understand it reasonably well without having to do logarithms (a quick numeric check follows the list):

  • +3db = x2
  • +10db = x10
  • -3db = 1/2
  • -10db = 1/10
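Those shortcuts compose: +13 dB is +10 dB plus +3 dB, i.e. ×10 then ×2, so roughly ×20. A quick sanity check of the table using the actual definition, ratio = 10^(dB/10) (my check, not course material):

def db_to_ratio(db: float) -> float:
    """Convert a decibel power change into a plain power ratio."""
    return 10 ** (db / 10)

print(db_to_ratio(3))    # ~1.995: the "x2" shortcut
print(db_to_ratio(10))   # 10.0 exactly
print(db_to_ratio(-3))   # ~0.501: i.e. 1/2
print(db_to_ratio(13))   # ~19.95: +10 dB then +3 dB = x10 * x2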

In terms of DSP, he demonstrated extracting signals of interest, clock recovery, and other techniques necessary for understanding digital signals. It really just scratches the surface, but is enough to get a basic signal understood.

From a security point of view, there was only a single system that we “attacked” in the class. I was hoping for a little bit more of this, but given the detail in the other content, I am not disappointed.

Mike pointed out that the course primarily focuses on getting signals from the air to a digital series of 0 and 1 bits, and then leaves the remainder to tools like Python for adding meaning and interpretation of the bits. While I understand this (and, admittedly, at that point it's similar to decoding an unknown network protocol), I would still like to have gone into more detail.

Course Style

At the very beginning of the course, Mike makes it clear that no two classes he teaches are exactly the same. He adapts the course to the experience and background of each class, and that was very evident from our small group this week. With such a small class, it became more like a guided conversation than a formal class.

Overall, the course was very interactive, with lots of student questions, as well as “Socratic Method” questions from the instructor. This was punctuated with a number of hands-on exercises. One of the best parts of the hands-on exercises is that Mike provides a flash drive with a preconfigured Ubuntu Linux installation containing all the tools that are needed for the course. This allows students to boot into a working environment, rather than having to play around with tool installation or virtual machine settings. (We were, in fact, warned that VMs often do not play well with SDR, because the USB forwarding has overhead resulting in lost samples.)

Mike made heavy use of the poster pad in the room, diagramming waveforms and information about the processes involved in the SDR architecture and the DSP done in the computer. This works well because he customizes the diagrams to explain each part and answer student questions. It also feels much more engaging than just pointing at slides. In fact, the only thing displayed on the projector is Mike’s live screen from his laptop, displaying things like the work he’s doing in GNURadio Companion and other pieces of software.

If you have devices you’re interested in studying, you should bring them along with you. If time permits, Mike tries to work these devices into the analysis during the course.

Tools Used

Additional Resources

Opinions & Conclusion

This was a great class that I really enjoyed. However, I really wish there had been more emphasis on how you decode and interpret unknown signals, such as discussion of common packet types over RF, or of tools for signal analysis that could be built either in Python or in GNURadio. Perhaps he (or someone) could offer an advanced class that focuses on the signal analysis, interpretation, and "spoofing" portions of the problem of attacking RF-based systems.

If you're interested in doing assessments of physical devices, or into radio at all, I highly recommend this course. Mike obviously really knows the material, and getting a HackRF One is a pretty nice bonus. Watching the videos on his website will help you prepare for the math, but will also result in a good portion of the content being duplicated in the course. I'm not disappointed that I did that, and I still feel that I more than made good use of the time in the course, but it is something to be aware of.

Gaming: Rise of the Tomb Raider

Planet Debian - Fri, 14/09/2018 - 4:07am

Over the last weekend I have finally finished The Rise of the Tomb Raider. As I wrote exactly 4 months ago when I started the game, I am a complete newbie to this kind of game, and was blown away by the visual quality and great gameplay.

I was really surprised how huge an area I had to explore over the four months. Many of the areas have really excellent nature scenery, some of them a depressingly dark and solemn atmosphere.

Another thing I learned is that the Challenge Tombs – some kind of puzzle challenges – haven't been that important in previous games. I enjoyed these puzzles much more than the fighting sequences (also because I am really bad at combat and had to die so many times before I succeeded!).

Lots of sliding down on ropes, jumping, running, diving, often into the unknown.

In the last part of the game, when Lara enters the city underneath the glacier, one is reminded of the scene where Frodo tries to enter Mordor, seeing all the dark riders and troops.

The final approach starts, there is still a long way (and many strange creatures to fight), but at least the final destination is in sight!

I finished the game with 100%, because I went back to all the areas and used Stella's Tomb Raider Site walkthrough to help me find all the items. I think it is practically impossible within a lifetime to find all the items alone without help. This is especially funny because in one of the trailers one of the developers mentions that they reckon on 15h of gameplay, and 30h if you want to finish at 100%. It took me 58h (!) to finish with 100% … and that with a walkthrough!!!

Anyway, tomorrow the Shadow of the Tomb Raider will be released, and I could also start the first game of the series, Tomb Raider, but I got a bit worn out by all the combat activity and decided to concentrate on a hard-core puzzle game, The Witness, which features loads and loads of puzzles, taught from simple to complex, and combined, to create a very interesting game. Now I only need the time …

Norbert Preining There and back again

Stephen Kelly: API Changes in Clang

Planet Ubuntu - Fri, 14/09/2018 - 12:45am

I’ve started contributing to Clang, in the hope that I can improve the API for tooling. This will eventually mean changes to the C++ API of Clang, the CMake buildsystem, and new features in the tooling. Hopefully I’ll remember to blog about changes I make.

The Department of Redundancy Department

I’ve been implementing custom clang-tidy checks and have become quite familiar with the AST Node API. Because of my background in Qt, I was immediately disoriented by some API inconsistency. Certain API classes had both getStartLoc and getLocStart methods, as well as both getEndLoc and getLocEnd etc. The pairs of methods return the same content, so at least one set of them is redundant.

I’m used to working on stable library APIs, but Clang is different in that it offers no API stability guarantees at all. As an experiment, we staggered the introduction of new API and removal of old API. I ended up replacing the getStartLoc and getLocStart methods with getBeginLoc for consistency with other classes, and replaced getLocEnd with getEndLoc. Both old and new APIs are in the Clang 7.0.0 release, but the old APIs are already removed from Clang master. Users of the old APIs should port to the new ones at the next opportunity as described here.

Wait a minute, Where’s me dump()er?

Clang AST classes have a dump() method which is very useful for debugging. Several tools shipped with Clang are based on dumping AST nodes.

The SourceLocation type also provides a dump() method which outputs the file, line and column corresponding to a location. The problem with it though has always been that it does not include a newline at the end of the output, so the output gets lost in noise. This 2013 video tutorial shows the typical developer experience using that dump method. I’ve finally fixed that in Clang, but it did not make it into Clang 7.0.0.

In the same vein, I also added a dump() method to the SourceRange class. This prints out locations in an angle-bracket format which shows only what changed between the beginning and end of the range.

Let it bind

When writing clang-tidy checks using AST Matchers, it is common to factor out intermediate variables for re-use or for clarity in the code.

auto valueMethod = cxxMethodDecl(hasName("value"));
Finder->addMatcher(valueMethod.bind("methodDecl"));

clang-query has an analogous way to create intermediate matcher variables, but binding to them did not work. As of my recent commit, it is possible to create matcher variables and bind them later in a matcher:

let valueMethod cxxMethodDecl(hasName("value"))
match valueMethod.bind("methodDecl")
match callExpr(callee(valueMethod.bind("methodDecl"))).bind("methodCall")

Preload your Queries

Staying on the same topic, I extended clang-query with a --preload option. This allows starting clang-query with some commands already invoked, and then continue using it as a REPL:

bash$ cat cmds.txt
let valueMethod cxxMethodDecl(hasName("value"))
bash$ clang-query --preload cmds.txt somefile.cpp
clang-query> match valueMethod.bind("methodDecl")

Match #1:

somefile.cpp:4:2: note: "methodDecl" binds here
void value();
^~~~~~~~~~~~
1 match.

Previously, it was only possible to run commands from a file without also creating a REPL using the -c option. The --preload option with the REPL is useful when experimenting with matchers and having to restart clang-query regularly. This happens a lot when modifying code to examine changes to AST nodes.



Reproducible Builds 2018 Paris meeting

Planet Debian - Thu, 13/09/2018 - 11:54pm

Many lovely people interested in reproducible builds will meet again at a three-day event in Paris. We will welcome both previous attendees and new projects alike! We hope to discuss, connect and exchange ideas in order to grow the reproducible builds effort, and we would be delighted if you'd join!

And whilst the exact content of the meeting will be shaped by the participants when we do it, the main goals will include:

  • Updating & exchanging the status of reproducible builds in various projects.
  • Improving collaboration both between and inside projects.
  • Expanding the scope and reach of reproducible builds to more projects.
  • Working and hacking together on solutions.
  • Brainstorming designs for tools enabling end-users to get the most benefits from reproducible builds.
  • Discussing how reproducible builds will be usable and meaningful to users and developers alike.

Please reach out if you'd like to participate in hopefully interesting, inspiring and intense technical sessions about reproducible builds and beyond!

Holger Levsen Any sufficiently advanced thinking is indistinguishable from madness

What is the difference between moderation and censorship?

Planet Debian - Thu, 13/09/2018 - 11:09pm

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Daniel.Pocock - debian

Debian LTS work, August 2018

Planet Debian - Thu, 13/09/2018 - 1:54pm

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from July. I worked only 5 hours and therefore carried over 18 hours to September.

I prepared and uploaded updates to the linux-4.9 (DLA 1466-1, DLA 1481-1) and linux-latest-4.9 packages.

Ben Hutchings Better living through software

TeX Live contrib updates

Planet Debian - Thu, 13/09/2018 - 4:56am

It is now more than a year since I took over tlcontrib from Taco and started providing it at the TeX Live contrib repository. It now serves the old TeX Live 2017 as well as the current TeX Live 2018, and since last year the number of packages has increased from 52 to 70.

Recent changes include pTeX support packages for non-free fonts and more packages from the AcroTeX bundle. In particular since the last post the following packages have been added: aeb-mobile, aeb-tilebg, aebenvelope, cjk-gs-integrate-macos, comicsans, datepicker-pro, digicap-pro, dps, eq-save, fetchbibpes, japanese-otf-nonfree, japanese-otf-uptex-nonfree, mkstmpdad, opacity-pro, ptex-fontmaps-macos, qrcstamps.

Here I want to thank Jürgen Gilg for reminding me consistently of updates I have missed, big thanks!

To recall what TLcontrib is for: it collects packages that are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib. The full list of packages can be seen here.

Please see the main page for Quickstart, History, and details about how to contribute packages.

Last but not least, while this is a service to make access to non-free packages easier for users of the TeX Live Manager, our aim is to have as many packages as possible made completely free and included in TeX Live proper!


Norbert Preining There and back again

Lubuntu Blog: Lubuntu Development Newsletter #11

Planet Ubuntu - Thu, 13/09/2018 - 3:13am
This is the eleventh issue of The Lubuntu Development Newsletter. You can read the last issue here.

Changes

General

This list is a bit short because we have been focusing on general, behind-the-scenes (and admittedly tedious) administration tasks, but we plan on ramping up development progress in time for the next newsletter.

Desktop Experience

We […]

digest 0.6.17

Planet Debian - Thu, 13/09/2018 - 2:06am

digest version 0.6.17 arrived on CRAN earlier today after a day of gestation in the bowels of CRAN, and should get uploaded to Debian in due course.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64 and murmur32 algorithms) permitting easy comparison of R language objects.
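By way of illustration only, a rough Python analogue of that idea (digest itself is an R package; this sketch just shows the underlying "serialize, then hash" pattern, with names of my own choosing):

import hashlib
import pickle

def py_digest(obj, algo: str = "sha256") -> str:
    """Hash an arbitrary picklable object so equal objects compare equal."""
    return hashlib.new(algo, pickle.dumps(obj)).hexdigest()

a = {"x": [1, 2, 3], "y": "hello"}
b = {"x": [1, 2, 3], "y": "hello"}
print(py_digest(a) == py_digest(b))  # True: same content, same digest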

This release brings another robustification thanks to Radford Neal, who noticed a segfault in 32-bit mode on Sparc running Solaris. Yay for esoteric setups. But thanks to his very nice pull request, this is taken care of, and it also squashed one UBSAN error under the standard gcc setup. Two files remain with UBSAN issues; help would be welcome!

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

Disappointment on the new commute

Planet Debian - Wed, 12/09/2018 - 6:53pm

Imagine my disappointment when I discovered that signs on Stanford's campus pointing to their "Enchanted Broccoli Forest" and "Narnia"—both of which I have been passing daily on my new commute—merely indicate the location of student living groups with whimsical names.

Benjamin Mako Hill copyrighteous

Distributing static routes with DHCP

Planet Debian - Wed, 12/09/2018 - 10:00am

This week I had to deal with a setup in which I needed to distribute additional static network routes using DHCP.

The setup is easy but there are some caveats to take into account. Also, DHCP clients might not behave as one would expect.

The starting situation was a working DHCP client/server deployment. Some standard virtual machines would request their network setup over the network. Nothing new. The DHCP server is dnsmasq, and the daemon is running under Openstack control, but this has nothing to do with the DHCP problem itself.

By default, it seems dnsmasq sends clients the Routers (code 3) option, which usually contains the gateway for clients in the subnet to use. My situation required distributing one additional static route for another subnet. My idea was for DHCP clients to end up with this simple routing table:

user@dhcpclient:~$ ip r
default via dev eth0
dev eth0 proto kernel scope link src
via dev eth0 <--- extra static route

To distribute this extra static route, you only need to edit the dnsmasq config file and add a line like this:


For my initial tests of this config I was simply requesting a lease refresh from the DHCP client. This got my new static route online, but in the case of a reboot, the DHCP client would not get the default route. The different behaviour is documented in dhclient-script(8).

To try something similar to a reboot situation, I had to use this command:

user@dhcpclient:~$ sudo ifup --force eth0
Internet Systems Consortium DHCP Client 4.3.1
Copyright 2004-2014 Internet Systems Consortium. All rights reserved.
For info, please visit
Listening on LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on  LPF/eth0/xx:xx:xx:xx:xx:xx
Sending on  Socket/fallback
DHCPREQUEST on eth0 to port 67
DHCPACK from
RTNETLINK answers: File exists
bound to -- renewal in 20284 seconds.

Anyway, this was really surprising at first, and led me to debug DHCP packets using dhcpdump:

TIME: 2018-09-11 18:06:03.496
IP: (xx:xx:xx:xx:xx:xx) > (xx:xx:xx:xx:xx:xx)
OP: 2 (BOOTPREPLY)
HTYPE: 1 (Ethernet)
HLEN: 6
HOPS: 0
XID: xxxxxxxx
SECS: 8
FLAGS: 0
CIADDR:
YIADDR:
SIADDR: xx.xx.xx.x
GIADDR:
CHADDR: xx:xx:xx:xx:xx:xx:00:00:00:00:00:00:00:00:00:00
OPTION:  53 (  1) DHCP message type       2 (DHCPOFFER)
OPTION:  54 (  4) Server identifier
OPTION:  51 (  4) IP address leasetime    43200 (12h)
OPTION:  58 (  4) T1                      21600 (6h)
OPTION:  59 (  4) T2                      37800 (10h30m)
OPTION:   1 (  4) Subnet mask
OPTION:  28 (  4) Broadcast address
OPTION:  15 ( 13) Domainname              xxxxxxxx
OPTION:  12 ( 21) Host name               xxxxxxxx
OPTION:   3 (  4) Routers
OPTION: 121 (  8) Classless Static Route  xxxxxxxxxxxxxx .....D..
[...]
---------------------------------------------------------------------------

(you can use this handy command on both the server and the client side)
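As an aside, the 8-byte Classless Static Route payload in that dump is consistent with a single /24 route: RFC 3442 packs each route as one prefix-length byte, then only the significant octets of the subnet, then the four router octets. A sketch of that encoding (addresses invented):

import ipaddress

def encode_option_121(routes):
    """Pack (network, gateway) pairs into a DHCP option 121 payload (RFC 3442)."""
    payload = bytearray()
    for network, gateway in routes:
        net = ipaddress.ip_network(network)
        payload.append(net.prefixlen)                         # prefix-length byte
        significant = (net.prefixlen + 7) // 8
        payload += net.network_address.packed[:significant]   # significant subnet octets
        payload += ipaddress.ip_address(gateway).packed       # router octets
    return bytes(payload)

print(len(encode_option_121([("203.0.113.0/24", "192.0.2.1")])))  # 8 bytes: 1 + 3 + 4
print(len(encode_option_121([("0.0.0.0/0", "192.0.2.1"),
                             ("203.0.113.0/24", "192.0.2.1")])))  # 13: the default route adds 1 + 0 + 4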

So, the DHCP server was sending both the Routers (code 3) and the Classless Static Route (code 121) options to the clients. So, why would the client fail to install both routes?

I obtained some help from folks on IRC and they pointed me towards RFC3442:

DHCP Client Behavior

[...]

If the DHCP server returns both a Classless Static Routes option and a Router option, the DHCP client MUST ignore the Router option.

So, clients are supposed to ignore the Routers (code 3) option if they also get a Classless Static Route option. This is very counter-intuitive, but can easily be worked around by just distributing the default gateway route as another classless static route (again with illustrative addresses):

dhcp-option=option:classless-static-route,0.0.0.0/0,192.0.2.1,203.0.113.0/24,192.0.2.1
# ^^ default route first, extra static route second

Obviously this was my first time in my career dealing with this setup and situation. My conclusion is that even old-enough protocols like DHCP can sometimes behave in a counter-intuitive way. Reading RFCs is not always fun, but it can help you understand what's going on.

You can read the original issue in Wikimedia Foundation's Phabricator ticket T202636, including all the back-and-forth work I did. Yes, it is open to the public ;-)

Arturo Borrero González
