
Packaging is hard. Packager-friendly is harder.

Planet Debian - Wed, 14/02/2018 - 12:21pm

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, or CVS repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers the more a software release adheres to their ideal model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSBuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

directhex debian – APEBOX.ORG

Sean Davis: Exo 0.12.0 Stable Release

Planet Ubuntu - Wed, 14/02/2018 - 12:05pm

With full GTK+ 2 and 3 support and numerous enhancements, Exo 0.12.0 provides a solid development base for new and refreshed Xfce applications.

What’s New?

Since this is the first stable release in nearly 2.5 years, I am going to provide a quick summary of the changes since version 0.10.7, released September 13, 2015.

New Features
  • WebBrowser: Added Brave, Google Chrome, and Vivaldi
  • MailReader: Added Geary, dropped Opera Mail (no longer available for Linux)
  • exo-csource: Added a new --output flag to write the generated output to a file
  • exo-helper: Added a new --query flag to determine the preferred application
  • Replaced non-standard gnome-* icons
  • Replaced non-existent “missing-image” icon
  • Build requirements were updated. Exo now requires GTK+ 2.24, GTK+ 3.22, GLib 2.42, libxfce4ui 4.12, and libxfce4util 4.12. Building GTK+ 3 libraries is not optional.
  • Default debug setting is now “yes” instead of “full”.
  • Added missing per-release API indices
  • Resolved undocumented symbols (100% symbol coverage)
  • Updated project documentation (HACKING, README, THANKS)
Release Notes
  • The full release notes can be found here.
  • The full change log can be found here.

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.12.0 from the link below.

  • SHA-256: 64b88271a37d0ec7dca062c7bc61ca323116f7855092ac39698c421a2f30a18f
  • SHA-1: 364a9aaa1724b99fe33f46b93969d98e990e9a1f
  • MD5: 724afcca224f5fb22b510926d2740e52
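To check a download against the published SHA-256 sum, something like the following works. Note the tarball filename here is an assumption based on the version number, not taken from the announcement:

```shell
# Verify a downloaded Exo tarball against the published SHA-256 checksum.
# The filename is an assumption; adjust it to match the file you downloaded.
verify_exo_tarball() {
  echo "64b88271a37d0ec7dca062c7bc61ca323116f7855092ac39698c421a2f30a18f  exo-0.12.0.tar.bz2" \
    | sha256sum -c -
}
```

Run `verify_exo_tarball` in the directory containing the download; `sha256sum -c` prints "OK" on a match and fails otherwise.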

David Tomaschik: Preparing for Penetration Testing with Kali Linux

Planet Ubuntu - Wed, 14/02/2018 - 9:00am
The Penetration Testing with Kali Linux (PWK) course is one of the most popular information security courses, culminating in a hands-on exam for the Offensive Security Certified Professional certification. It provides a hands-on learning experience for those looking to get into penetration testing or other areas of offensive security. These are some of the things you might want to know before attempting the PWK class or the OSCP exam.


Using VLC to stream bittorrent sources

Planet Debian - Wed, 14/02/2018 - 8:00am

A few days ago, a new major version of VLC was announced, and I decided to check whether it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. The release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing back to a Torrentfreak story ("Open Source Giant VLC Mulls BitTorrent Streaming Support") about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were any bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch, but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin adding bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:


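A hypothetical invocation (the archive item URL is a placeholder, not a real item, and the actual command may differ) would pass a torrent URL straight to VLC:

```shell
# Hypothetical example: stream a torrent hosted on The Internet Archive
# directly in VLC. SOME_ITEM is a placeholder, not a real archive item.
stream_torrent() {
  vlc "https://archive.org/download/SOME_ITEM/SOME_ITEM_archive.torrent"
}
```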
The plugin is supposed to handle magnet links too, but since The Internet Archive does not provide magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and I hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Petter Reinholdtsen - Entries tagged english

BH 1.66.0-1

Planet Debian - Wed, 14/02/2018 - 2:37am

A new release of the BH package arrived on CRAN a little earlier: now at release 1.66.0-1. BH provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the Boost 1.66.0 version released recently, and also adds one exciting new library: Boost compute which provides a C++ interface to multi-core CPU and GPGPU computing platforms based on OpenCL.

Besides the usual small patches we need to make (i.e., we cannot call abort() etc. to satisfy CRAN Policy), we made one significant change in response to a relatively recent CRAN Policy change: compiler diagnostics are no longer suppressed for clang and g++. This may make builds somewhat noisy, so we may all want to keep our ~/.R/Makevars finely tuned to suppress a bunch of warnings...
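For instance, a minimal ~/.R/Makevars sketch (the specific flags are illustrative, not a recommendation from the release notes):

```make
## ~/.R/Makevars: silence a couple of common warning classes
## emitted when compiling against Boost headers
CXXFLAGS = -O2 -Wno-unused-local-typedefs -Wno-deprecated-declarations
```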

Changes in version 1.66.0-1 (2018-02-12)
  • Upgraded to Boost 1.66.0 (plus the few local tweaks)

  • Added Boost compute (as requested in #16)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel Thinking inside the box

Is it an upgrade, or a sidegrade?

Planet Debian - Tue, 13/02/2018 - 8:43pm

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire One. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — And my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she surely brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember starting to have keyboard issues. Time for a change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — In July 2013, I bought the successor to the Acer Aspire One: An 10.5" Acer Aspire One Nowadays, the name that used to identify just the smallest of the Acer Family brethen covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that wasn't a burden to me. Also, very important: a computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket, and that, bought new, cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open), and some keyboard issues... I had started thinking about replacing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to recognize the hardware was a sort-of-turnoff...

This weekend we had an incident with spilled water. After opening the computer and carefully ensuring it was dry, it would not turn on. I waited an hour or two: no change. A clear sign a new computer was needed ☹

I went to a nearby store, looked at the offers... and, in part due to the attitude of the sales guy, I decided not to buy (installing Linux will void any warranty? WTF‽ In 2018‽). Came back home, and... my Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" – I am replacing a five-year-old computer with another five-year-old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that is, meant for a higher user segment), and much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people AFAICT even now. I was hoping, just for the sake of it, to find an X230t (foldable and usable as a tablet)... but I won't put too much time into looking for one.

The Thinkpad is 12", which I expect will still fit in my smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have 4GB I bought for the Acer I will probably be able to carry on to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated it's the best in the laptop world (although it's not the classic one anymore) I'm just considering to pop ~US$100 more and buy an SSD drive, and... Well, lets see how much does this new sidegrade make me smile!

gwolf Gunnar Wolf

Eric Hammond: Replacing EC2 On-Demand Instances With New Spot Instances

Planet Ubuntu - Tue, 13/02/2018 - 9:00am

with an SMS text warning two minutes before interruption, using CloudWatch Events Rules And SNS

The EC2 Spot instance marketplace has had a number of enhancements in the last couple months that have made it more attractive for more use cases. Improvements include:

  • You can run an instance like you normally do for on-demand instances and add one option to make it a Spot instance! The instance starts up immediately if your bid price is sufficient given spot market conditions, and will generally cost much less than on-demand.

  • Spot price volatility has been significantly reduced. Spot prices are now based on long-term trends in supply and demand instead of hour-to-hour bidding wars. This means that instances are much less likely to be interrupted because of short-term spikes in Spot prices, leading to much longer running instances on average.

  • You no longer have to specify a bid price. The Spot Request will default to the instance type’s on-demand price in that region. This saves looking up pricing information and is a reasonable default if you are using Spot to save money over on-demand.

  • CloudWatch Events can now send a two-minute warning before a Spot instance is interrupted, through email, text, AWS Lambda, and more.

Putting these all together makes it easy to take instances you formerly ran on-demand and add an option to turn them into new Spot instances. They are much less likely to be interrupted than with the old spot market, and you can save a little to a lot in hourly costs, depending on the instance type, region, and availability zone.

Plus, you can get a warning a couple minutes before the instance is interrupted, giving you a chance to save work or launch an alternative. This warning could be handled by code (e.g., AWS Lambda) but this article is going to show how to get the warning by email and by SMS text message to your phone.


You should not run a Spot instance unless you can withstand having the instance stopped for a while from time to time.

Make sure you can easily start a replacement instance if the Spot instance is stopped or terminated. This probably includes regularly storing important data outside of the Spot instance (e.g., S3).

You cannot currently re-start a stopped or hibernated Spot instance manually, though the Spot market may re-start it automatically if you configured it with interruption behavior “stop” (or “hibernate”) and if the Spot price comes back down below your max bid.
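To see whether the Spot market has acted on a persistent request, you can poll its state. A sketch (`$region` and the request id are assumed to have been set elsewhere, as in the commands below):

```shell
# Show the state and status code of a Spot Request
# ($region and the request id argument are assumed to be set by the caller)
spot_request_state() {
  aws ec2 describe-spot-instance-requests \
    --region "$region" \
    --spot-instance-request-ids "$1" \
    --output text \
    --query 'SpotInstanceRequests[*].[State,Status.Code]'
}
```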

If you can live with these conditions and risks, then perhaps give this approach a try.

Start An EC2 Instance With A Spot Request

An aws-cli command to launch an EC2 instance can be turned into a Spot Request by adding a single parameter: --instance-market-options ...

The option parameters we will use do not specify a max bid, so it defaults to the on-demand price for the instance type in the region. We specify “stop” and “persistent” so that the instance will be restarted automatically if it is interrupted temporarily by a rising Spot market price that then comes back down.

Adjust the following options to suit. The important part for this example is the instance market options.

ami_id=ami-c62eaabe # Ubuntu 16.04 LTS Xenial HVM EBS us-west-2 (as of post date)
region=us-west-2
instance_type=t2.small
instance_market_options="MarketType='spot',SpotOptions={InstanceInterruptionBehavior='stop',SpotInstanceType='persistent'}"
instance_name="Temporary Demo $(date +'%Y-%m-%d %H:%M')"

instance_id=$(aws ec2 run-instances \
  --region "$region" \
  --instance-type "$instance_type" \
  --image-id "$ami_id" \
  --instance-market-options "$instance_market_options" \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key="Name",Value="'"$instance_name"'"}]' \
  --output text \
  --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Other options can be added as desired. For example, specify an ssh key for the instance with an option like:

--key $USER

and a user-data script with:

--user-data file:///path/to/

If there is capacity, the instance will launch immediately and be available quickly. It can be used like any other instance that is launched outside of the Spot market. However, this instance has the risk of being stopped, so make sure you are prepared for this.

The next section presents a way to get the early warning before the instance is interrupted.

CloudWatch Events Two-Minute Warning For Spot Interruption

As mentioned above, Amazon recently released a feature where CloudWatch Events will send a two-minute warning before a Spot instance is interrupted. This section shows how to get that warning sent to an email address and/or SMS text to a phone number.

Create an SNS topic to receive Spot instance activity notices:

sns_topic_name=spot-activity
sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn')
echo sns_topic_arn=$sns_topic_arn

Subscribe an email address to the SNS topic:

email_address="YOUR@EMAIL.ADDRESS"
aws sns subscribe \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.

Subscribe an SMS phone number to the SNS topic:

phone_number="+1-999-555-1234" # Your phone number
aws sns subscribe \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --protocol sms \
  --notification-endpoint "$phone_number"

Grant CloudWatch Events permission to post to the SNS topic:

aws sns set-topic-attributes \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
    "Version": "2008-10-17",
    "Id": "cloudwatch-events-publish-to-sns-'"$sns_topic_name"'",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "events.amazonaws.com" },
      "Action": [ "SNS:Publish" ],
      "Resource": "'"$sns_topic_arn"'"
    }]
  }'

Create a CloudWatch Events Rule that filters for Spot instance interruption warnings for this specific instance:

rule_name_interrupted="ec2-spot-interruption-$instance_id"
rule_description_interrupted="EC2 Spot instance $instance_id interrupted"
event_pattern_interrupted='{
  "source": [ "aws.ec2" ],
  "detail-type": [ "EC2 Spot Instance Interruption Warning" ],
  "detail": { "instance-id": [ "'"$instance_id"'" ] }
}'
aws events put-rule \
  --region "$region" \
  --name "$rule_name_interrupted" \
  --description "$rule_description_interrupted" \
  --event-pattern "$event_pattern_interrupted" \
  --state "ENABLED"

Set the target of the CloudWatch Events rule to the SNS topic, using an input transformer to produce sensible text for an English reader:

sns_target_interrupted='[{
  "Id": "target-sns-'"$sns_topic_name"'",
  "Arn": "'"$sns_topic_arn"'",
  "InputTransformer": {
    "InputPathsMap": {
      "title": "$.detail-type",
      "source": "$.source",
      "account": "$.account",
      "time": "$.time",
      "region": "$.region",
      "instance": "$.detail.instance-id",
      "action": "$.detail.instance-action"
    },
    "InputTemplate": "\"<title>: <source> will <action> <instance> ('"$instance_name"') in <region> of <account> at <time>\""
  }
}]'
aws events put-targets \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --targets "$sns_target_interrupted"

Here’s a sample message for the two-minute interruption warning:

“EC2 Spot Instance Interruption Warning: aws.ec2 will stop i-0f47ef25380f78480 (Temporary Demo) in us-west-2 of 121287063412 at 2018-02-11T08:56:26Z”

Bonus: CloudWatch Events Alerts For State Changes

In addition to the two-minute interruption alert, we can send ourselves messages when the instance is actually stopped, when it is started again, and when it is running. This is done with a slightly different CloudWatch Events pattern and input transformer, but follows basically the same approach.

Create a CloudWatch Events Rule that filters for EC2 instance state-change notifications for this specific instance:

rule_name_state="ec2-instance-state-change-$instance_id"
rule_description_state="EC2 instance $instance_id state change"
event_pattern_state='{
  "source": [ "aws.ec2" ],
  "detail-type": [ "EC2 Instance State-change Notification" ],
  "detail": { "instance-id": [ "'"$instance_id"'" ] }
}'
aws events put-rule \
  --region "$region" \
  --name "$rule_name_state" \
  --description "$rule_description_state" \
  --event-pattern "$event_pattern_state" \
  --state "ENABLED"

And again, set the target of the new CloudWatch Events rule to the same SNS topic, using another input transformer:

sns_target_state='[{
  "Id": "target-sns-'"$sns_topic_name"'",
  "Arn": "'"$sns_topic_arn"'",
  "InputTransformer": {
    "InputPathsMap": {
      "title": "$.detail-type",
      "source": "$.source",
      "account": "$.account",
      "time": "$.time",
      "region": "$.region",
      "instance": "$.detail.instance-id",
      "state": "$.detail.state"
    },
    "InputTemplate": "\"<title>: <source> reports <instance> ('"$instance_name"') is now <state> in <region> of <account> as of <time>\""
  }
}]'
aws events put-targets \
  --region "$region" \
  --rule "$rule_name_state" \
  --targets "$sns_target_state"

Here are a couple of sample messages for the instance state-change notification:

“EC2 Instance State-change Notification: aws.ec2 reports i-0f47ef25380f78480 (Temporary Demo) is now stopping in us-west-2 of 121287063412 as of 2018-02-11T08:58:29Z”

“EC2 Instance State-change Notification: aws.ec2 reports i-0f47ef25380f78480 (Temporary Demo) is now stopped in us-west-2 of 121287063412 as of 2018-02-11T08:58:47Z”


If we terminate the EC2 Spot instance, the persistent Spot Request will restart a replacement instance. To terminate it permanently, we need to first cancel the Spot Request:

spot_request_id=$(aws ec2 describe-instances \
  --region "$region" \
  --instance-ids "$instance_id" \
  --output text \
  --query 'Reservations[].Instances[].[SpotInstanceRequestId]')
echo spot_request_id=$spot_request_id

aws ec2 cancel-spot-instance-requests \
  --region "$region" \
  --spot-instance-request-ids "$spot_request_id"

Then terminate the EC2 instance:

aws ec2 terminate-instances \
  --region "$region" \
  --instance-ids "$instance_id" \
  --output text \
  --query 'TerminatingInstances[*].[InstanceId,CurrentState.Name]'

Remove the targets from the CloudWatch Events “interrupted” rule and delete the CloudWatch Events Rule:

target_ids_interrupted=$(aws events list-targets-by-rule \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --output text \
  --query 'Targets[*].[Id]')
echo target_ids_interrupted='"'$target_ids_interrupted'"'

aws events remove-targets \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --ids $target_ids_interrupted

aws events delete-rule \
  --region "$region" \
  --name "$rule_name_interrupted"

Remove the targets from the CloudWatch Events “state” rule (if you created those) and delete the CloudWatch Events Rule:

target_ids_state=$(aws events list-targets-by-rule \
  --region "$region" \
  --rule "$rule_name_state" \
  --output text \
  --query 'Targets[*].[Id]')
echo target_ids_state='"'$target_ids_state'"'

aws events remove-targets \
  --region "$region" \
  --rule "$rule_name_state" \
  --ids $target_ids_state

aws events delete-rule \
  --region "$region" \
  --name "$rule_name_state"

Delete the SNS Topic:

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Original article and comments:

Ubuntu LoCo Council: Three month wrap-up

Planet Ubuntu - Mon, 12/02/2018 - 11:18pm

The new LoCo Council has been a little lax with updating this blog. It’s admittedly taken us a little bit of time to figure out what exactly we’re doing, but we seem to be on our feet now. I’d like to rectify the blog issue by wrapping up the first three months of our reign in a summary post to get us back on track.

December 2017

This was the first month of the new council, and our monthly meeting took place on the 11th. We had a number of LoCo verification applications to review.


Arizona had a strong application, with lots of activity, and an ambitious roadmap for the coming year. Despite their having multiple members in attendance, no questions were necessary to receive a unanimous vote for re-verification.


This one was more difficult. Their application listed the most recent event to be in 2016, although with some digging it looked like they might have had activity in 2017 as well. Unfortunately, they had no members in attendance to answer our questions, so we voted unanimously to provisionally extend their status for two months in order to give them a little more time to get their application in order.


This was probably the quickest re-verification in history. Their application was comprehensive, with an incredible number of activities over the last several years. Their re-verification was unanimously granted.


This one seemed to have an up-to-date application, but none of the supporting documentation seemed up-to-date, and no members were in attendance. We again voted for a two-month extension.


Portugal had several team members in attendance, and their application was impressive. They even split events into those that they organized, and those in which they participated (but did not organize) because the lists were too long to manage. They were unanimously re-verified.


Their application was still in draft form, and they had no one in attendance. We again provisionally extended two months.


Our January meeting took place on the 8th, and our agenda included two LoCos that were provisionally extended in December.


This time, Tunisia had members in attendance. Their application was similar to the one we reviewed in December, but this time they were there to explain that they actually have nearly 300 wiki pages that previous leadership had created, and they were in the midst of pruning them. They’re also working very hard to grow membership. After some discussion, we agreed that they seemed to have a solid plan and good leadership, so we unanimously voted to re-verify.


Once again, Myanmar had no members in attendance, and their application timestamp was the same as when we reviewed in December. As a result, we decided to skip reviewing the application and wait for February.


Our February meeting took place today, on the 12th. Our agenda included two LoCos that were provisionally extended in December.


This time, Myanmar had some members in attendance. However, the timestamp of their application still hadn’t changed since the December review. Fortunately, members were there to answer our questions. They explained that there was activity, but it hadn’t made it to the application. They promised to update the application if we extended for one more month, which we did. This was not unanimous, however.


Their application was no longer in draft form, but we still had a number of questions about their application. In an email to the Council, their leadership requested that we have our discussion in Launchpad since they couldn’t make the meeting. We obliged, and provisionally extended their status for one month.

Daniel Espinosa: Python for GNOME Mobile?

Planet GNOME - Mon, 12/02/2018 - 10:39pm

As you may already know, Python is one of the hottest programming languages out there, with thousands of job offerings, so it makes sense, at least to me, to push this language as an official one for GNOME Mobile applications.

elementary OS is doing a good job of engaging new developers, while using Vala as its official language. For me, Vala is a good candidate for advanced or performance-constrained Mobile applications.

Both languages use GNOME’s technologies through GObject Introspection. So, any new widget designed for responsive Mobile applications will be available to Python and Vala.

An old license issue with GLib static linking on Android could be tackled by Purism, in the form of tools that allow a dynamically loaded version. For free software this is not an issue, but it is for proprietary software.

Providing a high-level programming language, potentially distributed in binary form, could incentivize app development.

On the Vala side, developing software in this highly productive, GObject-focused programming language could boost the development of games and other performance-constrained applications, while you get all the goodness of the GObject and C worlds. Thanks to C, GNOME technologies are available to many other languages, so Rust and C++ could find their own way too.

This is just a proposal, put up for discussion, aimed at Mobile OSs using GObject-based software.

Jeremy Bicha: GNOME Tweaks 3.28 Progress Report 2

Planet GNOME - Mon, 12/02/2018 - 6:39pm

GNOME 3.28 has reached its 3.27.90 milestone. This milestone is important because it means that GNOME is now at API Freeze, Feature Freeze, and UI Freeze. From this point on, GNOME shouldn’t change much, but that’s good because it allows for distros, translators, and documentation writers to prepare for the 3.28 release. It also gives time to ensure that new feature are working correctly and as many important bugs as possible are fixed. GNOME 3.28 will be released in approximately one month.

If you haven’t read my last 3.28 post, please read it now. So what else has changed in Tweaks this release cycle?


As has been widely discussed, Nautilus itself will no longer manage desktop icons in GNOME 3.28. The intention is for this to be handled in a GNOME Shell extension. Therefore, I had to drop the desktop-related tweaks from GNOME Tweaks since the old methods don’t work.

If your Linux distro will be keeping Nautilus 3.26 a bit longer (like Ubuntu), it’s pretty easy for distro maintainers to re-enable the desktop panel so you’ll still get all the other 3.28 features without losing the convenient desktop tweaks.

As part of this change, the Background tweaks have been moved from the Desktop panel to the Appearance panel.


Historically, laptop touchpads had two or three physical hardware buttons just like mice. Nowadays, it’s common for touchpads to have no buttons. At least on Windows, the historical convention was that a click in the bottom left would be treated as a left mouse button click, and a click in the bottom right as a right mouse button click.

Macs are a bit different in handling right click (or secondary click as it’s also called): to get a right-click on a Mac, just click with two fingers simultaneously. You don’t have to worry about whether you are clicking in the bottom right of the touchpad, so things should work a bit better once you get used to it. This style is now even used on some Windows computers.

My understanding is that GNOME used Windows-style “area” mouse-click emulation on most computers, but there was a manually updated list of computers where the Mac style “fingers” mouse-click emulation was used.

In GNOME 3.28, the default is now the Mac style for everyone. For the past few years, you could change the default behavior in the GNOME Tweaks app, but I’ve redesigned the section now to make it easier to use and understand. I assume there will be some people who prefer the old behavior so we want to make it easy for them!

GNOME Tweaks 3.27.90 Mouse Click Emulation

For more screenshots (before and after), see the GitLab issue.


There is one more feature pending for Tweaks 3.28, but it’s incomplete so I’m not going to discuss it here yet. I’ll be sure to link to a blog post about it when it’s ready though.

For more details about what’s changed, see the NEWS file or the commit log.

Alexandre Franke: FOSDEM 2018

Planet GNOME - Dje, 11/02/2018 - 11:24md

The GNOME Foundation advisory board meeting was happening on Friday the 2nd so I travelled to Brussels on Thursday. Years ago, there were two train routes from Strasbourg to Brussels: the direct one used slow trains, through a large part of Belgium and Luxembourg, and took a bit more than 5 hours; the other one meant taking a TGV from Strasbourg to Paris (~2 hours), changing stations (5 minutes walk from Gare de l’Est to Gare du Nord) and taking a Thalys to Brussels (~2 hours). I was pleased to learn that there was now a direct TGV route. Even if the announced time of 3 hours and 50 minutes was only a tiny bit shorter than the indirect one, the comfort of a journey with no connection adds real value. Of course I wasn’t expecting a direct route to go through the Charles de Gaulle airport train station, but well… still better than the alternative! This nice journey was made possible thanks to the financial support of the Foundation.

Then on Saturday I went to attend my 11th FOSDEM (I did ten in a row and skipped last year). The first day was dedicated to the hallway track, spending my time with people I knew and had not met for a while. I also was behind the GNOME booth for a bit, but nothing compared to the likes of Bastian, Benjamin, Carlos, Florian, or Luis. After failing to get in that Matrix talk and that Rust one, as well as that one, I went across the street to watch Shaun’s talk, from which I want to share that nugget of wisdom:

The problem with XML is that it’s XML. -- Shaun McCance

I stayed in the Tool the docs room for the next talk by Jessica Parsons, “Finding a home for docs”. I liked her approach where she doesn’t come up with a single solution that is supposed to solve all cases, but instead studies a few choices with their pros and cons. After lunch I joined a group of friends to cheer for Marc while he presented the best practices he’s pushing into Sympa. We were then conveniently on the right floor to head to Tobias’s talk.

My return trip was in the afternoon on Monday so I joined a group of friends to visit the Atomium in the morning. Quite surprising that it took me so many trips to Brussels before I got to see this place. Built in 1958 for the World’s fair —in a way it’s the Belgian Eiffel tower— as a representation of a magnified iron crystal, the cubic structure doesn’t look one bit dated. To contrast with that, the exhibition it hosts about its creation and the historical context gives out a classic vibe reminiscent of Stark expo in the Marvel movies.

Buying a ticket to the Atomium also grants you access to the Art and Design Atomium Museum. The exhibit there was focused on the use of plastic since its creation. While most of the items we saw qualified as stuff we wouldn’t want to have at home because of their style, it was fun to see pieces from another era, some of which we may have had when we were children.

Same “direct” trip for the way home, concluding that journey uneventfully.

Shaun McCance: Math Tricks for Kilograms and Pounds

Planet GNOME - Sht, 10/02/2018 - 10:50md

I’m going to share some math tricks for converting between kilograms and pounds, something I often deal with when weightlifting. This post is long, but if you stick around to the end, I’ll share the super secret divisibility rule for 11. (Originally posted to Facebook. Reposted on my blog at Jim Campbell’s suggestion.)

If you find a mathematician or engineer who’s good at doing math in their head and ask them how they do it, you’ll find they have a handful of techniques they apply in different situations. Many of these techniques involve turning problems into things we’re already good at. What are we good at? Well, multiplying and dividing by 10 is trivial. It’s just moving a decimal point. And most of us are pretty good at doubling and halving things. So, multiplying and dividing by 10 and 2 are sweet spots.

1kg is approximately 2.2lb. (If you need better precision, use a computer.) So to convert kilograms to pounds, we have to multiply by 2.2. Let’s pull out that distributive law. 2.2x = (2 + 0.2)x = 2x + 0.2x = 2x + (2x)/10. We’ve reduced the problem to multiplication by 2 and division by 10.

“Woah Shaun, stop with the algebra!” OK. Take the kilograms. Double it. Take that double, shift the decimal place. Add the double and the decimal-shifted double. For example, take 150kg, a respectable squat weight. Double it = 300. Divide by 10 = 30. Add these two together = 330lb. (Google will tell you the answer is 330.693lb.)
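That doubling-and-shifting recipe is easy to sanity-check in code. Here is a minimal Python sketch of it (the function name is mine, not part of the original post):

```python
def kg_to_lb(kg):
    """Approximate kg -> lb as 2x + (2x)/10, i.e. multiply by 2.2."""
    double = kg * 2              # double it
    return double + double / 10  # add the decimal-shifted double

print(kg_to_lb(150))  # 330.0 -- Google says the exact answer is 330.693
```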

But what about converting pounds back to kilograms? Do we have to divide by 2.2? That sounds awful. But division by 2.2 is multiplication by 5/11. Does that make it any better? Yes! Really? YES! Division by 11 is awesome, and for some reason, nobody learns it in school.

To divide by 11, first divide by 10. That’s your current total. Divide that by 10 and subtract the result from the current total. That’s your new current total. Divide that by 10 and add it to the current total. That’s your new current total. Continue dividing by 10 and alternating addition and subtraction. Do this until you die of exhaustion, you see the two-digit repeating pattern, or you’re happy with the precision. Recall that 2.2 was only an approximate conversion to begin with, so I stop when all the action is after the decimal point. Round whole numbers are good enough for me.

But we needed to multiply by 5/11, not just 1/11. No worries. To multiply by x/11, instead of starting with 1/10, start with x/10. Luckily, 5/10 is just 1/2. We like dividing by 2.

My body weight is about 190lb. What is that in kilograms? Half of 190 is 95. Divide by 10 for 9.5. Subtract 9.5 from 95 for 85.5. Divide 9.5 by 10 for almost one. Addition is next, so let’s call it 86kg. (Google will tell you the answer is 86.1826kg.)
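The alternating-series trick can be sketched the same way. A rough Python version of multiplying by 5/11 (naming and the number of terms are my choices):

```python
def lb_to_kg(lb, terms=6):
    """Approximate lb -> kg by multiplying by 5/11 via the alternating series
    5/11 = 5/10 - 5/100 + 5/1000 - ... (each term a tenth of the last)."""
    term = lb / 2   # 5/10 of the weight: just halve it
    total = 0.0
    for i in range(terms):
        total += term if i % 2 == 0 else -term
        term /= 10
    return total

print(round(lb_to_kg(190)))  # 86 -- Google says 86.1826 (2.2 was approximate)
```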

So there you go. Quick tricks to help you get approximate conversions between kilograms and pounds in your head.

But what about the super secret divisibility rule for 11 I promised? It follows the same pattern as the technique I gave for dividing by 11. Just do alternating addition and subtraction on the digits of a number. If the result is divisible by 11, so is the original number. Is 1936 divisible by 11? 1-9+3-6 = -11. It sure is.

Eleven is awesome.

*tap* *tap* *tap* testing testing *tap* *tap* *tap*

Still here? Cool. Here’s a bonus tip that wasn’t in my original post. All that stuff about 11? It works the same way for whatever number “11” happens to represent in any radix. Need to divide by 11_8 (decimal 9) in octal? Same division algorithm. Need to check divisibility by 11_16 (decimal 17) in hexadecimal? Same division rule. Looking for some fun weekend math? Take a look at the divisibility rules you know, figure out why they work, and use that to figure out what divisibility rules you have in other bases. Hint: every divisibility rule I can name stems from three basic kinds of tests.
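That generalized rule fits in a few lines of Python too (the function name is mine; it peels digits from the low end, which gives the same alternating sum up to sign):

```python
def divisible_by_radix_plus_one(n, base=10):
    """Alternating digit sum test for divisibility by base + 1:
    11 in decimal, decimal 9 in octal, decimal 17 in hexadecimal."""
    total, sign = 0, 1
    while n:
        total += sign * (n % base)  # take the lowest digit
        n //= base
        sign = -sign
    return total % (base + 1) == 0

print(divisible_by_radix_plus_one(1936))        # True: 1936 = 11 * 176
print(divisible_by_radix_plus_one(27, base=8))  # True: 27 = 9 * 3, i.e. 33_8
```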

David Tomaschik: Book Review: Red Team by Micah Zenko

Planet Ubuntu - Sht, 10/02/2018 - 9:00pd

Red Team: How to Succeed By Thinking Like the Enemy by Micah Zenko focuses on the role that red teaming plays in a variety of institutions, ranging from the Department of Defense to cybersecurity. It’s an excellent book that describes the thought process behind red teaming, when red teaming is a success and when it can be a failure, and the way a red team can best fit into an organization and provide value. If you’re looking for a book that’s highly technical or focused entirely on information security engineering, this book may disappoint. There’s only a single chapter covering the application of red teaming in the information security space (particularly “vulnerability probes” as Zenko refers to many of the tests), but that doesn’t make the rest of the content any less useful – or interesting – to the Red Team practitioner.


Daniel Holbach: Took a year off…

Planet Ubuntu - Pre, 09/02/2018 - 10:59pd

Since many of you reached out to me in the past weeks to find out if I was still travelling the world and how things were going, I thought I’d reconnect with the online world and write a blog post again.

After a bit more than a year, my sabbatical is coming to an end now. I had a lot of time to reflect, recharge batteries, be curious again, travel and make new experiences.

In December ’16 I fled the winter in Germany and went to Ecuador. Curiosity was my guidebook, I slowed down, let nature sink in, enjoyed the food and hospitality of the country, met many simply beautiful people along the way, learned some Spanish, went scuba diving with hammerhead sharks and manta rays, sat on top of mountains, hiked, listened to stories from village elders in Kichwa around the fire, went paragliding, camped in the jungle with Shuar people, befriended a macaw in a hippie village and got inspired by many long conversations.

As always when I’m travelling, my list of recommended next destinations grew and I could easily have gone on. After some weeks, I decided to get back to Berlin though and venture new paths there.

When I first got involved in Ubuntu, I was finishing my studies in Computer Sciences. Last March, thirteen years later, I felt the urge to study again. To open myself up to new challenges, learn entirely new skills, exercise different parts of the brain and make way for a possible new career path in the future. I felt quite uncertain, I wasn’t sure if I was crazy to attempt it,  but I was going to try. I went back to square one and started training as a psychotherapist. This was, and still is, an incredibly exciting step for me and has been a very rewarding experience so far.

I wasn’t just looking for a new intellectual exercise – I was also looking for a way to work more closely with people. Although it’s quite different from what I did up until now, this decision still was very consistent with my beliefs, passions and personality in general. Supporting another human being on their path, helping to bring out their potential and working out new perspectives together have always deeply attracted me.

I had the privilege of learning about and witnessing the work of great therapists, counsellors and trainers in seminars, workshops, books, talks and groups, so I had some guidance which supported me and I chose body psychotherapy as the method I wanted to learn. It is part of the humanistic psychotherapy movement and at its core are (among others) the following ideals:

  • All people are inherently good.
  • People are driven towards self-actualisation: development of creativity, free will, and positive human potential.
  • It is based on present-tense experience as the main reference point.
  • It encourages self-awareness and mindfulness.
  • Wikipedia quotes an article, which describes the benefits as having a "crucial opportunity to lead our troubled culture back to its own healthy path. More than any other therapy, Humanistic-Existential therapy models democracy. It imposes ideologies of others upon the client less than other therapeutic practices. Freedom to choose is maximized."

If you know me just a little bit you can probably tell that this all very much resonated with me. In a way, it’s what led me to the Ubuntu project in 2004 – there is a lot of “humanity towards others” and “I am what I am because of who we all are” in there.

Body psychotherapy was also specifically interesting to me, as it offers a very rich set of interventions and techniques, all experience-based and relying on the wisdom of our body. Furthermore it seeks to reconcile the body and mind split our culture so heavily promotes.

Since last March I immersed myself in this new world: took classes, read books, attended a congress and workshops and had quite a bit of self-experience. In November I took the required exams and became “Heilpraktiker für Psychotherapie”. The actual training in body psychotherapy I’m going to start this year in March. As this is going to take several more years, I’m not exactly sure when or how I will start working in this field. While it’s still quite some time off and right now only an option for some time in the future, I know that this process will encourage me to become more mindful, patient and empathic, and a better listener, colleague, partner and friend.

Does this mean, I’m going to leave the tech world? No, absolutely not. My next steps in this domain I’m going to leave to another blog post though.

I feel very privileged having been able to take the time and embark on this adventure and add a new dimension to my coordinate system. All of this wouldn’t have been possible without close people around me who supported and encouraged me. I’m very grateful for this and feel quite lucky.

This has been a very exciting year, a very important experience and I’m very much looking forward to what’s yet to come.

Marcus Lundblad: Entering the “home stretch” for GNOME 3.28

Planet GNOME - Enj, 08/02/2018 - 10:12md
Earlier this week I released GNOME Maps 3.27.90 (even though, just after uploading the tarball, I read an e-mail saying the deadline for release tarballs had been postponed by one week).

This weekend I (like some 8000 others) participated in an exciting FOSDEM with lots of interesting talks, and the week before that I gave a presentation of GNOME Maps, and in particular its public transit functionality, for TrafikLab (the sort of “developer community” driven by the Swedish organization Samtrafiken AB, which coordinates and aggregates data from all public transit operators, both commercial/private and regional/public ones).

One of the larger features landed in 3.27.90, which isn’t visible on the surface, is that Maps now uses some new language features introduced to JS in the ES6 standard, namely classes and “arrow functions”.

So, when it comes to classes, as known from traditional OO languages such as Java or C++: earlier one would typically use prototypes to model object classes, but as of ES6 the language has support for more traditional classes with a method syntax. GJS also gained a new way to declare GObject classes.

So when earlier declaring an extending class would look something like this:

var MyListBoxRow = new Lang.Class({
    Name: 'MyListBoxRow',
    Extends: Gtk.ListBoxRow,
    Template: 'resource:///<app-id>/ui/my-list-box-row.ui',

    myMethod: function(args) {
        // ...
    }
});

this now becomes:

var MyListBoxRow = GObject.registerClass({
    Template: 'resource:///<app-id>/ui/my-list-box-row.ui'
}, class MyListBoxRow extends Gtk.ListBoxRow {

    myMethod(args) {
        // ...
    }
});

and in cases where we don’t need to inherit from GObject (such as when not declaring any signals, i.e. for utility data-bearing classes) we can skip the registering part and it becomes just a simple ES6 class:

var SomeClass = class {
    someMethod(args) {
        // ...
    }
};

We still need the assignment using “var” to export those outside the module in question, but when we gain ES7 support in GJS we should be able to utilize the “export” keyword here instead. Another simplification that should arrive with ES7 is that we’d be able to use a decorator pattern in place of GObject.registerClass, so that it would become something like:

class MyListRow extends Gtk.ListBoxRow

Technically this could be done today using a transpiler step in the build system (using something like Babel). These decorators will pretty much be higher-order functions. But I chose not to do this at this point, since we still use GNU Autotools as our build system and eventually we should switch to Meson.

The second change involves using the “arrow operator” to bind this to anonymous functions (in async callbacks). So instead of something like:

asyncFunctionCall(onObject, (function(arg) {
    doStuffWith(arg);
}).bind(this));

this becomes:

asyncFunctionCall(onObject, (arg) => doStuffWith(arg));

These changes result in a reduction of 284 lines of code, which isn’t too bad for a change that doesn’t actually involve removing or really rewriting any code.

Thanks go to Philip Chimento (and Endless) for bringing these improvements for GJS!

Some other changes since the last release are some visual fixes, tooltips for some of the buttons in the routing sidebar contributed by Vibhanshu Vaibhav, and a fix for a keyboard navigation bug (that I introduced when changing the behavior of the search entry to always activate when starting to type with no other entry active) contributed by Tomasz Miąsko. Thank you!


Jono Bacon: Case Study: Building Product, Community, and, Sustainability at Fractal Audio Systems

Planet Ubuntu - Enj, 08/02/2018 - 9:26md

In musicians circles, the Fractal Audio Systems Axe FX range of products has become one of the most highly regarded product lines. Aside from just being a neat product, what is interesting to me is the relationship they have built with their community and value they have created in the product via sustained software updates.

As a little background, the Axe FX and their other AX8/FX8 floor-board products, are hardware units that replicate in software the characteristics of an analog tube guitar amplifier and speaker cabinets. Now, for years there have been companies (e.g. Line6, IK Multimedia) trying to create a software replication of popular Marshall, Mesa Boogie, Ampeg, Peavey, Fender, and other amp tones, with the idea being that you can spend far less on the software and have a wide range of amps to choose from as well. This not only saves on physical space and dollars, but also simplifies recording these amps as you won’t need to plug in a physical microphone – you just play direct through the software. Sadly, the promise has been largely pretty disappointing. Most generally sound like fizzy, cheap knockoffs.

The Axe FX II

While this may be a little strange to grok for the non-musicians reading this, there isn’t just a tonality assessment to determine if the amp simulator sounds like the real thing; there is a feel element. Tube amps feel different to play. They have tonal characteristics that adjust as you dial in different settings, and one of the tricky elements for amp simulators to solve is that analog tubes adjust as you use them; the tone adjusts in subtle ways depending on what you play, how you play it, which power supply you are using, how you dial in the amp, and more.

The Axe FX changed much of this. While many saw it initially as just another amp simulator, it has evolved to a point where in A/B testing it is virtually indistinguishable from the amps it is modelling tonally, and the feel is very much there too. This is why bands such as Metallica, U2, Periphery, Steve Vai, and others carry them on tour with them: they can accomplish the same tonal and feel results without the big, unreliable, and complex-to-maintain tube amps.

Sustained Software Updates

The reason why this has been such a game changer is that Cliff Chase, founder of Fractal Audio Systems, has taken a borderline obsessive approach to detail in building this amp/speaker modelling and creating a small team to deliver it.

Cliff Chase, head honcho at Fractal Audio Systems (middle).

From a technology perspective, this is interesting for a few reasons.

Firstly, Fractal have been fairly open about how their technology has evolved. They published a whitepaper on their MIMIC technology and they have been fairly open about how this modelling technology has evolved. You can see the release notes, some further technical details, and a collection of technical posts by Cliff on the forum.

What I found particularly interesting here was Fractal have consistently delivered these improvements via repeated firmware updates out to existing devices. As an example, the MIMIC technology I mentioned above was a major breakthrough in their technology and really (no pun intended) amped up the simulation quality, but it was delivered as a free firmware update to existing hardware.

Now, many organizations would have seen such a technologically important and significant product iteration software update as an opportunity to either release a new hardware product or sell a new line of firmware at a cost. Fractal didn’t do this and have stuck to their philosophy that when you buy their hardware, it is “future proofed” with firmware updates for years to come.

This is true. As an example, the Axe FX II was released in May 2011 and has received 20+ firmware updates which have significantly improved the quality of the product.

In a technology culture where companies release new-feature software updates for a limited period of time (often 2 – 3 years) and then move firmly into maintenance/security updates for a stated “product life” (often 4 – 7 years), Fractal Audio Systems are bucking this trend significantly.


This regular stream of firmware updates that bring additional value, not just security/compatibility fixes, is notable for a few reasons.

Firstly, it has significantly expanded the lifespan and market impact of these devices. Musicians and producers can be a curmudgeonly bunch, and it can take a while for a product to take hold. This is particularly true in a world where “purism” of the art of creating and producing music, and the purism of the tools you use, would ordinarily reject any kind of simulated equipment. The Axe FX has become a staple in touring and production rigs because of its constant evolution and improvements.

Tones can be shaped using the Axe Edit desktop client.

Secondly, from a consumer perspective, there is something so satisfying about purchasing a hardware product that consistently improves. Psychologically, we are used to software evolving (in either good or bad directions), but hardware has more of a “cast in stone” psychological impression in many of us. We buy it, it provides a function, and we don’t expect it to change much. In the case of the Fractal Audio Systems hardware, it does change, and this provides that all important goal companies focus on: customer delight.

Thirdly, and most interestingly for me, Fractal Audio Systems have fostered a phenomenally devoted, positive, and supportive community. From a community strategy perspective, they have not done anything particularly special: they have a forum, a wiki, and members of the Fractal Audio Systems team post periodically in the forum. They have the usual social media accounts and they release videos on YouTube. This devotion in the community is not from any community engagement fakery…it is from (a) a solid product, and (b) a company who they feel isn’t bullshitting them.

This latter element, the bullshit factor, is key. When I work with my clients I always emphasize the importance of authenticity in the relationship between a company and their community of users/customers. This doesn’t mean pandering to the community and the critics, it means an honest exchange of ideas and discussion in which the company and the community/users can derive equal levels of value out of the relationship.

In my observation of the Fractal Audio Systems community, they have done just this. Cliff Chase, as the supreme leader at Fractal Audio Systems is revered in the community as a mastermind, a reputation that is rightly earned. He is an active participant with the community, sharing his input both on the musical use of his products as well as the technology that has gone into them. He isn’t a CEO who is propped up on conference stages or bouncing from journalist to journalist merely talking about vision, he is knee-deep, sleeves rolled fully-up, working on improvements that then get rolled out…freely…to an excitable community of users.

This puts the community in a valuable position. They become the logical feedback loop (again, no pun intended) for how well the products and firmware updates are working, and while the community can’t participate in improving the products directly (as they don’t have access to the code or in many cases, the skills to contribute) they get to see the fruits of their feedback in these firmware updates.

This serves two important benefits. Firstly, validation is an enormous force in what we do. Everyone, no matter who you are, needs validation of their input and ideas. When the community share feedback that is then validated by Cliff and co., and then rolled out in a freely available firmware update that benefits everyone, this is a deeply satisfying experience. Secondly, in many communities there is a suspicion about providing value (such as feedback or other technical contributions) to a company if only the company benefits from this (e.g. by selling a new product encompassing that feedback). Given that Fractal Audio Systems pushes out these updates freely, it largely eradicates that issue.

In Conclusion

Everything I have outlined here could be construed as a master plan on behalf of the folks at Fractal Audio Systems. I don’t think this is the case. I don’t believe that when Cliff Chase founded the company he laid all of this out as a grand plan for how to build community and customer engagement.

This goes back to purity. My guess is that Cliff and team just wanted to build a solid product that makes their customers happy and providing this regular stream of updates was the most obvious way to do it. It wouldn’t surprise me if they themselves were surprised by how much goodwill would be generated throughout this process.

This is all paving the way to the next iteration of this journey, with the Axe FX III announced last week. It provides significantly greater horsepower, undoubtedly to usher in the next era of improvements. This is a journey I will be following along with when I get an Axe FX III of my own in March.

The post Case Study: Building Product, Community, and, Sustainability at Fractal Audio Systems appeared first on Jono Bacon.

Stuart Langridge: Sorry Henry

Planet Ubuntu - Enj, 08/02/2018 - 7:34md

I think I found a bug in a Henry Dudeney book.

Dudeney was a really famous puzzle creator in Victorian/Edwardian times. For Americans: Sam Loyd was sort of an American knock-off of Dudeney, except that Loyd stole half his puzzles from other people and HD didn’t. Dudeney got so annoyed by this theft that he eventually ended up comparing Loyd to the Devil, which was tough talk in 1910.

Anyway, he wrote a number of puzzle books, and at least some are available on Project Gutenberg, so well done the PG people. If you like puzzles, maths, or that sort of thinking, then there are a few good collections (and there are nicer-to-read versions at the Internet Archive too). The Canterbury Puzzles is his most famous work, but I’ve been reading Amusements in Mathematics. In there he presents the following puzzle:

81.—THE NINE COUNTERS.

[Illustration: counters arranged as 158 × 23 and 79 × 46]

I have nine counters, each bearing one of the nine digits, 1, 2, 3, 4, 5, 6, 7, 8 and 9. I arranged them on the table in two groups, as shown in the illustration, so as to form two multiplication sums, and found that both sums gave the same product. You will find that 158 multiplied by 23 is 3,634, and that 79 multiplied by 46 is also 3,634. Now, the puzzle I propose is to rearrange the counters so as to get as large a product as possible. What is the best way of placing them? Remember both groups must multiply to the same amount, and there must be three counters multiplied by two in one case, and two multiplied by two counters in the other, just as at present.


In this case a certain amount of mere “trial” is unavoidable. But there are two kinds of “trials”—those that are purely haphazard, and those that are methodical. The true puzzle lover is never satisfied with mere haphazard trials. The reader will find that by just reversing the figures in 23 and 46 (making the multipliers 32 and 64) both products will be 5,056. This is an improvement, but it is not the correct answer. We can get as large a product as 5,568 if we multiply 174 by 32 and 96 by 58, but this solution is not to be found without the exercise of some judgment and patience.

But, you know what? I don’t think he’s right. Now, I appreciate that he probably had to spend hours or days trying out possibilities with a piece of paper and a fountain pen, and I just wrote the following 15 lines of Python in five minutes, but hey, he didn’t have to bear with his government trying to ban encryption, so let’s call it even.

from itertools import permutations

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9]
values = []
for p in permutations(nums, 9):
    one = p[0]*100 + p[1]*10 + p[2]
    two = p[3]*10 + p[4]
    three = p[5]*10 + p[6]
    four = p[7]*10 + p[8]
    if four > three:
        continue  # or we'll see fg*hi and hi*fg as different
    if one*two == three*four:
        expression = "%s*%s = %s*%s = %s" % (
            one, two, three, four, one*two)
        values.append((expression, one*two))
values.sort(key=lambda x: x[1])
print("Solution for 1-9")
print("\n".join([x[0] for x in values]))

The key point here is this: the little programme above indeed recognises his proposed solutions (158*32 = 79*64 = 5056 and 174*32 = 96*58 = 5568) but it also finds two larger ones: 584*12 = 96*73 = 7008 and 532*14 = 98*76 = 7448. Did I miss something about the puzzle? Or am I actually in the rare position of finding an error in a Dudeney book? And all it took was seventy years of computer technology advancement to put me in that position. Maths, eh? Tch.
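For completeness, here’s a tiny check (mine, not from the post) that those two larger products really are valid: both sides of each agree, and the four numbers between them use every digit from 1 to 9 exactly once.

```python
# Verify the two larger solutions: the products match and the four
# numbers between them use each digit 1-9 exactly once.
for a, b, c, d in [(584, 12, 96, 73), (532, 14, 98, 76)]:
    assert a * b == c * d
    digits = "".join(str(n) for n in (a, b, c, d))
    assert sorted(digits) == list("123456789")
    print("%s*%s = %s*%s = %s" % (a, b, c, d, a * b))
```

Running it prints the two solutions, confirming 7008 and 7448 as genuine improvements over Dudeney’s 5568.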

It’s an interesting book. There are lots of money puzzles, in which I have to carefully remember that ha’pennies and farthings are a thing (a farthing is a quarter of a penny), there are 12 pennies in a shilling, and twenty shillings in a pound. There are some rather racist portrayals of comic-opera Chinese characters in a few of the puzzles. And my heart sank when I read a puzzle about husbands and wives crossing a river in a boat, where no man would permit his wife to be in the boat with another man without him, because I assumed that the solution would also say something like “and of course the women cannot be expected to row the boat”. I was then pleasantly surprised to discover that this was not the case; indeed, they were described as probably being capable oarswomen, and it was likely their boat to begin with! Writings from another time. But still as good as any puzzle book today, if not better.

From Henry Dudeney's Amusements in Mathematics, published 1917. We did, Henry, mate. Cheers for the puzzle book.

— Stuart Langridge (@sil) 25 September 2017

Jonathan Riddell: A Decade of Plasma

Planet Ubuntu - Enj, 08/02/2018 - 6:23md

I realised that it’s now a decade of KDE releasing its Plasma desktop.  The KDE 4 release event was in January 2008.  Google were kind enough to give us their office space and smoothies and hot tubs to give some talks and plan a way forward.

The KDE 4 release has gained something of a poor reputation. At the time we still shipped Kubuntu with KDE 3 and made a separate unsupported release for Plasma, but I remember it being perfectly usable, and notable for being the foundation that would keep KDE software alive.  It had been clear for some time that Kicker and the other elements of the KDE 3 desktop were functional but unlikely to gain much going forward.  When Qt 4 was announced back at (I’m pretty sure) Akademy 2004 in Ludwigsburg, it was seen as a chance to bring KDE’s desktop back up to date and leap forward.  It took four long years, and to keep community momentum going we had to release even if we did say it would eat your babies.

Kubuntu at KDE 4 release event

Somewhere along the way it felt like KDE’s desktop lost mindshare, with major distros going with other desktops and the rise of lightweight desktops.  But KDE’s software always had the best technological underpinnings: Qt and then QtQuick, plus the move to modularise kdelibs into many KDE Frameworks.

This week we released Plasma 5.12 LTS and what a fabulous reception we are getting.  The combination of simple and familiar by default but customisable and functional is making many people realise what an offering we now have with Plasma. When we tried Plasma on an ARM laptop recently we realised it used less memory than the “lightweight” Linux desktop that laptop came with pre-installed.  Qt being optimised for embedded use means KDE’s offerings are fast: whether you’re experimenting with Plasma Mobile or using the very latest KDE Slimbook II, it’ll run smooth and fast.

Some quotes from this week:

“Plasma, as tested on KDE neon specifically, is almost perfect” Ask Noah Show

“This is the real deal.. I’m going all in on this.. ” Linux Unplugged

“Become a Plasma Puppy”

Here’s @popey installing @KdeNeon at 30,000 feet. See how excited he looks. How did it turn out?

— Martin Wimpress (@m_wimpress) February 4, 2018

Elite Ubuntu community spod Alan Pope tried to install KDE neon in aeroplane mode (it failed because of a bug which we have since fixed; thanks for the poke).

Anyone up for a Plasma desktop challenge for the next week?

I’m installing @KdeNeon on all my systems for the next week:

Maybe some of you will try it along with me, and send me your updates as you go? Also here’s my current setup:

— Chris Fisher (@ChrisLAS) February 4, 2018

Chris Fisher takes the Plasma Desktop Challenge, can’t wait to find out what he says next week.

On Reddit Plasma 5.12 post:

“KDE plasma is literally worlds ahead of anything I’ve ever seen. It’s one project where I felt I had to donate to let them know I loved it!”
“I’ve switched to Plasma a little over a year ago and have loved it ever since. I’m glad they’re working so hard on it!”
“Yay! Good to see Kickass Desktop Environment get an update!”

Or here’s a random IRC conversation I had today in a LUG channel

<yeehi> Riddell – I adore KDE now!
<yeehi> It is gobsmackingly beautiful
<yeehi> I put in the 12.0 LTS updates yesterday, maybe over a hundered packages, and all the time I was thinking, “Man, I just love those KDE developers!
<yeehi> It is such a pleasure to use and see. Also, I have been finding it to be my most stable GNU+Linux experience

So after a decade of hard work I’m definitely feeling the good vibes this week. Take the Plasma Challenge and be a Plasma Puppy! KDE Plasma is lightweight, functional and rocking your laptop.




Robert Roth: Calculator, System Monitor and games happenings

Planet GNOME - Enj, 08/02/2018 - 12:02pd
We are getting close to the 3.28 release, we are in the freeze, so it's time for a quick summary of what happened this cycle with the projects I occasionally contribute to.

Calculator was the major player this cycle (well, lately, to be more precise), with a quick bug cleanup (both on GNOME Bugzilla and Ubuntu Launchpad), merging of older patches, and creating new bugs by merging old patches. Here are a few of the most relevant changes:
* Meson port got into the calculator repository, thanks Robert Ancell and Niels de Graef for the patches sent, and for the people reporting bugs since that happened, I am trying to keep up with the bugs as they are coming in. Thankfully, Meson is not only faster, but a lot easier to understand and use (and with better reference) for me, so the fixes do not (always) start with me shouting out for help. Please, go ahead and try the meson build and if you find anything to complain about, just do it on the issue tracker (bugzilla, or hopefully gitlab soon).
* If you used the ans variable, be aware that it was replaced with the _ variable (Python users will recognise it), to avoid it being confused with a time-unit abbreviation in some locales. This was quite a failure on my part in handling the issue: it popped up late in the cycle, and I didn't know how to handle it, as fixing it properly would have required a freeze exception, translation updates, etc., something I wasn't ready for (the first one is the hardest). Instead, I chose the way that was easy for me, which meant lots of headache for some people (not being able to use results of previous calculations in the affected locales) and more headache for maintainers in various distributions dealing with the bugs and patches related to that. That is something I need to get better at: not choosing the easy way out and postponing to the next cycle when bugs are found late in the cycle.
* Calculator is resizable again. This is a somewhat controversial move (just like the one to make it non-resizable); hopefully people will forgive me for it. For now it is freely resizable, with the history view (the view showing previous calculations) expanding vertically and the buttons remaining fixed-height. The problem is that both the history and the buttons area expand horizontally, and the buttons expanding horizontally can result in very wide buttons, which is not ideal. Thankfully, Allan Day already has the mockups ready on how the calculator should resize, and Emmanuele Bassi has already built emeus, a constraint-based GTK container, because I haven't found a way of describing Allan's mockups in current GTK+CSS terms.
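To illustrate the ans → _ change in the first point above, here is a toy sketch of the behaviour (entirely my own; Calculator has a real expression parser, and the names history and calculate are invented for this illustration):

```python
# Toy model of the change described above: `_` recalls the previous
# result, the way `ans` used to. Not Calculator's actual code.
history = []

def calculate(expr):
    if history:
        # substitute the previous result for `_` before evaluating
        expr = expr.replace("_", str(history[-1]))
    result = eval(expr)  # eval() only for this sketch; never use it on untrusted input
    history.append(result)
    return result

print(calculate("2+3"))  # 5
print(calculate("_*4"))  # 20
```

The point of using a bare `_` rather than a word like `ans` is exactly the one made above: a single underscore cannot collide with a unit abbreviation in any locale.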

System Monitor is not dead yet, and will not die anytime soon. That was not clear until now, with Usage being in development. We had several discussions with the designers about merging the two into one application, but we agreed that the target audiences are probably different: Usage is for simple use-cases, easy to use (and fairly beautiful, I have to admit), while System Monitor is for monitoring your system and your processes. Usage handles applications, with as few details as possible, e.g. network traffic, disk usage, CPU usage, while System Monitor monitors processes, their statuses, uptimes, cgroups, open files, etc. for advanced users.
On the development front there are only a couple of changes worth mentioning:
* Dark theme support for charts - if you use a dark theme, I'm sure you've already been blinded by the hardcoded white background of the Resources' charts. Thanks to Voldemar Khramtsov it is fixed, please check the implementation with your themes and report bugs. I have experimented with ~15 different themes, and made sure to have theme-based colors for the background and the grid of the charts, but it is really hard to make the non-theme dependent colors of the charts visible on a theme-dependent background, and we might need some more tweaking. Ideas are welcome.
* Multiple-term search in the process list - you can filter the process list by multiple words, separated by ' ' or '|', e.g. "foo bar" or "foo|bar" to show only processes matching foo or bar. There was a discussion about making this search regexp-based, but I didn't see a use-case for it. Let me know if you would use regexp filtering in the process tree, explain why (a real use-case), and I will reconsider the decision taken in the bug.
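The multi-term filtering rule described above can be sketched in a few lines (my own illustration in Python, not System Monitor's actual implementation): terms separated by spaces or '|' are OR-ed together.

```python
# Minimal model of the multi-term filter: a process name matches if it
# contains at least one of the terms, which may be separated by ' ' or '|'.
def matches(name, query):
    terms = query.replace("|", " ").split()
    return any(term in name for term in terms)

procs = ["foobar-daemon", "bash", "barkeeper", "systemd"]
print([p for p in procs if matches(p, "foo bar")])  # ['foobar-daemon', 'barkeeper']
```

Treating '|' as equivalent to a space keeps the two query styles ("foo bar" and "foo|bar") behaving identically, as the post describes.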

Other recent work:
* My first Meson port was swell-foop; as with everything I do, please check it, and if you see anything wrong, just let me know.
* I did some gettext migrations, mostly on games; most of them have already been merged and have already received some fixes (e.g. appdata was not installed, as I thought that removing the APPSTREAM_XML automake rules was part of the migration).
* I finally got to play a bit with flatpak and flatpak-builder, which greatly simplify building and distributing apps. I intend to do some more exercises on the games I maintain, as the games from the old GNOME Games suite (not to be confused with the new GNOME Games application for playing emulator games) are not present on Flathub.
* I rediscovered my old project eMines, written as an elementary minesweeper using py-gtk (almost 7 years ago), just pulled it and it is still working.

Uh-oh, and I almost forgot: I proposed a GSoC idea for modernizing Five or More, aka GNOME Lines, a bit, as that game didn't get the quick make-over most other games received a couple of years ago. Let's see whether it will happen or not.

