
Feed aggregator

Chris Glass: Serving a static blog from a Snap

Planet Ubuntu - Wed, 29/11/2017 - 10:31 AM

Out of curiosity, I decided to try and package this blog as a snap package, and it turns out to be an extremely easy and convenient way to deploy a static blog!


There are several advantages that the snappy packaging format brings to the table as far as application developers are concerned (and I am one, my application in this case being my blog).

Snapcraft makes it very easy to package things; per-application confinement sandboxes your applications/services basically for free; and it comes with a distribution mechanism that takes care of auto-upgrading your snap on any platform.



Since this blog is generated with the excellent "pelican" static blog generator from a bunch of markdown articles and a theme, there isn't a lot to package in the first place :)

A webserver for the container age

A static blog obviously needs to be served by a webserver.

Packaging a "full" traditional webserver like apache2 (what I used before) or nginx is a little outside the scope of what I would have liked to do with my spare time, so I looked around for another way to serve it. What I was looking for:


  • A static files webserver.
  • Able to set headers for cache control and HSTS.
  • Ideally self-contained / statically linked (because snapping the whole thing would be much faster/easier this way)
  • SSL ready. I've had an A+ rating on SSLlabs for years and intend to keep it that way.
  • Easy to configure.

After toying with the idea of writing my own in Rust, I instead settled on an existing project that fits the bill perfectly and is amazingly easy to deploy and configure: Caddy.
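To give an idea of how little configuration that means, a Caddyfile along these lines covers the requirements above (the domain, paths, and header values here are illustrative placeholders, not my exact configuration):

    # Hypothetical Caddyfile for a static blog (pre-2.0 Caddyfile syntax)
    blog.example.com {
        root /var/www/blog    # pelican's generated output directory
        gzip                  # compress responses

        header / {
            # HSTS: tell browsers to always use HTTPS
            Strict-Transport-Security "max-age=31536000; includeSubDomains"
            # let clients cache static content for an hour
            Cache-Control "public, max-age=3600"
        }
    }

TLS certificates are obtained and renewed automatically via Let's Encrypt, which is a big part of why the SSLlabs rating comes essentially for free.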

A little bit of snapcraft magic

Of course, a little bit of code was needed in the snapcraft recipe to make it all happen.

All of the code is available in a GitHub project, and most of the logic can be found in the snapcraft.yaml file.
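For a rough sketch of the shape such a recipe takes (the names, paths, and plugin choices below are assumptions for illustration, not the exact contents of the real file):

    # Illustrative snapcraft.yaml: bundle the pre-built pelican output and a
    # Caddy binary, and run Caddy as a daemon.
    name: my-blog
    version: '0.1'
    summary: A static blog served by Caddy
    description: Pelican-generated HTML bundled with the Caddy webserver.
    grade: stable
    confinement: strict

    apps:
      caddy:
        command: bin/caddy -conf $SNAP/Caddyfile -root $SNAP/www
        daemon: simple
        plugs: [network, network-bind]

    parts:
      site:
        plugin: dump       # just copy files into the snap
        source: .
        organize:
          output: www      # pelican writes its HTML to ./output

The real recipe also has to fetch or build the Caddy binary itself; see the snapcraft.yaml in the repository for how it is actually done.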

Simply copying the Caddyfile and the snap/ subfolder to your existing pelican project should be all you need to get going; then run the following to get a snap package:

    # On an Ubuntu system
    snap install snapcraft
    snapcraft

With your site's FQDN added to the Caddyfile and pushed to production, you can marvel at all the code and configuration you did not have to write to get an A+ rating with SSLlabs :)

Questions? Comments?

As usual, feel free to reach out with any question or comment you may have!

Stephen Michael Kellat: Looking Towards A Retrospective Future

Planet Ubuntu - Wed, 29/11/2017 - 5:40 AM

I wish this was about Ubuntu MATE. It isn't, alas. With the general freak-out again over net neutrality in the United States, not to mention the Internet blackout in Pakistan, it is time to run through some ideas.[1]

The Internet hasn't been healthy for a while. Even with net neutrality rules in the United States, I have my Internet Service Provider neutrally blocking all IPv6 traffic and throttling me. As you can imagine, that now makes an apt update quite a pain. When I have asked my provider, they have said they have no plans to offer IPv6 on residential service. When I raised the point that my employer wants me to verify the ability to work from home in crisis situations, they said I would need to subscribe to "business class" service, and that they would happily terminate my residential service for me if I tried to use a Virtual Private Network.

At this point, my view of the proposed repeal of net neutrality rules in the United States is simple. To steal a line from a former presidential candidate: What difference at this point does it make?[2] I have exactly one broadband provider available to me.[3] Unless I move to HughesNet or maybe something exotic, I have what is generally available.[4]

The Internet, if we can even call it a coherent whole anymore, has been quite stressed over the past few years. After all, a simple hurricane can wipe out Internet companies with their servers and networks based in New York City.[5] In Puerto Rico, mail carriers of the United States Postal Service were the communications lifeline for quite a while until services could come back online.[6] It can be popular on the African continent to simply make Internet service disappear at times to meet the needs of the government of the day.[7] Sometimes bad things simply happen, too.[8]

Now, this is not to say people are not trying to drive forward. I have found concept papers with ideas that are not totally "pie in the sky".[9] Librarians see a world littered with PirateBoxes that are instead called LibraryBoxes.[10] Alphabet's own Project Loon has been field-tested in the skies of Puerto Rico thanks to the grant of a "Special Temporary Authority" by the Federal Communications Commission's Office of Engineering and Technology.[11]

Now, I can imagine life without an Internet. My first e-mail address was tremendously long, as it had a gateway or two in it to get the message to the BBS I dialed into that was tied into FidoNet. I was hunting around for FidoNews and, after reading a recent issue, noticed some names that correlate in an interesting fashion with the Debian & Ubuntu realms. That was a very heartening thing for me to find. With the seeding of apt-offline on at least the Xubuntu installation disc, I know that I would be able to update a Xubuntu installation whenever I actually found access somewhere, even if it was not readily available to me at home. Thankfully, that bit of seeding solved the "chicken and egg" problem: how do you install a tool like that when you lack the very access you need to fetch it?

We can and likely will adapt. We can and likely will overcome. These bits of madness come and go. As it was, I had already started pricing the build-out of a communications hub with a Beverage antenna as well as an AN/FLR-9 Wullenweber array at a minimum. On a local property like a couple acres of farmland, I could probably set this up for just under a quarter million dollars with sufficient backups.[12] One farm was positioned close enough to a physical corridor to the PIT Internet Exchange Point, but that would still be a little over 100 miles to traverse. As long as I could get the permissions, get the cable laid, and find a peer, peering with somebody who uses YYZ as their Internet Exchange Point is oddly closer due to quirks of geography.

Earlier in today's news, it appeared that the Democratic People's Republic of Korea made yet another unauthorized missile launch.[13] This one appears to have been an ICBM that landed offshore from Japan.[14] The DPRK's leader has threatened missile strikes of various sorts over the past year on the United States.[15] A suborbital electromagnetic pulse blast near our Pacific coast, for example, would likely wipe out the main offices of companies ranging from Google to Yahoo to Apple to Amazon to Microsoft in terms of their computers and other electronic hardware.[16]

I'm not really worried right now about the neutrality of internetworking. I want there to still be something carried on it. With the increasingly real threat of an EMP possibly wiping out the USA's tech sector due to one rogue missile, bigger problems exist than mere paid prioritization.[17]

  1. Megan McArdle, "The Internet Had Already Lost Its Neutrality," Bloomberg.com, November 21, 2017; M. Ilyas Khan, "The Politics behind Pakistan's Protests," BBC News, November 26, 2017, sec. Asia.

  2. The candidate in this case is Hillary Clinton. That sentence, often taken out of context, was uttered before the Senate Foreign Relations Committee in 2013.

  3. Sadly the National Broadband Map project was not funded to be continually updated. It would have continued to show that, even though cell phone services are available, those are not meant for use in place of a wired broadband connection. Updates stopped in 2014.

  4. I am not made of gold, but this is an example of an offering on the Iridium constellation.

  5. Sinead Carew, "Hurricane Sandy Disrupts Northeast U.S. Telecom Networks," Reuters, October 30, 2012.

  6. Hugh Bronstein, "U.S. Mail Carriers Emerge as Heroes in Puerto Rico Recovery," Reuters, October 9, 2017.

  7. "Why Has Cameroon Blocked the Internet?," BBC News, February 8, 2017, sec. Africa.

  8. "Marshall Islands' 10-Day Internet Blackout Extended," BBC News, January 9, 2017, sec. News from Elsewhere.

  9. Pekka Abrahamsson et al., "Bringing the Cloud to Rural and Remote Areas - Cloudlet by Cloudlet," arXiv:1605.03622 [cs], May 11, 2016.

  10. Jason Griffey, "LibraryBox: Portable Private Digital Distribution," Make: DIY Projects and Ideas for Makers, January 6, 2014.

  11. Nick Statt, "Alphabet's Project Loon Deploys LTE Balloons in Puerto Rico," The Verge, October 20, 2017.

  12. One property reviewed with a house, two barns, and a total of six acres of land came to $130,000. The rest of the money would be for licensing, equipment, and construction.

  13. "North Korea Fires New Ballistic Missile," BBC News, November 28, 2017, sec. Asia.

  14. "N Korea 'Tested New Long-Range Missile,'" BBC News, November 29, 2017, sec. Asia.

  15. Christine Kim and Phil Stewart, "North Korea Says Tests New ICBM, Can Reach All U.S. Mainland," Reuters, November 29, 2017.

  16. For example: Malia Zimmerman, "Electromagnetic Pulse Attack on Hawaii Would Devastate the State," Fox News, May 12, 2017.

  17. Apparently the last missile test can reach the Pacific coast of the United States. See: Josh Smith, "How North Korea's Latest ICBM Test Stacks Up," Reuters, November 29, 2017.

Riccardo Padovani: A generic introduction to Gitlab CI

Planet Ubuntu - Tue, 28/11/2017 - 10:00 PM

At fleetster we have our own instance of Gitlab and we rely a lot on Gitlab CI. Also our designers and QA guys use (and love) it, thanks to its advanced features.

Gitlab CI is a very powerful Continuous Integration system with a lot of different features, and new features land with every release. It has very rich technical documentation, but it lacks a generic introduction for those who want to use it in an existing setup. A designer or a tester doesn’t need to know how to autoscale it with Kubernetes, or the difference between an image and a service.

Still, they need to know what a pipeline is, and how to see a branch deployed to an environment. In this article, therefore, I will try to cover as many features as possible, highlighting how end users can enjoy them; in the last months I have explained these features to some members of our team, developers included: not everyone knows what Continuous Integration is or has used Gitlab CI in a previous job.

If you want to know why Continuous Integration is important, I suggest reading this article, while for the reasons to use Gitlab CI specifically, I leave the job to Gitlab itself.


Every time a developer changes some code he saves his changes in a commit. He can then push that commit to Gitlab, so other developers can review the code.

Gitlab will also start some work on that commit, if Gitlab CI has been configured. This work is executed by a runner. A runner is basically a server (it can be a lot of different things, even your PC, but we can simplify and call it a server) that executes the instructions listed in the .gitlab-ci.yml file and reports the result back to Gitlab itself, which shows it in its graphical interface.
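As a minimal illustration (the job name and commands here are placeholders, not tied to any particular project), a .gitlab-ci.yml can be as small as a single job whose script the runner executes on every push:

    # .gitlab-ci.yml - one job; the runner executes each script line in order
    test:
      script:
        - echo "Running the test suite..."
        - ./run_tests.sh    # stand-in for your real test command

If any script line exits with a non-zero status, the job (and with it the commit's pipeline) is marked as failed.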

When a developer has finished implementing a new feature or a bugfix (an activity that usually requires multiple commits), they can open a merge request, where other members of the team can comment on the code and on the implementation.

As we will see, designers and testers can (and really should!) join this process too, giving feedback and suggesting improvements, especially thanks to two features of Gitlab CI: environments and artifacts.


Every commit that is pushed to Gitlab generates a pipeline attached to that commit. If multiple commits are pushed together, the pipeline is created only for the last of them. A pipeline is a collection of jobs split into different stages.

All the jobs in the same stage run concurrently (if there are enough runners), and the next stage begins only if all the jobs from the previous stage have finished successfully.

As soon as a job fails, the entire pipeline fails. There is an exception to this, as we will see below: if a job is marked as manual, its failure will not make the pipeline fail.

Stages are just a logical division between batches of jobs, where it doesn’t make sense to execute the next jobs if the previous ones failed. We can have a build stage, where all the jobs that build the application are executed, and a deploy stage, where the built application is deployed. It doesn’t make much sense to deploy something that failed to build, does it?

A job shouldn’t have any dependency on other jobs in the same stage, while it can expect results from jobs of a previous stage.
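A sketch of how stages and jobs could fit together in a .gitlab-ci.yml (job names and commands are placeholders):

    stages:
      - build
      - deploy

    # both build jobs run concurrently, since they share a stage
    build_app:
      stage: build
      script: ./build.sh

    build_docs:
      stage: build
      script: ./build_docs.sh

    # runs only if every job in the build stage succeeded
    deploy_app:
      stage: deploy
      script: ./deploy.sh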

Let’s see how Gitlab shows information about stages and their statuses.


A job is a collection of instructions that a runner has to execute. You can see the output of the job in real time, so developers can understand why it fails.

A job can be automatic, so it starts automatically when a commit is pushed, or manual. A manual job has to be triggered by someone. This can be useful, for example, to automate a deploy while still deploying only when someone manually approves it. There is also a way to limit who can run a job, so that, to continue the example, only trustworthy people can deploy.
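Marking a job as manual is a one-line change in .gitlab-ci.yml, as in this hypothetical snippet:

    deploy_production:
      stage: deploy
      script: ./deploy.sh production    # stand-in for the real deploy command
      when: manual                      # shows a play button instead of running automatically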

A job can also build artifacts that users can download; for example, it can create an APK you can download and test on your device. This way both designers and testers can download an application and try it out without having to ask a developer for help.
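A hypothetical Android job could expose its APK like this (the build command and paths are illustrative):

    build_android:
      stage: build
      script:
        - ./gradlew assembleDebug       # assumed Gradle-based build
      artifacts:
        paths:
          - app/build/outputs/apk/      # everything under this path becomes downloadable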

Other than creating artifacts, a job can deploy an environment, usually reachable via a URL, where users can test the commit.

Job statuses are the same as stage statuses: indeed, stages inherit their status from their jobs.


As we said, a job can create an artifact that users can download and test. It can be anything: an application for Windows, an image generated by a PC, or an APK for Android.

So you are a designer, and the merge request has been assigned to you: you need to validate the implementation of the new design!

But how to do that?

You need to open the merge request, and download the artifact, as shown in the figure.

Every pipeline collects all the artifacts from all the jobs, and every job can have multiple artifacts. When you click the download button, a dropdown appears where you can select which artifact you want. After the review, you can leave a comment on the MR.

You can always download artifacts from pipelines that do not have an open merge request as well ;-)

I am focusing on merge requests because that is usually where testers, designers, and stakeholders in general enter the workflow.

But merge requests are not bound to pipelines: while they integrate nicely with one another, they have no direct relation.


In a similar way, a job can deploy something to an external server, so you can reach it through the merge request itself.

As you can see, the environment has a name and a link. Just clicking the link takes you to a deployed version of your application (provided, of course, that your team has set it up correctly).
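Declaring the environment in the deploy job is what makes the name and link show up; here is a sketch with placeholder names and URL:

    deploy_review:
      stage: deploy
      script: ./deploy_review.sh        # stand-in for the real deploy script
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.review.example.com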

You can also click on the name of the environment, because Gitlab has other cool features for environments, like monitoring.


This was a small introduction to some of the features of Gitlab CI: it is very powerful, and using it in the right way allows the whole team to use just one tool to go from planning to deployment. New features are introduced every month, so keep an eye on the Gitlab blog.

For setting it up, or for more advanced features, take a look at the documentation.

At fleetster we use it not only for running tests, but also for automatic versioning of the software and automatic deploys to testing environments. We have automated other jobs as well (building apps and publishing them on the Play Store, and so on).

Speaking of which, do you want to work in a young and dynamic office with me and a lot of other amazing guys? Take a look at the open positions at fleetster!

Kudos to the Gitlab team (and all the others who help in their free time) for their awesome work!

If you have any questions or feedback about this blog post, please drop me an email or tweet me :-) Feel free to suggest additions, or ways to rephrase paragraphs more clearly (English is not my mother tongue).

Bye for now,

P.S.: if you have found this article helpful and you’d like us to write others, would you mind helping us reach the Ballmer peak and buying me a beer?

Sebastian Kügler: KDE’s Goal: Privacy

Planet Ubuntu - Tue, 28/11/2017 - 8:29 PM

At Akademy 2016, the KDE community started a long-term project to invigorate its development (both technically and organizationally) with more focus. This process of soul-searching has already yielded some very useful results, the most important one so far being agreement on a common community-wide vision:

A world in which everyone has control over their digital life and enjoys freedom and privacy.

This presents a very high-level vision, so a logical follow-up question has been how it influences KDE’s activities and actions in practice. KDE, being a fairly loose community with many separate sub-communities and products, is not an easy target to align to a common goal. A common goal may have very different implications for each of KDE’s products: for an email and groupware client, it may be very straightforward (e.g. support high-end crypto, work very well with privacy-respecting and/or self-hosted services); for others, it may be mostly irrelevant (a natural painting app such as Krita simply doesn’t have a lot of privacy exposure); yet for a product such as Plasma, the implications may be fundamental and varied.

So in the pursuit of common ground and a common goal, we had to concentrate on what unites us. There’s of course Software Freedom, but that is somewhat vague as well, and it’s already entrenched in KDE’s DNA. It’s not a very useful goal since it doesn’t give us something to strive for, but something we maintain anyway. A “good goal” has to be more specific, yet it should have a clear connection to Free Software, since that is probably the single most important thing that unites us. Almost two years ago, I posited that privacy is Free Software’s new milestone, trying to set a new goal post for us to head for. Now the point where these streams join has come, and KDE has chosen privacy as one of its primary goals for the next 3 to 4 years. The full proposal can be read here.

“In 5 years, KDE software enables and promotes privacy”

Privacy, being a vague concept, especially given the diversity in the KDE community, needs some explanation, some operationalization, to make it specific and to know how we can create software that enables privacy. There are three general focus areas we will concentrate on: security, privacy-respecting defaults, and offering the right tools in the first place.


Security

Improving security means improving our processes to make it easier to spot and fix security problems, and avoiding single points of failure in both software and development processes. This entails code review and quick turnaround times for security fixes.

Privacy-respecting defaults

Defaulting to encrypted connections where possible and storing sensitive data in a secure way. The user should be able to expect that KDE software Does The Right Thing and protects his or her data in the best possible way. Surprises should be avoided as much as possible, and reasonable expectations should be met with best effort.

Offering the right tools

KDE prides itself on providing a very wide range of useful software. From a privacy point of view, some functions are more important than others, of course. We want to offer the tools that most users need in a way that allows them to lead their lives privately, so the toolset needs to be comprehensive and cover as many needs as possible. The tools themselves should make it easy and straightforward to achieve privacy. Some examples:

  • An email client allowing encrypted communication
  • Chat and instant messaging with state-of-the-art protocol security
  • Support for online services that can be operated as a private instance, not depending on a third-party provider

Of course, this is only a small part, and the needs of our userbase vary widely.

Onwards from here…

In the past, KDE software has come a long way in providing privacy tools, but the tool-set is neither comprehensive, nor are privacy and its implications widely seen as critical to our success in this area. Setting privacy as a central goal for KDE means that we will put more focus on this topic, leading to improved tools that allow users to increase their level of privacy. Moreover, it will set an example for others to follow and hopefully raise standards across the whole software ecosystem. There is much work to do, and we’re excited to put our shoulders under it and work on it.

Jono Bacon: WeWork to Acquire Meetup? Some Thoughts

Planet Ubuntu - Tue, 28/11/2017 - 7:30 AM

Rumors abound that WeWork is to acquire Meetup for $30 million. I wanted to share a few thoughts here. The caveat: I have no behind-the-scenes knowledge here, these are just some thoughts based on a somewhat cursory knowledge of both organizations. This is also (currently) speculation, so the nature and numbers of an acquisition might turn out to be different.

It is unsurprising that WeWork would explore Meetup as an acquisition. From merely a lead-generation perspective, making it simple for the hundreds of thousands of meetups around the world to host their events at WeWork spaces will undoubtedly have a knock-on effect of people registering as WeWork members, either as individual entrepreneurs or by hiring hosted office space for their startups.

Some of the biggest hurdles for meetups are (a) sourcing sponsorship funds to cover costs, and (b) paying for the very things those funds target, such as food/beverage, AV/equipment, and promotional costs/collateral. WeWork could obviously provide not just space and equipment but also potentially broker sponsorships too. As with all ecosystem-to-ecosystem acquisitions, bridging those ecosystems exposes value (e.g. Facebook and Instagram).

The somewhat surprising element here to me is the $30 million valuation. Meetup used to publish their growth stats, but it seems they are not there any more. The most recent stats I could find (from 2012, on Quora) suggested 11.1 million users and 349,000+ meetups. There is a clear source of revenue here, and while it may be relatively limited in potential growth, I would have expected the revenue projection, brand recognition, and current market lead to be worth more than $30 million.

Mind you, and with the greatest of respect to the wonderful people at Meetup, I feel they have somewhat missed the mark in terms of their potential for innovation. There are all kinds of things they could have done to capitalize on their market position by breaking down the onboarding and lifecycle of a meetup (from new formation to regular events) and optimizing and simplifying every element of this for organizations.

There are all kinds of services models that could have been hooked in here such as partnerships with providers (e.g. food, equipment, merch etc) and partner organizations (e.g. major potential consumers and sponsors of the service), and more. I also think they could have built out the profile elements of their service to glue different online profiles together (e.g. GitHub, LinkedIn, Reddit) to not just source groups, but to become a social platform that doesn’t just connect you to neat groups, but to neat people too.

As I have been saying for a while, there is also a huge missed opportunity in converting the somewhat transitory nature of a meetup (you go along and have a good time, but after the meetup finishes, nothing happens) into a broader and more consistently connected set of engagements between members. Doing this well requires community building and collaboration experience that, I would proffer, most organizers probably don’t have.

All of this seems like a bit of a missed opportunity, but as someone sitting on the outside of the organization, who am I to judge? Running a popular brand and service is a lot of work, and from what I understand, they have a fairly small team. There is only so much you can do.

My suspicion is that Meetup were shopping around a little for a sale and prospective buyers were primarily interested in their brand potential and integration (with an expected limited cap on revenues). As such, $30 million might make sense, and would strike me as a steal for WeWork.

Either way, congratulations to WeWork and Meetup on their future partnership.

The post WeWork to Acquire Meetup? Some Thoughts appeared first on Jono Bacon.

